What is feature learning?

Deep learning: when the human brain becomes the model

Sorting images according to whether they show dogs, cats, or people is a challenging task for a computer. What is immediately obvious to a person at a glance, the computer must first work out by analyzing individual image features.

In deep learning, the raw input data, in this case the image, is analyzed layer by layer. In the first layer of an artificial neural network, the system checks, for example, what color each image pixel has; each pixel is processed by its own neuron. The following layer identifies edges and shapes, and the next layer checks for more complex features.
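This layer-by-layer structure can be sketched in a few lines of code. The following PyTorch snippet is only an illustrative assumption: the layer sizes, the 64x64 input resolution, and the three output categories (dog, cat, human) are chosen for the example, not prescribed by any particular system.

```python
# Minimal sketch of a layered image classifier (illustrative, not a recipe).
import torch
import torch.nn as nn

model = nn.Sequential(
    # First layer: operates directly on the raw RGB pixel values.
    nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    # Next layer: combines pixel responses into edges and simple shapes.
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    # Deeper layer: responds to more complex features (ears, snouts, faces).
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    # Final layer: maps the learned features to the three categories.
    nn.Linear(64, 3),  # 0 = dog, 1 = cat, 2 = human
)
```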

The information collected is stored in a flexible algorithm. The results of one layer are passed on to the next layer and modify the algorithm. In this way, through a multitude of operations, the computer is finally able to decide whether an image belongs to the category dog, cat, or human.
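Continuing the sketch above, a single forward pass shows how each layer's output becomes the next layer's input until the final scores yield a decision. The dummy image and the softmax step are illustrative assumptions.

```python
# Passing one image through the network: each layer's output feeds the next,
# and the final scores are turned into a category decision.
image = torch.rand(1, 3, 64, 64)           # a dummy 64x64 RGB image
scores = model(image)                      # forward pass through all layers
probs = torch.softmax(scores, dim=1)       # raw scores -> probabilities
classes = ["dog", "cat", "human"]
print(classes[int(probs.argmax())], probs)
```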

At the beginning there is a training phase in which humans correct assignment errors, and the algorithm is adjusted accordingly. After a short time, the system can improve its image recognition on its own. Because the connections between the neurons of the network change and the weights within the algorithm are adjusted, certain input patterns (cat images in many variants) lead to the same output pattern (cat recognized) with less and less error. The more visual material the system has for learning, the better.
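A hedged sketch of such a training loop, reusing the model from above: the human-assigned labels define the assignment errors, and backpropagation adjusts the weights so that the error shrinks. The random images and labels, the learning rate, and the choice of loss function here are assumptions for illustration only.

```python
# Illustrative training loop: labels supply the "correct" answers, and each
# step nudges the network's weights so its predictions get a little less wrong.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    images = torch.rand(8, 3, 64, 64)        # a batch of (dummy) images
    labels = torch.randint(0, 3, (8,))       # human-assigned categories
    scores = model(images)                   # current predictions
    loss = loss_fn(scores, labels)           # how wrong is the network?
    optimizer.zero_grad()
    loss.backward()                          # propagate the error backwards
    optimizer.step()                         # adjust weights and connections
```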

With deep learning, people cannot always understand which patterns the computer recognized in order to reach its decisions, especially since the system continuously optimizes its decision rules on its own.