Hello! You can think of a neural network as one large program made up of many small programs, each of which fires in response to its own signal. Each of these small programs is a neuron, and a neural network is a network of such interconnected neurons. The network does not learn on its own: a programmer trains it.
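To make the "small program" idea concrete, here is a minimal sketch of a single artificial neuron in Python: a weighted sum of its inputs plus a bias, passed through a sigmoid activation. All the numbers below are invented purely for illustration.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term...
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed into (0, 1) by a sigmoid activation.
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical inputs and weights, chosen only for demonstration.
out = neuron([0.5, -1.2], [0.8, 0.3], bias=0.1)
print(out)  # always a value strictly between 0 and 1
```

The output acts as the neuron's "signal": how strongly it fires for this particular input. A real network stacks many such neurons in layers.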
There are several ways to train a network. The classic approach today is supervised learning; let's take a closer look at it.

Imagine we want a system that, given a photograph of an object, assigns it to one of several predefined categories (classes). Say we have five: cat, dog, tree, cloud, and house. To train such a system, we first need to collect a dataset: for example, 200 images per class, for a total of 1000 examples with known classes. At the next stage, a neural network architecture is chosen; picking the right one takes experience, intuition, and a bit of luck. For simplicity, let's treat the network as a "black box" that takes a picture as input and outputs the class of the object in it. At the very beginning, the network can do nothing useful.

Training is an iterative process. One iteration consists of three steps:
1) Randomly select n pictures.
2) Pass them through the neural network.
3) We know the correct answer for each picture and the answer the network actually produced; based on this difference, the learning algorithm adjusts the network's parameters so that it works more accurately.

This iteration is repeated many times. Sooner or later the error between the network's predictions and the correct answers becomes small enough, and the network is considered trained. If that never happens, the architecture, the training algorithm, or the data must be changed.