What is the learning principle of neural networks based on?
How does the program get closer to a given goal? Does it enumerate algorithms for achieving the result and assemble itself from the most successful ones?
A neural network (both natural and artificial) is essentially a function (yes, Y = F(X), only a very complex one) whose output Y is some behavior of the subject (or program), and whose input X is some incoming information (from the sense organs, for example). The essence of learning is to find the form of F that best adapts the subject/program to the task at hand (for living beings, the task is survival).

Learning proceeds by small iterative steps from less optimal variants of the function F to more optimal ones (and not by enumeration of all possible options). Feeding various values of X to the input of F, the teacher (or natural selection) "encourages" variants in which F produces more accurate values of Y at the output (better corresponding to the task) and "punishes" the worst variants (relative to previous achievements). "Encouragement" and "punishment" happen through gradual strengthening/weakening of those neural connections that were most involved in the last iteration, i.e. made the greatest contribution to the success/failure.

Thus, over the course of many small successive iterations, the "intelligence" (perhaps even without quotes) of the neural network is gradually honed for the task being solved (a simple brute-force enumeration would not give such results even in 100500 years).
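To make the "small steps, encourage/punish" idea concrete, here is a minimal Python sketch (an illustration, not anyone's actual training code): a single artificial neuron whose connection weights are nudged a little on each iteration in the direction that reduces the error. The target function, learning rate, and training data are assumptions chosen for the example.

```python
import random

def neuron(weights, x):
    # Y = F(X): a weighted sum of the inputs plus a bias term.
    return sum(w * xi for w, xi in zip(weights, x)) + weights[-1]

def train(samples, steps=1000, lr=0.01):
    weights = [random.uniform(-1, 1) for _ in range(3)]  # 2 inputs + bias
    for _ in range(steps):
        x, target = random.choice(samples)
        error = neuron(weights, x) - target
        # "Punish" each connection in proportion to its contribution to
        # the error (the same rule "encourages" it when the sign helps):
        for i, xi in enumerate(x):
            weights[i] -= lr * error * xi
        weights[-1] -= lr * error  # the bias acts like an always-on input
    return weights

# Teach the neuron Y = 2*x1 - x2 from a grid of examples:
data = [((x1, x2), 2 * x1 - x2) for x1 in range(-3, 4) for x2 in range(-3, 4)]
print(train(data))  # weights approach [2, -1, 0]
```

Each step changes the weights only slightly, yet after many iterations the function "sharpens" toward the task, which is exactly the point of the answer above.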
You may be confusing "neural networks" and "genetic algorithms".
In neural networks, connections between neurons are formed during training.
We have an error function f(x_1, ..., x_n), where x_1 ... x_n are the network's parameters: its structure and connection weights. (Strictly speaking, the structure changes too, but as a simplification you can treat the absence of a connection as a connection with zero weight.)
Accordingly, the task is to minimize it.
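As a rough illustration of "minimize the error function", here is a sketch of plain gradient descent with numerically estimated partial derivatives. The quadratic f below is an illustrative stand-in for a real network's error function, not something from the answer above.

```python
def f(params):
    x1, x2 = params
    return (x1 - 3) ** 2 + (x2 + 1) ** 2  # minimum at (3, -1)

def grad(f, params, eps=1e-6):
    # Estimate each partial derivative with a small finite difference.
    g = []
    for i in range(len(params)):
        shifted = list(params)
        shifted[i] += eps
        g.append((f(shifted) - f(params)) / eps)
    return g

params = [0.0, 0.0]
for _ in range(200):
    g = grad(f, params)
    params = [p - 0.1 * gi for p, gi in zip(params, g)]
print(params)  # approaches [3, -1]
```

Real neural-network training replaces the finite differences with backpropagation, which computes the same partial derivatives far more efficiently.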
In the case of artificial networks, one of the optimization algorithms is used (the best results, naturally, come from algorithms specific to neural networks), computing the error value on the input data. In the case of natural networks it is somewhat more complicated. At a minimum, there are:
- optimization as such, if memory serves, does take place;
- randomness (at least in the form of mutations);
- non-survival of most of the unsuccessful specimens (a toy sketch of this recipe follows the list).
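A toy Python sketch of that "natural" recipe, combining random mutation with non-survival of the least fit; the one-number "genome", the fitness target, and the population size are all illustrative assumptions.

```python
import random

TARGET = 42.0

def fitness(specimen):
    return -abs(specimen - TARGET)  # closer to the target is fitter

population = [random.uniform(0, 100) for _ in range(20)]
for generation in range(100):
    # Mutation: each offspring is a randomly perturbed copy of a parent.
    offspring = [p + random.gauss(0, 1) for p in population]
    # Selection: only the fittest half of parents + offspring survives.
    survivors = sorted(population + offspring, key=fitness, reverse=True)
    population = survivors[:20]
print(population[0])  # converges toward TARGET
```

This is the core loop of a genetic algorithm, which is why the distinction drawn above between neural networks and genetic algorithms matters: here nothing resembling a gradient is ever computed.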
The learning principle is based on accumulated knowledge: the input and output values from all previous generations/cycles/iterations.