Neural networks
Vladbara705, 2020-12-23 21:40:14

Neural networks: how do I figure out what I need?

Hello. I started reading about neural networks and a few questions arose.

What role does the number of hidden neurons play?

What role does the number of hidden layers play?

How do I figure out how many hidden neurons and hidden layers I need?

For example, a neural network for determining the brand of a TV. I have TVs:
A) Diagonal: 30 inches, Wi-Fi: true, ... and other parameters.
B) Diagonal: 40 inches, Wi-Fi: false, ...
And other TVs ...
Question: can I feed several parameters of the first TV and several parameters of the second TV into the neural network so that it determines the TV's brand?



2 answers
rPman, 2020-12-24
@rPman

About the TV problem: it is formulated incorrectly and, of course, has no solution. Even if you just need to determine the brand from the parameters, the problem still has no solution, because the number of brands far exceeds the number of combinations of all the parameters; for example, many TVs that are completely identical in their parameters belong to different brands.
--------------
Outside the context of a specific task, I don't think you will find a clear-cut answer. You need to focus on your problem and experiment.
The number of hidden neurons affects the complexity/cost of training (exponentially).
The number of layers affects how difficult a task the network can solve. Loosely speaking, each layer extracts features: the first from the raw data, the second from the output of the first, and so on. In reality everything is more complicated and less clear-cut.
The number of neurons in a layer determines how many variants/classes the network can handle at the corresponding level. For example, if you have ten features but give a layer only five neurons, most likely such a network will not converge, or it will find a dependency through some other set of features small enough to fit into those five. If a layer has more neurons than necessary, the network may take much longer to converge and is also more likely to overfit, i.e. instead of learning the underlying principles it simply memorizes the training sample.
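As a rough illustration of the points above (not part of the original answer), here is a minimal sketch using scikit-learn's MLPClassifier, where the number of hidden layers and the number of neurons per layer are set by a single hyperparameter; the dataset and the specific sizes are made up purely for demonstration:

```python
# Minimal sketch: hidden layer count/width as a hyperparameter.
# The data is synthetic and exists only to make the example runnable.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=10, n_classes=3,
                           n_informative=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (32, 16) means two hidden layers: 32 neurons, then 16 neurons.
# Too few neurons -> the network underfits; too many -> slower training
# and a higher risk of overfitting (memorizing the training sample).
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Changing hidden_layer_sizes and re-measuring test accuracy is exactly the kind of task-specific experimentation meant above.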
The choice of training algorithm greatly affects the ability to learn, and the available algorithms, in turn, depend on what kind of network you build. The result also depends strongly on exactly how you build the training sample and how you normalize the input and output values (a network can work as more than just a classifier).
99% of the work in developing neural networks is working with the data and bringing it into a form convenient for the network. The better this work is done, the fewer resources (time and money) will be spent on training.
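To make the data-preparation point concrete for the TV example, here is a small sketch (my own illustration, with made-up feature names and values) of turning raw parameters into normalized numeric vectors with scikit-learn:

```python
# Sketch of the data-preparation step for the hypothetical TV example:
# numeric features are scaled, booleans stay as 0/1, categories are one-hot encoded.
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Made-up rows: [diagonal_inches, has_wifi, panel_type]
X_raw = np.array([
    [30, 1, "LCD"],
    [40, 0, "OLED"],
    [55, 1, "OLED"],
], dtype=object)

prep = ColumnTransformer([
    ("scale_diagonal", StandardScaler(), [0]),   # normalize the numeric input
    ("wifi_as_is", "passthrough", [1]),          # boolean is already 0/1
    ("onehot_panel", OneHotEncoder(), [2]),      # categorical -> one-hot vector
])

X = prep.fit_transform(X_raw)
print(X)  # each row is now a numeric vector a network can consume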
From the wiki:

Such an interpretation is rather metaphorical or illustrative. In fact, the "features" produced by a complex network are obscure and difficult to interpret, so much so that in practical systems it is not recommended to try to understand or "correct" the contents of these features; instead it is recommended to improve the structure and architecture of the network itself to get better results. Thus, if the system ignores some significant phenomena, this may indicate either that there is not enough training data or that the network structure has flaws and the system cannot develop effective features for those phenomena.

Disadvantages
There are too many tunable network parameters, and it is unclear which settings are needed for a given task and computing budget. The tunable parameters include: the number of layers, the convolution kernel size for each layer, the number of kernels per layer, the kernel stride when processing a layer, whether subsampling (pooling) layers are needed, how much they reduce dimensionality, the dimensionality-reduction function (max, average, etc.), the neuron transfer function, and the presence and parameters of the fully connected network at the output of the convolutional one. All these parameters significantly affect the result, yet are chosen empirically by researchers. There are several well-established and well-performing network configurations, but there are not enough recommendations on how to build a network for a new task.
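Purely to show where the parameters listed above appear in code, here is a tiny PyTorch sketch (not from the quoted text); every specific number in it is arbitrary:

```python
# Sketch of the hyperparameters mentioned above, in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(in_channels=3, out_channels=16,  # number of kernels in this layer
              kernel_size=3, stride=1),        # kernel size and stride
    nn.ReLU(),                                 # neuron transfer function
    nn.MaxPool2d(kernel_size=2),               # subsampling: max, reduce by 2
    nn.Conv2d(16, 32, kernel_size=3, stride=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
    nn.Flatten(),
    nn.Linear(32 * 5 * 5, 10),                 # fully connected head at the output
)

x = torch.randn(1, 3, 28, 28)                  # one fake 28x28 RGB image
print(model(x).shape)                          # torch.Size([1, 10])
```

Every one of these numbers (layer count, kernel sizes, strides, pooling factors, head size) must be picked for the task at hand, which is the difficulty the quoted text describes.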

dmshar, 2020-12-24
@dmshar

A counter-question: what exactly did you "start reading" if you were left with these questions and could not find the answers? I'm just curious which source (I'm not talking about blog posts, of course, or popular magazines for schoolchildren) fails to answer them.
Judging by the TV task, you haven't even figured out, among other things, what neural networks are needed for in the first place.
It is very hard to run a remedial course on a forum for people who are not familiar with the subject. So my advice: find any serious source and read at least the first couple of dozen pages, or better the first hundred or so, and your questions will start to look, to put it mildly, rather naive to you.
