Does the neural network give a different answer for different initial weights?
What can be said about a multilayer perceptron trained with the backpropagation method when the result of running a working example fluctuates quite a lot between runs, and the only thing that changes is the random initial weights? In theory, the weights should converge toward some values after a hundred epochs.
I found the Perceptron Convergence Theorem. I have been analyzing my implementation for two days, trying to work out what the reasons could be.
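To illustrate the effect being asked about (this is not the asker's code, just a minimal sketch with assumed names and hyperparameters such as `train_xor_mlp`, the learning rate, and the hidden-layer size), here is a tiny 2-4-1 perceptron trained on XOR with plain backpropagation. Only the random seed for the initial weights changes between runs, yet the final outputs differ, because gradient descent from different starting points can land in different local minima or converge at different speeds:

```python
import numpy as np

def train_xor_mlp(seed, epochs=5000, lr=0.5, hidden=4):
    """Train a small 2-hidden-unit-layer MLP on XOR; only the seed differs between runs."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Random initial weights -- the only thing that changes from run to run.
    W1 = rng.normal(0.0, 1.0, (2, hidden))
    b1 = np.zeros((1, hidden))
    W2 = rng.normal(0.0, 1.0, (hidden, 1))
    b2 = np.zeros((1, 1))

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass for mean squared error
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)

        # Gradient-descent weight updates
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0, keepdims=True)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0, keepdims=True)

    return out.ravel()

for seed in (0, 1, 2):
    print(f"seed={seed}:", np.round(train_xor_mlp(seed), 3))
```

Running this prints noticeably different outputs for each seed; fixing the seed makes the result reproducible, but it does not guarantee that every initialization reaches the same solution.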