Is it worth shuffling the training data of a neural network during training?
I read somewhere in passing that shuffling the input data during training helps a neural network converge faster. Where can I read more about this? Is it worth the effort?
The second question: if it really does help, what is the right way to do it?
The point is that at each epoch you draw a new random permutation of the original dataset, so every mini-batch contains a different mix of samples. This speeds up convergence and helps avoid overfitting. You can read about it in "Deep Learning. Dive into the World of Neural Networks".
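A minimal sketch of per-epoch shuffling with NumPy (the toy dataset, batch size, and seed here are made up for illustration; in practice frameworks such as PyTorch do this for you via `DataLoader(..., shuffle=True)`):

```python
import numpy as np

# Hypothetical toy dataset: 8 samples with 3 features each.
X = np.arange(24, dtype=float).reshape(8, 3)
y = np.arange(8)

rng = np.random.default_rng(0)  # seed chosen arbitrarily
batch_size = 4

for epoch in range(2):
    # Reshuffle the indices at the start of every epoch so each
    # mini-batch gets a different mix of samples.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        xb, yb = X[batch], y[batch]
        # ...one gradient step on (xb, yb) would go here...
```

Shuffling the indices rather than the arrays themselves keeps the features and labels aligned and avoids copying the data twice.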