Neural networks
rPman, 2017-02-04 20:43:09

What training algorithms can be used with a non-standard error function that does not directly use the target outputs of the training data?

I'm just diving into the subject, so I apologize in advance for being a noob.
In my task there is a very large amount of input data but no ready-made correct outputs; at best, approximate ones. I need to classify this data, but not into the abstract classes that an unsupervised classifier network would come up with on its own; I need something else. I can determine the correct classes (from the point of view of my task) with a fairly heavy function that processes a large amount of data and repeatedly calls an existing neural network. This function can serve as the error function, so in the end I need to minimize its return value.
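For illustration, here is a toy sketch of what I mean by minimizing such a function: the heavy criterion is treated as a black box over the network's weights and tuned with a simple evolution-strategy loop. Everything here (`heavy_score`, the weight vector, the loop parameters) is made up; my real criterion is far more expensive.

```python
# Hypothetical sketch: the expensive task-specific criterion is a black
# box over the flattened network weights; no target outputs are needed.
import numpy as np

rng = np.random.default_rng(1)

def heavy_score(w):
    # Stand-in for the real heavy error function; lower is better.
    return float(np.sum((w - 0.5) ** 2))

w = np.zeros(10)              # flattened network weights
sigma = 0.3                   # mutation strength
for step in range(200):
    # Sample a population of perturbed weight vectors.
    candidates = w + sigma * rng.normal(size=(20, w.size))
    scores = [heavy_score(c) for c in candidates]
    best = candidates[int(np.argmin(scores))]
    if heavy_score(best) < heavy_score(w):
        w = best              # elitist step: keep the best candidate

print("final score:", heavy_score(w))
```

Of course, each generation here calls the criterion 20 times, which may be prohibitive if the real function is as heavy as mine.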
As a subtask, I will need a separate network that can 'predict' which data fall under my classification at all; the training data for it can be prepared with an already trained network (the samples for which my function returns an error above a threshold).
If the task were simple and the neural network contained only tens or hundreds of weights, I would use classic multivariate optimization algorithms, computing derivatives and Jacobian matrices; they use the outputs only to estimate the error. But my task is inherently complicated and noisy with erroneous data (it's impossible to separate the noise from the useful data in advance, and there is a LOT of data, hundreds of gigabytes).
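A toy version of that classic approach, for a network this small, might look like the following (the tiny 3-4-1 network and the `custom_error` criterion are made up; `scipy.optimize.minimize` estimates the gradient numerically since no analytic one is supplied):

```python
# Toy sketch: fitting a tiny network with a classic multivariate
# optimizer. The loss looks only at the network's outputs through a
# custom criterion -- no target outputs appear anywhere.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # input data

def forward(w, X):
    """Tiny 3-4-1 network; w is a flat vector of 21 parameters."""
    W1 = w[:12].reshape(3, 4)
    b1 = w[12:16]
    W2 = w[16:20].reshape(4, 1)
    b2 = w[20]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2).ravel()

def custom_error(outputs):
    """Stand-in for the heavy task-specific criterion: here it just
    penalizes outputs far from +/-1, with no labels at all."""
    return np.mean((outputs**2 - 1.0) ** 2)

def loss(w):
    return custom_error(forward(w, X))

w0 = rng.normal(scale=0.1, size=21)
res = minimize(loss, w0, method="L-BFGS-B")  # gradient via finite differences
print("initial loss:", loss(w0), "final loss:", res.fun)
```

This is exactly the part that doesn't scale: finite-difference gradients cost one extra loss evaluation per weight per step, which is hopeless for a large network and a heavy criterion.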
Perhaps I'm missing something, and all learning algorithms use the outputs not for learning itself but only in the error function; in that case I could simply substitute the previous or current 'best' network's output into the error function inside the algorithm.
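To illustrate that point with a minimal sketch (a made-up one-layer linear net, not my real setup): in plain gradient descent, backprop consumes only the derivative of the error with respect to the outputs, so any differentiable criterion can be plugged in with no target outputs at all.

```python
# Minimal sketch: backprop through a one-layer linear net where the
# error is an arbitrary differentiable function of the outputs alone.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 4))
W = rng.normal(scale=0.1, size=(4, 1))

def error(y):            # any criterion over the outputs; no labels
    return np.mean(y**2)

def d_error(y):          # its gradient with respect to the outputs
    return 2 * y / y.size

for _ in range(100):
    y = X @ W                      # forward pass
    grad_W = X.T @ d_error(y)      # backprop consumes only dL/dy
    W -= 0.1 * grad_W              # plain gradient step

print("final error:", error(X @ W))
```

So as long as my heavy criterion (or a differentiable surrogate of it) can supply dL/dy, the standard machinery should apply, if I understand correctly.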
