Are negative examples necessary when classifying images?
There is an EfficientNet network pretrained on ImageNet.
The task is to retrain (fine-tune) the network to classify images into 3 classes.
There is an example of retraining for "cats vs. dogs" classification.
But the input will also include images that contain none of the 3 target objects: random images.
Question: when retraining the network, should it be trained on 4 classes, adding images that contain none of the desired objects?
Initially I was given a network trained exactly this way on 4 classes, the 4th class being "NO". The output is 4 probabilities, and the maximum of them is taken. Image folders were collected for each of the 3 classes, plus a 4th folder of random pictures containing none of the 3 objects.
I doubt that the "NO" class is actually needed.
What if I take only 3 classes and set a threshold: if no class has a confidence above, say, 90%, consider the picture to contain none of the target objects?
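The threshold idea can be sketched like this (a minimal sketch; the class names and the 90% threshold are placeholders, and `probs` stands for the 3 softmax outputs of the fine-tuned network):

```python
import numpy as np

CLASSES = ["class_a", "class_b", "class_c"]  # placeholder names for the 3 classes
THRESHOLD = 0.90                             # confidence cutoff; tune on a validation set

def classify(probs, threshold=THRESHOLD):
    """Return a class name if the top softmax probability clears the
    threshold, otherwise treat the image as containing none of the targets."""
    probs = np.asarray(probs)
    best = int(np.argmax(probs))
    if probs[best] >= threshold:
        return CLASSES[best]
    return "NO"  # below threshold: treated as a random / out-of-class image

print(classify([0.95, 0.03, 0.02]))  # confident -> "class_a"
print(classify([0.40, 0.35, 0.25]))  # uncertain -> "NO"
```

One caveat with this approach: a softmax over 3 classes can still assign a high probability to an out-of-distribution image, so the threshold should be validated against real negative examples before dropping the "NO" class.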