Neural networks
Sergey Sokolov, 2020-02-21 17:09:58

Are negative examples necessary when classifying images?

There is an EfficientNet network pretrained on ImageNet.
The task is to fine-tune the network to classify images into 3 target classes.

There is a tutorial example of fine-tuning for "cats / dogs" classification.

But the input will also include images that contain none of the 3 target objects: random images.

Question: when fine-tuning the network, should it be trained on 4 classes, adding images that contain none of the target objects?

Initially, I was given a network trained this way on 4 classes, including a 4th class "NO". The output is 4 probabilities, and the class with the maximum probability is taken. Photo folders were collected for each of the 3 classes, plus a 4th folder of random pictures containing none of the 3 objects.
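The 4-class setup described above reduces to a simple argmax over the network's softmax output. A minimal sketch (the class names are hypothetical placeholders, not from the original network):

```python
# Hypothetical class names; the 4th class "NO" collects "none of the above" images.
CLASSES = ["object_1", "object_2", "object_3", "NO"]

probs = [0.10, 0.20, 0.05, 0.65]  # example softmax output of the 4-class network
label = CLASSES[probs.index(max(probs))]  # take the class with maximum probability
print(label)  # -> NO
```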

I doubt that the "NO" class is really necessary.
What if only 3 classes are used instead: set a threshold, and if no class's confidence exceeds, say, 90%, consider that the picture does not contain any of the target objects?


1 answer
Danil, 2020-02-21
@DanilBaibak

The answer depends on what images the network will actually see in production.
The option where you train on 3 classes and add logic that checks the probabilities (if the probability for every one of the 3 classes is below a threshold, assign the 4th class) works well in practice.
