Image processing
Sergey, 2012-06-09 19:24:17

What factors affect the accuracy of comparing objects using Hu moments?

There is a system for image-based search in a catalog. The current difficulty is that, besides the desired results, the returned sample also contains images that are not similar at all.
The search works as follows. There is a database of reference images (with Hu moments already computed). When an image is uploaded, a Canny edge map is taken, lightly post-processed, and its moments are computed. The results are then simply sorted by the moment comparison score.
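For concreteness, here is a minimal sketch of such a pipeline, assuming OpenCV's Python bindings. The Canny thresholds, the absence of any extra post-processing, and the log-scale comparison are my assumptions, not details from the question.

```python
import cv2
import numpy as np

def hu_signature(image_path, canny_lo=50, canny_hi=150):
    """Compute the 7 Hu moments of an image's Canny edge map.
    The thresholds and the lack of further preprocessing are assumptions;
    the original pipeline only says the edges are 'slightly processed'."""
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, canny_lo, canny_hi)
    m = cv2.moments(edges, binaryImage=True)
    return cv2.HuMoments(m).flatten()

def hu_distance(h1, h2, eps=1e-12):
    """Compare two Hu-moment vectors on a signed log scale (similar in
    spirit to cv2.matchShapes); raw differences are dominated by the
    first moments because the values span many orders of magnitude."""
    a = np.sign(h1) * np.log10(np.abs(h1) + eps)
    b = np.sign(h2) * np.log10(np.abs(h2) + eps)
    return float(np.sum(np.abs(a - b)))

# Ranking the catalog is then just a sort by distance to the query, e.g.:
# results = sorted(catalog, key=lambda item: hu_distance(query_sig, item.signature))
```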
Perhaps moments are not the best way to solve this classification problem (we have ~1000 different objects that fall into 10 shape categories), but I haven't come up with anything else that runs fast enough on a set of 1000 elements (currently, processing and comparison take a little under a second).
I tried applying contour approximation, but unfortunately nothing good came of it: the search is no longer invariant to the object's position in the picture.
While experimenting, I also noticed that if only a clean contour is kept, the system works worse than when there is something inside the contour. Logically, the comparison accuracy should increase, but in practice it got worse.
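For reference, the "clean contour only" variant might be obtained roughly like this. Taking the largest external contour is my guess at how the contour was isolated; OpenCV 4.x is assumed (where findContours returns two values).

```python
import cv2
import numpy as np

def outer_contour_signature(edges):
    """Keep only the largest external contour of a Canny edge map and take
    Hu moments of that boundary alone, discarding interior edges.
    Assumes the edge map contains at least one closed outer contour."""
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    outer = max(contours, key=cv2.contourArea)
    clean = np.zeros_like(edges)
    cv2.drawContours(clean, [outer], -1, 255, 1)  # draw the boundary only
    return cv2.HuMoments(cv2.moments(clean, binaryImage=True)).flatten()
```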
In short, the question is this: how can it be that, when comparing two subjectively completely different objects, the difference in moments is smaller than when comparing subjectively more similar objects? And how can this be fixed?
My guess is that this is affected by noise inside the contour, which differs from photo to photo, although in some cases this bug/feature is completely baffling to me.
