Python
SemenAnnigilator, 2021-09-28 13:24:08

How to convert features and target to the same size?

I have 1000 images, and each one has a class label as a target (one image can have several of them) and 4 values, the coordinates of the bounding box (there can also be several per image).
This is what a dictionary with the target values looks like:

{'category_id': 183, 'bbox': [0.0, 172.0, 184.0, 349.0]}

The problem is that the target has 6512 entries while there are only 1000 images. How do I transform the features so that the dimensions match and the model accepts them? Right now TensorFlow/Keras says that the features and the target must be the same size.
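
A minimal sketch of the usual first step, assuming COCO-style annotation records that also carry an image_id field (an assumption; the snippet above shows only category_id and bbox): group the 6512 annotations by image, so there is exactly one variable-length target per image instead of one flat list.

from collections import defaultdict

annotations = [
    {'image_id': 42, 'category_id': 183, 'bbox': [0.0, 172.0, 184.0, 349.0]},
    # ... the remaining records
]

# One entry per image: a variable-length list of (category, bbox) pairs.
targets_per_image = defaultdict(list)
for ann in annotations:
    targets_per_image[ann['image_id']].append((ann['category_id'], ann['bbox']))

This alone does not make the targets a fixed size, but it makes the mismatch explicit: the real problem is that the number of objects varies per image.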


1 answer
Vindicar, 2021-09-28
@SemenAnnigilator

First of all, what do you want to do? Localize several objects from several categories in an image?
Then you need to think about how you will structure the network's output, because the golden rule for most networks is that the size of the input and the size of the output cannot change on the fly.
If you do not know in advance how many objects you will have, it is better to train the network to output pixel maps. Roughly speaking, it also produces an image (it can be downscaled) and paints the rectangles on it in different colors corresponding to the different categories, as in the sketch below.
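
A rough sketch of that pixel-map idea, assuming the COCO [x, y, width, height] bbox convention and a 480×640 image size (both assumptions, not stated in the question): rasterize each box into a per-pixel class map, so every image gets a target of the same fixed shape regardless of how many objects it contains.

import numpy as np

def boxes_to_mask(boxes, height, width):
    # boxes: list of (category_id, [x, y, w, h]) pairs for one image,
    # assuming the [x, y, width, height] bbox convention.
    mask = np.zeros((height, width), dtype=np.int32)  # 0 = background
    for category_id, (x, y, w, h) in boxes:
        x0, y0 = int(round(x)), int(round(y))
        x1, y1 = int(round(x + w)), int(round(y + h))
        # Clip to the image; later boxes overwrite earlier ones where they overlap.
        mask[max(y0, 0):min(y1, height), max(x0, 0):min(x1, width)] = category_id
    return mask

mask = boxes_to_mask([(183, [0.0, 172.0, 184.0, 349.0])], height=480, width=640)
# Every image now has a target of fixed shape (height, width), so the
# input and output sizes no longer depend on the number of objects.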
