Comma-separated data search?
Good afternoon. Please advise how best to handle this kind of data: there are many fields that store comma-separated values like 1,2,3,4,5,6, and these fields belong to the user profile.
The options I was considering:
1 Split such fields out into separate tables, one per field. But there are a lot of these fields, so this option is out.
2 Create three tables: Group, Fields, Values (an EAV layout). But the third table will mushroom exponentially.
3 Leave everything as is and use Sphinx to search the profiles, only replacing the numbers 1,2,3,4,5 with word tokens (head, body, etc.) so they can be indexed.
Which option is better in terms of performance, search, and convenience of output?
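For comparison, the normalized one-to-many layout from options 1/2 can be sketched like this (a minimal sketch using SQLite in memory; the table and column names `profiles`, `profile_values`, `value_id` are hypothetical, not from the original question):

```python
import sqlite3

# A minimal sketch of the normalized alternative: one row per
# (profile, value) pair instead of a comma-separated column.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE profiles (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE profile_values (
    profile_id INTEGER REFERENCES profiles(id),
    value_id   INTEGER
);
CREATE INDEX idx_pv ON profile_values(value_id, profile_id);
""")
conn.execute("INSERT INTO profiles VALUES (1, 'alice'), (2, 'bob')")
conn.executemany("INSERT INTO profile_values VALUES (?, ?)",
                 [(1, 1), (1, 2), (1, 3), (2, 4), (2, 5)])

# Searching is a plain indexed join -- no string parsing needed.
rows = conn.execute("""
    SELECT p.name FROM profiles p
    JOIN profile_values v ON v.profile_id = p.id
    WHERE v.value_id = 2
""").fetchall()
print(rows)  # -> [('alice',)]
```

The trade-off debated below is exactly this: the join table stays trivially searchable via the index, but it holds one row per value, so it grows much faster than the main table.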
So you think that if the main table holds millions of rows and each row has one such field, one-to-many normalization is the best option? And what if the related list holds at least 10 values per row? That's a table with 10 million rows, a completely unnecessary table whose selects will slow down with every new value added to the main table.
Databases nowadays handle such lists with ease, and in benchmarks this approach generally beats normalization on performance. Postgres has had array types for ages, and in MySQL, FIND_IN_SET plus a small user-defined function will comfortably outdo your normalized layout.
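To make the FIND_IN_SET approach concrete, here is a sketch of option 3's flat layout, again in SQLite (which lacks FIND_IN_SET, so an equivalent user-defined function is registered; the `profiles`/`interests` names are hypothetical):

```python
import sqlite3

def find_in_set(needle, haystack):
    """Return the 1-based position of needle in a CSV string, or 0 if
    absent (mirrors the semantics of MySQL's FIND_IN_SET)."""
    if haystack is None:
        return 0
    items = haystack.split(",")
    return items.index(str(needle)) + 1 if str(needle) in items else 0

conn = sqlite3.connect(":memory:")
conn.create_function("find_in_set", 2, find_in_set)
conn.execute("CREATE TABLE profiles (id INTEGER PRIMARY KEY, interests TEXT)")
conn.executemany("INSERT INTO profiles VALUES (?, ?)",
                 [(1, "1,2,3"), (2, "4,5"), (3, "2,6")])

# Find profiles whose comma-separated list contains the value 2.
rows = conn.execute(
    "SELECT id FROM profiles WHERE find_in_set(2, interests) > 0"
).fetchall()
print([r[0] for r in rows])  # -> [1, 3]
```

Note the usual caveat with this layout: a function call in the WHERE clause cannot use a plain B-tree index, so the search scans the table; it wins on simplicity and storage, not on indexability.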
I'm for option 3. The only catch is that Sphinx cannot join on such fields: you either have to resolve these numbers on the backend and substitute the values there, or add a second column that already stores the id-to-label mappings.
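The backend substitution step mentioned above could look like this (a hypothetical sketch: the label dictionary and function name are assumptions, not part of the original answer):

```python
# Before feeding profiles to the full-text engine, translate the
# numeric ids into word tokens so they become searchable terms.
LABELS = {1: "head", 2: "body", 3: "arms", 4: "legs"}

def to_tokens(csv_field: str) -> str:
    """Turn a stored value like '1,3' into 'head arms' for indexing."""
    return " ".join(LABELS[int(x)] for x in csv_field.split(",") if x)

print(to_tokens("1,3"))  # -> head arms
```

Either this translation runs at indexing time, or its output is persisted in a second column, as suggested above.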