Database
xxx44yyy, 2019-03-08 08:02:34

Which is faster: checking whether a record already exists in the database, or enforcing a uniqueness constraint at the database level?

I receive a batch of data with three fields: a, b, c. The combination of a and b is unique, i.e. within the table there cannot be another row with the same a and b.
Now the question is how to check that the data already exists. Say I've received 10 thousand rows as JSON. I want to store them in the database, but doing an existence check for every record (i.e. sending 10 thousand queries) doesn't feel right.
The second option is to put a constraint in the database, something like a unique constraint in Postgres over the two columns.
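A minimal sketch of that database-level restriction: a composite UNIQUE constraint over the two columns. It is shown here with SQLite via Python's sqlite3 so it runs standalone; the same `UNIQUE (col_a, col_b)` clause works in Postgres. The table name `measurements` is an assumption, column names are taken from the sample data below.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurements (
        col_a TEXT,
        col_b INTEGER,
        col_c REAL,
        UNIQUE (col_a, col_b)  -- no two rows may share the same (col_a, col_b)
    )
""")

conn.execute("INSERT INTO measurements VALUES ('2010-01-01', 1, 1.1)")
try:
    # Same (col_a, col_b) pair again: the database rejects it by itself,
    # no SELECT-before-INSERT round trip needed.
    conn.execute("INSERT INTO measurements VALUES ('2010-01-01', 1, 99.9)")
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```

With the constraint in place, uniqueness is guaranteed even if several clients insert concurrently, which a check-then-insert in application code cannot promise.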
Formally, from the code's point of view, I think a database-level constraint would be better, because then I could simply ignore the exceptions caused by conflicts: just insert, without checking whether each insert actually went through.
What do you think?
Sample data:
   col_a    | col_b | col_c
------------+-------+--------
 2010-01-01 |     1 | 1.1
 2010-01-02 |     2 | 342.2
 2010-01-03 |     3 | 231.2
 2010-01-04 |     4 | 1.6312
 2010-01-05 |     5 | 0.943


1 answer
DevMan, 2019-03-08
@xxx44yyy

That's kind of exactly what unique indexes were invented for.
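The answer above in a runnable sketch: with the unique index in place, the whole batch goes in as one bulk insert and conflicting rows are simply skipped. SQLite's `INSERT OR IGNORE` is used here so the example runs standalone; in Postgres the equivalent is `INSERT ... ON CONFLICT (col_a, col_b) DO NOTHING`. The table name `measurements` is an assumption.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE measurements (
        col_a TEXT, col_b INTEGER, col_c REAL,
        UNIQUE (col_a, col_b)
    )
""")

rows = [
    ("2010-01-01", 1, 1.1),
    ("2010-01-02", 2, 342.2),
    ("2010-01-01", 1, 999.0),  # duplicate (col_a, col_b): silently skipped
]

# One batched statement instead of thousands of "does it exist?" queries.
# SQLite: INSERT OR IGNORE; Postgres: INSERT ... ON CONFLICT (col_a, col_b) DO NOTHING.
conn.executemany("INSERT OR IGNORE INTO measurements VALUES (?, ?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM measurements").fetchone()[0])  # 2 rows kept
```

Note that with DO NOTHING / OR IGNORE the first row for a given (col_a, col_b) wins; if later rows should overwrite earlier ones, Postgres offers `ON CONFLICT ... DO UPDATE` instead.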
