PostgreSQL
Sergei Iamskoi, 2016-08-01 10:30:40

How to properly set up a database / table to select from a large amount of data?

There is a PostgreSQL 9.5 DBMS, configured for the current project according to articles from the Internet. I needed to add a table to the database (a list of invalid passports from the FMS website) with a single varchar(10) column and 100 million records.

SELECT * FROM expiredPassports WHERE serial = '1234123456';

takes about 15 seconds on average. I also tried storing the serial as BIGINT; the result was only 1-2 seconds better.
What needs to be configured, and how, so that the select completes in hundredths, or at least tenths, of a second?
The table is mostly read-only and is scheduled to be updated once a week.
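
For concreteness, a minimal sketch of the setup described above; the table and column names come from the query, and the exact DDL is an assumption:

-- Assumed DDL reconstructed from the description (one varchar(10) column, ~100M rows):
CREATE TABLE expiredPassports (
    serial varchar(10)
);

-- If there is no index on "serial", the lookup has to scan all 100 million rows,
-- which would explain the multi-second timings; EXPLAIN ANALYZE shows the plan used:
EXPLAIN ANALYZE SELECT * FROM expiredPassports WHERE serial = '1234123456';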


1 answer
sasha, 2016-08-01
@madmages

Why do you need BIGINT? A regular int should be enough. Make it the primary key; if the speed is still not enough, then shard the table into chunks of 10 million rows.
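
A minimal sketch of what this answer suggests, assuming the table and column names from the question (switching the column to an integer type would be a separate step):

-- Declaring the lookup column a primary key also builds a unique index on it:
ALTER TABLE expiredPassports ADD PRIMARY KEY (serial);

-- Or keep the schema as-is and just add a unique index
-- (the index name expiredpassports_serial_idx is arbitrary):
-- CREATE UNIQUE INDEX expiredpassports_serial_idx ON expiredPassports (serial);

-- Check that the query now uses an index scan instead of a sequential scan:
EXPLAIN ANALYZE SELECT * FROM expiredPassports WHERE serial = '1234123456';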
