Why does the SQL query take so long to complete?
Database: https://github.com/x88/i18nGeoNamesDB (1.3 GB; 2,551,294 rows)
On a dedicated server (Core i5 3.8 GHz (4 cores) / 8 GB RAM / 2x500 GB SATA), a LIKE query that searches for a city by name takes almost 3.7 seconds. Is that normal for a database of this size? How can it be sped up? Will indexing help? I need to suggest cities as the user types (autocomplete). Would PostgreSQL be faster for this purpose?
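For illustration, this is roughly the kind of query presumably being run; the table and column names (`geonames`, `name`) are assumptions, not taken from the dump:

```sql
-- Assumed schema: a table `geonames` with a city-name column `name`.
-- A LIKE with a leading wildcard cannot use an ordinary index,
-- so the server has to scan all ~2.5M rows:
SELECT id, name
FROM geonames
WHERE name LIKE '%moscow%'
LIMIT 20;

-- EXPLAIN shows whether an index is used; `type: ALL` means a full table scan:
EXPLAIN
SELECT id, name
FROM geonames
WHERE name LIKE '%moscow%';
```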
Full-text search would probably be a better fit (though I'm not certain).
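If that means MySQL's built-in full-text search, a minimal sketch could look like this (table and column names are assumed; FULLTEXT indexes on InnoDB require MySQL 5.6+, older versions need MyISAM):

```sql
-- Assumed table/column names.
ALTER TABLE geonames ADD FULLTEXT INDEX ft_name (name);

-- Boolean mode with a trailing * gives prefix matching, which suits autocomplete:
SELECT id, name
FROM geonames
WHERE MATCH(name) AGAINST ('mosc*' IN BOOLEAN MODE)
LIMIT 20;
```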
---
Indexes will no doubt help, but I suspect they are already in place; without them the query would take longer than 4 seconds. What indexes are currently defined on the tables?
1. If LIKE is executed with wildcards on both sides (LIKE '%something%'), ordinary indexes don't work. With a wildcard only at the end, they do.
2. I don't know what MySQL offers, but PostgreSQL has the pg_trgm extension for exactly this kind of search (LIKE with wildcards), combined with a GIN/GiST index; see the sketch after this list. I answered a similar question just today. We now have such an index; the table has a bit over 10 million rows and it works fast enough.
3. Throw out the default config right away; the database needs to be tuned to use RAM more aggressively.
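A minimal sketch of the pg_trgm setup mentioned in point 2 (PostgreSQL; the table and column names are assumed for illustration):

```sql
-- pg_trgm + a GIN index lets LIKE/ILIKE with wildcards on both sides use an index.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX geonames_name_trgm_idx
    ON geonames USING gin (name gin_trgm_ops);

-- This kind of query can now be served from the index instead of a full scan:
SELECT id, name
FROM geonames
WHERE name ILIKE '%moscow%'
LIMIT 20;
```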
---
If you need a very fast response, split the city names up by their leading letters, create an XML index, and pull the matching city names into a temporary table, then finish the query using the city key. Postgres has so many extensions that I'm fairly sure a few searches on Yandex will turn up all the answers.
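One possible reading of this suggestion, as a rough sketch rather than the answerer's exact scheme: precompute name prefixes into a separate lookup table so that autocomplete becomes an indexed equality search. All names here are hypothetical.

```sql
-- Hypothetical helper table: every city id paired with prefixes of its name,
-- populated offline with the first 1..10 characters of each name.
CREATE TABLE city_prefixes (
    prefix  VARCHAR(10) NOT NULL,
    city_id INT         NOT NULL,
    PRIMARY KEY (prefix, city_id)
);

-- The autocomplete query then becomes a plain primary-key lookup:
SELECT g.id, g.name
FROM city_prefixes p
JOIN geonames g ON g.id = p.city_id
WHERE p.prefix = 'mos'
LIMIT 20;
```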