Design a GPS tracker database table
Good afternoon!
Please tell me how best to design a table for storing coordinates.
About 3,000 entries are expected per day from one target (~90,000 per month, ~1 million per year).
There can be many targets. Right now everything is stored in one table of about 3 million records, and everything is painfully slow (indexed by time and by target) on a VPS with 1 GB of RAM. The task is to speed this up.
The only idea I've come up with is storing each target's coordinates in a separate table.
Also, would it make sense to split the date and time into separate fields?
And what is the storage size of the POINT type? I couldn't find it documented anywhere.
A question for the database gurus.
Thank you.
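For reference, a minimal sketch of one common way to lay out such a table in MySQL. All table and column names are illustrative (nothing here comes from the question itself), and two DOUBLE columns (16 bytes per pair) stand in for POINT, which MySQL stores as an SRID plus a WKB value, roughly 25 bytes per point.

    -- Illustrative schema: one row per GPS fix.
    CREATE TABLE track_points (
        id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
        target_id   INT UNSIGNED    NOT NULL,
        recorded_at DATETIME        NOT NULL,  -- one column; it could also be split into DATE + TIME fields
        lat         DOUBLE          NOT NULL,
        lon         DOUBLE          NOT NULL,
        PRIMARY KEY (id, recorded_at),         -- recorded_at included so the table can later be partitioned by date
        KEY idx_target_time (target_id, recorded_at)
    ) ENGINE=InnoDB;

The composite (target_id, recorded_at) index covers the typical query, one target's track over a time range, with a single index lookup.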
Maybe it makes sense to move old/irrelevant data into an archive (another table)?
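A hedged sketch of such an archiving step, reusing the illustrative track_points table from the sketch above (the archive table name and the cutoff date are hypothetical):

    -- Copy rows older than the cutoff into the archive, then delete them
    -- from the hot table. Only InnoDB makes the two steps transactional;
    -- with MyISAM an interrupted run has to be cleaned up by hand.
    CREATE TABLE IF NOT EXISTS track_points_archive LIKE track_points;

    INSERT INTO track_points_archive
        SELECT * FROM track_points WHERE recorded_at < '2013-01-01';

    DELETE FROM track_points WHERE recorded_at < '2013-01-01';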
"everything is very slow"
In addition to partitioning, I can offer something from the 'bad advice' category (I have personally done this, but only with the simpler MyISAM table storage format, where it is easier to restore the database; with InnoDB I am not sure how easy it would be to bring the database back up if an index file is lost).
Split the data and the indexes into different files (or better, different physical partitions, if your hosting plan allows it). Indexes usually take less space, but they are the main write bottleneck: each insert also has to update the index structures (roughly O(log n) extra work per row; in reality it is more complicated, since both the data file and the index file are read), and the database server performs these writes synchronously.
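A sketch of how this separation can be declared per table, assuming the MyISAM engine (the paths are illustrative, InnoDB ignores these per-table options, and the server must have symbolic-link support enabled):

    -- With MyISAM the .MYD data file and the .MYI index file can live
    -- on different mount points.
    CREATE TABLE track_points_myisam (
        target_id   INT UNSIGNED NOT NULL,
        recorded_at DATETIME     NOT NULL,
        lat         DOUBLE       NOT NULL,
        lon         DOUBLE       NOT NULL,
        KEY idx_target_time (target_id, recorded_at)
    ) ENGINE=MyISAM
      DATA DIRECTORY  = '/mnt/bulk-disk/mysql-data'
      INDEX DIRECTORY = '/mnt/fast-or-volatile/mysql-index';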
Placing the index files on faster media (e.g. an SSD) by itself speeds things up noticeably, but as I understand it that is not an option here, so I suggest cheating at the cost of reliability: disable write synchronization for those files. You can do that by putting them on a ramdisk (tmpfs), or by formatting their partition as ext4 and mounting it with the -o noatime,async,data=writeback options.
But before enjoying the noticeable performance gain, I recommend first testing and preparing scripts for bringing the database back up after an abnormal server shutdown (data in tmpfs is lost forever, as is some data on partitions mounted with data=writeback). Since it is only the indexes that sit on the unreliable storage, losing them is not critical; the only question is how long a full rebuild takes.
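The partitioning mentioned at the start of this answer could look like the following sketch: monthly RANGE partitions over the illustrative recorded_at column (MySQL requires the partitioning column to appear in every unique key, which is why the schema sketch above puts it in the primary key):

    -- Monthly partitions; an old month can then be dropped or archived
    -- almost instantly with ALTER TABLE ... DROP PARTITION.
    ALTER TABLE track_points
    PARTITION BY RANGE (TO_DAYS(recorded_at)) (
        PARTITION p2013_01 VALUES LESS THAN (TO_DAYS('2013-02-01')),
        PARTITION p2013_02 VALUES LESS THAN (TO_DAYS('2013-03-01')),
        PARTITION pmax     VALUES LESS THAN MAXVALUE
    );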
At a minimum, work out the size of the database, calculate the memory the SQL server needs, and adjust the VPS configuration accordingly; only after that move on to comprehensive tuning of the server's hardware and software.
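A standard first step for that estimate is the information_schema query below; nothing in it is specific to the question's schema.

    -- Data and index volume per schema, in megabytes.
    SELECT table_schema,
           ROUND(SUM(data_length)  / 1024 / 1024, 1) AS data_mb,
           ROUND(SUM(index_length) / 1024 / 1024, 1) AS index_mb
    FROM information_schema.tables
    GROUP BY table_schema;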
Maybe I'm wrong and the results of the tests I ran are very subjective, BUT:
In my case the data was append-only logging, and simple selects from a MyISAM table were faster than from InnoDB.
As for the datetime format: I try to use TIMESTAMP everywhere. Maybe you should benchmark with it as well; it might turn out faster?)
And one more thing: instead of "archive_date >= '2013-01-01' and archive_date <= '2013-02-01'", the form "archive_date between '2013-01-01' and '2013-02-01'" looks a bit more kosher to me. But I'm no great expert, of course, so I may be mistaken about something.
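Putting both suggestions together on the illustrative track_points table (names are assumptions carried over from the sketches above; TIMESTAMP is stored more compactly than DATETIME but only covers the years 1970-2038, and a table partitioned with TO_DAYS() would need UNIX_TIMESTAMP()-based partitioning instead):

    -- Switch the time column to TIMESTAMP and query a range with BETWEEN.
    ALTER TABLE track_points MODIFY recorded_at TIMESTAMP NOT NULL;

    SELECT target_id, lat, lon, recorded_at
    FROM track_points
    WHERE recorded_at BETWEEN '2013-01-01' AND '2013-02-01';

The two range forms are equivalent; note that BETWEEN is inclusive on both ends, so midnight of 2013-02-01 is matched by both queries.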