What is the best way to group data for database output?
There is a table with more than 1 million records, growing by roughly 100k per day.
Each entry is a record of a unique action by a unique host.
I have an idea: once a day, run a script that transfers (calculates, aggregates) the data from the 1-million-row table into a new one, where the number of rows per day is reduced to about 3000+ (per user).
What is the best way to implement this so as not to kill the database?
P.S. The drawback of this approach is that the information is no longer updated in real time, which is a big minus.
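A minimal sketch of such a nightly rollup job in PHP/PDO, assuming hypothetical table names actions_log and actions_daily and columns host, action, created_at (none of these names come from the question); run from cron during off-peak hours, it only touches one day of raw data per pass:

<?php
// Nightly rollup: aggregate yesterday's raw rows into a summary table.
// All table and column names here are assumptions for illustration only.
$pdo = new PDO('mysql:host=localhost;dbname=stats;charset=utf8mb4', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->exec("
    INSERT INTO actions_daily (day, host, action, cnt)
    SELECT DATE(created_at), host, action, COUNT(*)
    FROM actions_log
    WHERE created_at >= CURDATE() - INTERVAL 1 DAY
      AND created_at <  CURDATE()
    GROUP BY DATE(created_at), host, action
");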
In the sense that maybe they are written to some kind of global array, and I can try to get them from there?

Quite possibly. First of all, look at the $_GET and $_POST arrays.
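For completeness, a small sketch of where incoming request data lands in PHP; nothing here is specific to the API being discussed:

<?php
// Query-string parameters (?foo=bar) arrive in $_GET,
// form-encoded POST fields in $_POST.
$foo = $_GET['foo'] ?? null;
$bar = $_POST['bar'] ?? null;

// A raw JSON body is not parsed into $_POST automatically;
// it has to be read from the php://input stream.
$raw  = file_get_contents('php://input');
$data = json_decode($raw, true);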
Go read about REST, then read about Swagger, then write documentation for those wild people, and after that let them figure out for themselves what they are sending you. If everything works on your side, then whatever garbage they send is not your headache.
The error is on your side. You didn't write the API, you are only trying to use it. They transmit what they transmit; the clumsy code is yours.
I already found the problem: their data is transmitted in ASCII encoding, my json_decode processes the data right away, but it only works with UTF-8 and aborts without throwing an exception. Now the problem is that the result of file_get_contents("php://input") is ALREADY in UTF-8, while all the Russian text turns into O75.
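A hedged sketch of guarding json_decode against a body in an unexpected encoding; Windows-1251 as the source encoding is only a guess (mangled Russian text often points to a CP1251 sender), and the real encoding would have to be confirmed with the other side:

<?php
$raw = file_get_contents('php://input');

// json_decode only accepts UTF-8; convert first if the body is not valid UTF-8.
// 'Windows-1251' is an assumption -- substitute whatever the sender actually uses.
if (!mb_check_encoding($raw, 'UTF-8')) {
    $raw = mb_convert_encoding($raw, 'UTF-8', 'Windows-1251');
}

$data = json_decode($raw, true);

// json_decode does not throw by default; it returns null and sets an error code.
if (json_last_error() !== JSON_ERROR_NONE) {
    error_log('JSON decode failed: ' . json_last_error_msg());
}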
Partitioning or sharding should help. Roughly speaking, the task is to create separate tables for different hosts or actions and, depending on the value of the parameter chosen for splitting, write each entry to the corresponding table. Partitioning is built into MySQL but has its limitations. Sharding is done at the script level; its advantage is that you can write not only to different tables but also to different databases, even on different servers (which gives you scaling under high load).
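A minimal sketch of the script-level sharding described above: the target table is derived from the host value. The shard count, the actions_N naming scheme, and the column names are all assumptions:

<?php
// Script-level sharding: derive the target table from the host.
// Four shards and the actions_N naming scheme are illustrative assumptions.
const SHARD_COUNT = 4;

function shardTable(string $host): string {
    return sprintf('actions_%d', crc32($host) % SHARD_COUNT);
}

$pdo = new PDO('mysql:host=localhost;dbname=stats;charset=utf8mb4', 'user', 'pass');

$host   = 'example.com';   // sample values
$action = 'page_view';

// The table name comes from our own shardTable() scheme, not from user input,
// so interpolating it into the SQL string is safe here.
$table = shardTable($host);
$stmt  = $pdo->prepare("INSERT INTO {$table} (host, action, created_at) VALUES (?, ?, NOW())");
$stmt->execute([$host, $action]);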