Inserting 10,000 records in one transaction?
There is an application that generates a certain data structure and, on completion, writes it to the database.
The write happens in a single transaction.
The transaction contains about 10,000 INSERT statements; the data comes to just over 2 MB.
The actual question: is such a transaction perfectly normal, or should a transaction not be used here? Would it be better to write the data to the database as it is generated and, in case of an error, delete everything that was inserted so far?
UPD: On the actual server this transaction completes successfully. The question comes down to the nuances that need to be taken into account when executing a write of this size.
UPD: The tables are InnoDB.
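As a sketch of the pattern being asked about, here is the "one transaction, all-or-nothing" approach in Python. The stdlib sqlite3 module is used purely for illustration (the question is about MySQL/InnoDB, but the DB-API transaction pattern is the same with a MySQL driver); the table and column names are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (a INTEGER, b INTEGER, c INTEGER)")

# ~10,000 generated rows, as in the question
rows = [(i, i * 2, i * 3) for i in range(10_000)]

try:
    # "with conn" opens a transaction: it commits if the block succeeds
    # and rolls back automatically if any insert raises an error.
    with conn:
        conn.executemany(
            "INSERT INTO tbl_name (a, b, c) VALUES (?, ?, ?)", rows
        )
except sqlite3.Error:
    pass  # on error nothing is inserted; no manual cleanup is needed

count = conn.execute("SELECT COUNT(*) FROM tbl_name").fetchone()[0]
print(count)  # 10000
```

This is the main argument for the transaction over "insert as you go, delete on error": the rollback is handled by the database engine, atomically.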
Agent_J
To reiterate:
- 2 MB in a transaction is nothing to worry about if it is the only one running at a given moment; if you have 100 such DML statements running concurrently, then InnoDB needs to be tuned.
Before version 5.6.8 the default combined size of the transaction log was 10 MB; in newer versions it is about 100 MB (Documentation).
Here are the three variables that affect the log files. Change their values only while the server is stopped and after the old log files have been removed (practice on a test instance first):
innodb_log_files_in_group:
The number of log files in the log group. InnoDB rotates through them. A value of 3 is recommended.
innodb_log_file_size:
The size of each log file in the group, in megabytes. Sensible values range from 1M up to the size of the buffer pool defined below. The larger the value, the fewer checkpoints flush data from the buffer pool, which reduces slow disk I/O. But larger logs also mean slower recovery after a crash. The file size limit is the same as for a data file.
innodb_log_buffer_size:
The size of the buffer that InnoDB uses to write the log files to disk. Sensible values range from 1M to half the combined size of the log files. A large log buffer allows large transactions to run without having to write the log to disk before the transaction finishes. So if you have large transactions, increasing the log buffer will reduce slow disk I/O.
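For illustration, a my.cnf fragment with the three variables above might look like this. The sizes here are arbitrary example values, not recommendations; as noted above, change them only with the server stopped and the old ib_logfile* files moved aside.

```ini
# my.cnf — [mysqld] section (example values only)
[mysqld]
innodb_log_files_in_group = 3      # number of redo log files in the group
innodb_log_file_size      = 256M   # size of each redo log file
innodb_log_buffer_size    = 64M    # in-memory log buffer, flushed at commit
```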
bdmalex
INSERT DELAYED only works with MyISAM, MEMORY, and ARCHIVE tables. In version 5.6 it is already deprecated, and in 5.7 it was removed. It was intended for deferred writes from multiple clients, not just one. Its downsides are the lack of a write guarantee and increased server resource consumption: a separate thread was opened for each delayed row, and threads were closed when several of them were merged into a single write to the table. In effect, the delayed inserts were turned into a sequential list of ordinary inserts (figuratively speaking).
Agent_J
1) 2 MB per query is a perfectly acceptable size for InnoDB out of the box.
2) For better performance you can use a single INSERT with multiple VALUES groups:
INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);
The approximate cost proportions of a single insert (from the MySQL manual) show why this helps: the fixed per-statement costs are paid once for the whole batch:
- Sending query to server: (2)
- Parsing query: (2)
- Inserting row: (1 × size of row)
- Inserting indexes: (1 × number of indexes)
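A minimal sketch of batching rows into one multi-row INSERT, again using stdlib sqlite3 for testability (the VALUES (...),(...) syntax is the same in MySQL; the table name comes from the answer above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_name (a INTEGER, b INTEGER, c INTEGER)")

rows = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]

# Build one INSERT ... VALUES (...),(...),(...) with a placeholder group
# per row, so the query is sent and parsed once instead of once per row.
placeholders = ",".join(["(?, ?, ?)"] * len(rows))
flat = [value for row in rows for value in row]
conn.execute(f"INSERT INTO tbl_name (a, b, c) VALUES {placeholders}", flat)

total = conn.execute("SELECT COUNT(*) FROM tbl_name").fetchone()[0]
print(total)  # 3
```

With a real MySQL driver the same idea applies, keeping in mind the server's max_allowed_packet limit on the total statement size.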
A transaction is one atomic unit: either all 10k INSERT statements take effect or none of them do. That is exactly what transactions were created for.
YourChief
Why not increase bulk_insert_buffer_size and insert everything with a single INSERT?
The main thing is that the log is big enough: if the transaction log fills up, everything will fail and you will not be able to save those 10,000 records.
MySQL seems to have an "INSERT DELAYED ..." syntax; couldn't that be used for your task?