MongoDB. Why does increasing the size of the collection greatly increase the time it takes to insert new documents?
MongoDB 3.2.1 | WiredTiger. Inserting new documents.
There are no complex conditions or selection queries involved.
Collection statistics:
"count" : 149380522
"size" : 5582314593.0
"avgObjSize" : 37
"storageSize" : 12036964352.0
"capped" : false
"totalIndexSize" : 1537122304.0
mongod soft fsize unlimited
mongod hard fsize unlimited
mongod soft cpu unlimited
mongod hard cpu unlimited
mongod soft as unlimited
mongod hard as unlimited
mongod soft nofile 65536
mongod hard nofile 65536
mongod soft nproc 65536
mongod hard nproc 65536
insert query update delete getmore command % dirty % used flushes vsize res qr|qw ar|aw netIn netOut conn time
*0 *0 217 *0 0 139|0 2.0 29.4 0 2.6G 2.3G 0|0 0|1 136k 59k 6 2016-02-15T14:45:49+03:00
*0 *0 175 *0 0 109|0 0.7 29.4 1 2.6G 2.3G 0|0 0|1 108k 50k 6 2016-02-15T14:45:50+03:00
*0 *0 43 *0 0 32|0 0.5 29.4 0 2.6G 2.3G 0|0 0|1 29k 27k 6 2016-02-15T14:45:51+03:00
*0 *0 10 *0 0 7|0 0.6 29.4 0 2.6G 2.3G 0|0 0|1 6k 20k 6 2016-02-15T14:45:52+03:00
*0 *0 28 *0 0 15|0 0.6 29.4 0 2.6G 2.3G 0|0 0|1 14k 23k 6 2016-02-15T14:45:53+03:00
*0 *0 2 *0 0 1|0 0.6 29.4 0 2.6G 2.3G 0|0 0|1 425b 18k 6 2016-02-15T14:45:54+03:00
*0 *0 1 *0 0 2|0 0.6 29.4 0 2.6G 2.3G 0|0 0|1 368b 18k 6 2016-02-15T14:45:55+03:00
*0 *0 17 *0 0 14|0 0.6 29.5 0 2.6G 2.3G 0|0 0|1 11k 21k 6 2016-02-15T14:45:56+03:00
*0 *0 11 *0 0 10|0 0.6 29.5 0 2.6G 2.3G 0|0 0|1 8k 20k 6 2016-02-15T14:45:57+03:00
*0 *0 4 *0 0 2|0 0.6 29.5 0 2.6G 2.3G 0|0 0|1 2k 18k 6 2016-02-15T14:45:58+03:00
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2222 mongod 20 0 4200976 3,692g 6664 S 3,3 23,9 0:50.26 mongod
// mgo: upsert by _id, setting a single small field.
// ("xxxxxxx" is a placeholder here; bson.ObjectIdHex expects a valid
// 24-character hex string and will panic otherwise.)
if _, err := c.UpsertId(bson.ObjectIdHex("xxxxxxx"), bson.M{
	"$set": bson.M{"field": "string"},
}); err != nil {
	panic(err)
}
Perhaps the sharp drop is due to the OS caching system: at first the data goes into RAM, then it hits the limit and is slowly flushed to disk. A new DB means a new file with its own cache block, which is why it works fast in the beginning.
It would be interesting to test on a RAM disk to rule out I/O bottlenecks, given enough memory.
Also, an insert should be faster than an update (when the database is empty, an upsert is effectively just an insert), because if the updated document no longer fits into the old document's space, it has to be relocated.
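A minimal mgo sketch for comparing the two paths on your own data (the connection string, database, and collection names are assumptions): a plain Insert never looks up an existing document, while UpsertId first searches the _id index before writing.

package main

import (
	"log"
	"time"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	// Assumed local server and throwaway database/collection names.
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	c := session.DB("test").C("bench")

	const n = 1000

	// Plain inserts: no lookup of an existing document is needed.
	start := time.Now()
	for i := 0; i < n; i++ {
		if err := c.Insert(bson.M{"_id": bson.NewObjectId(), "field": "string"}); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("insert: %v for %d docs", time.Since(start), n)

	// Upserts: every call first searches the _id index before writing.
	start = time.Now()
	for i := 0; i < n; i++ {
		if _, err := c.UpsertId(bson.NewObjectId(), bson.M{"$set": bson.M{"field": "string"}}); err != nil {
			log.Fatal(err)
		}
	}
	log.Printf("upsert: %v for %d docs", time.Since(start), n)
}

Once the _id index (about 1.5 GB here, per totalIndexSize) no longer fits in the cache, each of those lookups increasingly hits disk, which would be consistent with the slowdown described in the question.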
"avgObjSize" : 37With such a volume of the document, the percentage of effective information decreases. For this task, you can try LevelDB or its prototypes, in theory it will be 10 times more economical and faster.
If you create a new collection and write to it, then everything works as it should.
Disable waiting for the result, or write in parallel (see the sketch below). On my laptop I get about 10k inserts per second into an empty collection.
Writes approximately 300 documents per second.
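A minimal mgo sketch of "disable waiting for the result" (the server address and database/collection names are assumptions): SetSafe(nil) switches the session to unacknowledged writes, so the driver does not wait for the server to confirm each one; the trade-off is that write errors go unnoticed.

package main

import (
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	// Fire-and-forget mode: the driver stops waiting for write acknowledgment.
	session.SetSafe(nil)

	c := session.DB("test").C("items")
	for i := 0; i < 10000; i++ {
		// With safe mode off, Insert only reports socket-level errors.
		if err := c.Insert(bson.M{"_id": bson.NewObjectId(), "field": "string"}); err != nil {
			log.Fatal(err)
		}
	}
}

For the "write in parallel" part, each goroutine would typically work on its own session obtained via session.Copy().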
There are no miracles:
- As an SSD fills up, write speed drops: with TRIM, the controller has to work out where to write so that it does not land on blocks that were written to but not yet erased.
- Mongo also updates the index on every insert; even if it allocates in batches/pages, it still has to track which pages are in use and which are free, and keep the structure optimized.
Briefly on optimization: here. Practical tips: here and here. Nothing new: make the index smaller, i.e. several collections instead of one (see the sketch after this answer).
You can take comfort in the fact that Mongo's insert speed is still much higher than an RDBMS's.
How to overcome the problem? Look at queues, and look at Aerospike.
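A hedged sketch of the "several collections instead of one" idea (the bucket count, names, and hash choice are all assumptions): route each document to one of N sub-collections by a hash of its key, so that every collection maintains a smaller _id index.

package main

import (
	"fmt"
	"hash/fnv"
	"log"

	"gopkg.in/mgo.v2"
	"gopkg.in/mgo.v2/bson"
)

const buckets = 16 // number of sub-collections; tune for the data volume

// bucketFor routes a key to one of the sub-collections, so each
// collection keeps a smaller _id index than one big collection would.
func bucketFor(key string) string {
	h := fnv.New32a()
	h.Write([]byte(key))
	return fmt.Sprintf("items_%02d", h.Sum32()%buckets)
}

func main() {
	session, err := mgo.Dial("localhost")
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	db := session.DB("test")

	key := "some-document-key"
	c := db.C(bucketFor(key))
	if _, err := c.UpsertId(key, bson.M{"$set": bson.M{"field": "string"}}); err != nil {
		log.Fatal(err)
	}
}

Reads have to use the same routing (look up the same bucket), which is the main cost of this layout.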