Perl
Pavel, 2015-11-26 10:48:32

Why did MongoDB suddenly start slowing down when inserting data?

Good afternoon!
There is a script that processes documents and inserts the results into MongoDB. At some point (I still haven't found the reason) inserts became terribly slow: up to 4-5 seconds (!) per operation. I took preventive measures :) - dropped all indexes and added 1 GB of memory to the server - same result. For the purity of the experiment I started writing to a new (empty) database: with one writer thread the insert speed was acceptable (under 100 ms); with two threads it dropped to 130-200 ms, and after a while back down to 4000 ms. This script has always run in a single thread, but I needed to speed up document processing (about 5 million documents arrive per week).
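A quick back-of-envelope check of these numbers (my arithmetic, using only the figures quoted above):

```python
# Back-of-envelope check of the throughput numbers from the question.

DOCS_PER_WEEK = 5_000_000
SECONDS_PER_WEEK = 7 * 24 * 3600

# Sustained rate needed just to keep up with incoming documents.
required_ops_per_sec = DOCS_PER_WEEK / SECONDS_PER_WEEK
print(f"required: {required_ops_per_sec:.1f} ops/sec")  # ~8.3

# At the observed worst case of ~4.5 s per operation, a single writer
# manages only ~0.22 ops/sec -- dozens of times too slow.
worst_case_rate = 1 / 4.5
print(f"worst case: {worst_case_rate:.2f} ops/sec")

# Even the "acceptable" ~100 ms per op is only 10 ops/sec -- barely enough.
ok_rate = 1 / 0.100
print(f"at 100 ms/op: {ok_rate:.0f} ops/sec")
```

So even the single-threaded "good" case has almost no headroom, and the 4-5 s case cannot keep up at all.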
mongod is configured like this:

net:
  bindIp: 127.0.0.1
  port: 27017
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4
    collectionConfig:
      blockCompressor: snappy
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
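One thing worth flagging in this config (my observation, not something stated in the thread): `cacheSizeGB: 4` on a VM with 4-5 GB of RAM leaves almost nothing for the OS page cache and the rest of the mongod process, which can push the machine into swapping and produce exactly this kind of multi-second stall. A more conservative sketch for a VM this size (the 1.5 GB figure is an assumption, not a recommendation from the thread):

```yaml
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1.5   # well under total RAM; exact value is a guess
```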

I'm coming to the conclusion that the hardware simply can't cope, but maybe I'm wrong.
Virtual server (VMware): 4 GB of RAM (raised to 5 GB - no effect),
Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u4
# lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             2
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 26
Stepping:              5
CPU MHz:               2000.071
BogoMIPS:              4000.14
Hypervisor vendor:     VMware
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              4096K
NUMA node0 CPU(s):     0-3

How can this problem be treated?
Thanks!


2 answer(s)
lega, 2015-11-26
@lega

- Show what query (insert) you are running
- What indexes exist
- What explain() shows for the query
- db.stats(), plus .stats() on your collection
- mongostat

If the insert has no conditions and the documents are small, then even on a $5 VPS with an SSD drive the rate reaches 10k ops/sec, i.e. 0.0001 s per operation on average. If your queries look like that and you still see these numbers, then mongod is not getting enough resources, especially I/O.
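lega's per-operation arithmetic checks out, and it puts the observed latency in perspective (simple arithmetic, my calculation):

```python
# lega's baseline: 10k ops/sec on a cheap SSD VPS.
fast_rate = 10_000
per_op = 1 / fast_rate          # 0.0001 s per operation, as stated

# Pavel's low-end observation: ~4 s per operation.
slowdown = 4.0 / per_op
print(f"{slowdown:,.0f}x slower than the quoted baseline")  # 40,000x
```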

Pavel, 2015-11-26
@PavelKuptsov

1. The query (there are several of them, but they are identical in logic):

$col_doc->update(
    { url => $url },
    {
        '$set' => {
            url      => $url,
            reg_date => $reg_date,
        }
    },
    { upsert => 1 }
);

And here is the update on another collection:

$col_statistic->update(
    { ts => $start_time },
    { '$inc' => { $assist => 1 } },
    { upsert => 1 }
);

2. Removed all indexes - only _id_ remained
3. explain() without indexes shows:
"winningPlan" : {
                        "stage" : "UPDATE",
                        "inputStage" : {
                                "stage" : "COLLSCAN",
                                "filter" : {
                                        "url" : {
                                                "$eq" : "document.local"
                                        }
                                },
                                "direction" : "forward"
                        }
                },
                "rejectedPlans" : [ ]

with an index on url:
"winningPlan" : {
                        "stage" : "UPDATE",
                        "inputStage" : {
                                "stage" : "FETCH",
                                "inputStage" : {
                                        "stage" : "IXSCAN",
                                        "keyPattern" : {
                                                "url" : 1
                                        },
                                        "indexName" : "url_1",
                                        "isMultiKey" : false,
                                        "direction" : "forward",
                                        "indexBounds" : {
                                                "url" : [
                                                        "[\"document.local\", \"document.local\"]"
                                                ]
                                        }
                                }
                        }
                },
                "rejectedPlans" : [ ]

4. On the collection:
{
        "ns" : "prepared.documents",
        "count" : 422836,
        "size" : 91540732,
        "avgObjSize" : 216,
        "storageSize" : 24944640,
        "capped" : false,
        "wiredTiger" : {
                "metadata" : {
                        "formatVersion" : 1
                },
                "creationString" : "allocation_size=4KB,app_metadata=(formatVersion=1),
-- truncated here --
        "nindexes" : 2,
        "totalIndexSize" : 10375168,
        "indexSizes" : {
                "_id_" : 4202496,
                "url_1" : 6172672
        },
        "ok" : 1
}

This is after I had already created the index on url.
On the database:
> db.stats()
{
        "db" : "documents",
        "collections" : 4,
        "objects" : 422843,
        "avgObjSize" : 216.49207625525312,
        "dataSize" : 91542159,
        "storageSize" : 25034752,
        "numExtents" : 0,
        "indexes" : 5,
        "indexSize" : 10444800,
        "ok" : 1
}

All of this data is from a half-empty database - and it also starts to slow down under intensive inserts.
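The stats are internally consistent, and they show the whole dataset (under 100 MB of data plus ~10 MB of indexes) fits comfortably inside the 4 GB WiredTiger cache; checking the arithmetic (numbers copied from the output above):

```python
# Cross-checking the collection stats posted above.

count = 422_836
size = 91_540_732          # uncompressed BSON size, bytes
storage_size = 24_944_640  # on-disk size with snappy, bytes

avg_obj = size / count
print(f"avgObjSize: {avg_obj:.2f} bytes")  # matches the reported ~216

ratio = size / storage_size
print(f"snappy compression: {ratio:.1f}x smaller on disk")  # ~3.7x
```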
5. mongostat output:
insert query update delete getmore command % dirty % used flushes  vsize    res qr|qw ar|aw netIn netOut conn     time
    *0    34    732     *0       0     1|0     0.7    4.0       0 408.0M 234.0M   0|0   1|0  195k    95k   17 14:21:43
    *0    *0    260     *0       0     1|0     0.7    4.0       0 408.0M 234.0M   0|0   1|0   88k    34k   17 14:21:44
    *0    *0    230     *0       0     1|0     0.7    4.0       0 408.0M 234.0M   0|0   1|1   79k    31k   17 14:21:45
    *0    *0    332     *0       0     2|0     0.7    4.0       0 408.0M 234.0M   0|0   1|0  113k    39k   16 14:21:46
    *0     1    277     *0       0     3|0     0.8    4.0       0 408.0M 234.0M   0|0   1|0   91k    37k   17 14:21:47
    *0     7    488     *0       0     1|0     0.8    4.0       0 409.0M 235.0M   0|0   1|1  132k    65k   17 14:21:48
    *0    32    693     *0       0     1|0     0.8    4.0       0 409.0M 235.0M   0|0   1|0  183k    92k   17 14:21:49
    *0    14    508     *0       0     1|0     0.8    4.0       0 409.0M 235.0M   0|0   1|0  141k    67k   17 14:21:50
    *0    *0    243     *0       0     1|0     0.8    4.0       0 409.0M 235.0M   0|0   1|0   84k    32k   17 14:21:51
    *0    *0    247     *0       0     2|0     0.8    4.0       0 409.0M 235.0M   0|0   1|1   82k    33k   17 14:21:52
insert query update delete getmore command % dirty % used flushes  vsize    res qr|qw ar|aw netIn netOut conn     time
    *0    *0    187     *0       0     1|0     0.3    4.0       1 409.0M 235.0M   0|0   1|1   63k    28k   17 14:21:53
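One way to read this sample (my interpretation): averaged over the window, the server applies a few hundred updates per second, i.e. a couple of milliseconds each server-side, so the multi-second latencies the client sees look more like stalls and queueing behind expensive operations than steady per-operation cost. Computed from the update column above:

```python
# Mean server-side update rate from the 'update' column of the sample above.

updates_per_sec = [732, 260, 230, 332, 277, 488, 693, 508, 243, 247, 187]

mean_rate = sum(updates_per_sec) / len(updates_per_sec)
print(f"mean: {mean_rate:.0f} updates/sec")  # ~382

per_op_ms = 1000 / mean_rate
print(f"~{per_op_ms:.1f} ms per update server-side")  # ~2.6 ms
```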
