SQL
Michael, 2016-09-29 15:33:29

What storage should be used for logs and statistics?

Given:
A game server that many users connect to.
For each user, we write various statistical data about server usage (logs, statistics) to the database.
What now:
A large number of inserts and updates hit the database, causing high load, slowdowns, and so on.
What are the solutions for collecting all this data without overloading the database? Specialized databases? Which ones?

5 answer(s)
Andrey Burov, 2016-09-29
@Sing303

elasticsearch

romy4, 2016-09-29
@romy4

Disable the indexes: they are the main cause of slowdowns on insert. For searching, use a slave (read replica) copy.
It depends on what you need to look up in the data, but Mongo or Redis might also do.
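As a rough illustration of the index advice, here is a minimal sketch (SQLite purely as a stand-in for the real server database; the table and index names are made up): drop the index before a bulk load and rebuild it once afterwards, so the index is built a single time instead of being updated on every insert.

```python
import sqlite3

# In-memory SQLite for demonstration; in production this would be the
# game server's stats database (connection details are assumptions).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (user_id INTEGER, metric TEXT, value REAL)")
conn.execute("CREATE INDEX idx_stats_user ON stats (user_id)")

rows = [(uid, "kills", float(uid % 7)) for uid in range(10_000)]

conn.execute("DROP INDEX idx_stats_user")   # no index maintenance during the load
with conn:                                  # one transaction for the whole batch
    conn.executemany("INSERT INTO stats VALUES (?, ?, ?)", rows)
conn.execute("CREATE INDEX idx_stats_user ON stats (user_id)")  # rebuild once

print(conn.execute("SELECT COUNT(*) FROM stats").fetchone()[0])  # 10000
```

The same pattern (drop or disable indexes, bulk-load inside one transaction, rebuild) applies to most relational databases, though the exact statements differ.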

igruschkafox, 2016-09-29
@igruschkafox

And now I have the same mess at work...
"Let's use NoSQL"... and off it went...
Fine, you write down 10 million rows per second;
how are you going to process them afterwards?
Spring style?
The way out is simple: write into small partitioned tables. "Small" in terms of time: for example, store data in them for no more than 2 weeks (there are setups where partitioning is done by day).
Then switch these tables into a storage database and keep them there for roughly 2-3 months.
When all processing is finished and the data is rarely in demand, move it to long-term storage: drop all the indexes, enable page compression, and keep it for about six months (preferably in another database).
Then back up this archive database and remove it from the server.
And as was rightly said above: the more indexes, the slower insert and update operations become.
I have also seen a scheme where such log databases live exclusively on a separate server and flow into analytical storage via log shipping, AlwaysOn, or a backup every two days.
And don't worry: you can swamp Mongo with inserts too, and updates are generally its weak spot (data is denormalized and copied into several collections at once, so to update one record it has to be found everywhere it lives). I hope I explained it clearly. And then go search, damn it, in JSON! :)
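The partition-and-rotate scheme above can be sketched roughly like this (SQLite as a stand-in for the real server; the weekly naming, the retention policy, and the `insert_log`/`drop_expired` helpers are all assumptions for illustration):

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")

def partition_name(ts: datetime) -> str:
    # One table per ISO week, e.g. logs_2016_w39 (naming is an assumption).
    year, week, _ = ts.isocalendar()
    return f"logs_{year}_w{week:02d}"

def insert_log(ts: datetime, user_id: int, message: str) -> None:
    # Hot partitions carry no indexes, so inserts stay cheap.
    table = partition_name(ts)
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} "
                 "(ts TEXT, user_id INTEGER, message TEXT)")
    conn.execute(f"INSERT INTO {table} VALUES (?, ?, ?)",
                 (ts.isoformat(), user_id, message))

def drop_expired(active_partitions: set) -> None:
    # Retention: in practice you would first copy old partitions into an
    # archive database; here they are simply dropped.
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE name LIKE 'logs_%'")]
    for t in tables:
        if t not in active_partitions:
            conn.execute(f"DROP TABLE {t}")

insert_log(datetime(2016, 9, 29, tzinfo=timezone.utc), 42, "login")
```

Real databases (SQL Server, PostgreSQL) offer native table partitioning with partition switching, which does the "move to storage" step as a metadata operation instead of copying rows.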

Roman Mirilaczvili, 2018-03-28
@2ord

ClickHouse is optimized for fast writes and for analytics.
There is also Druid.

bnytiki, 2016-11-20
@bnytiki

For statistics: InfluxDB. It is designed precisely for quickly saving a huge number of metrics. Note: for collecting metrics, not for their long-term storage;
transfer them to another DBMS for archival.
For logs: Elasticsearch. You can ship logs there in a delayed (batched) mode.
