Choosing a DBMS for large data?
The question is about choosing a DBMS, or optimizing the current one.
Right now everything runs on MySQL.
In 3 months one table (InnoDB) has grown to 7 GB, and selects take a very long time.
It stores every click on ads; the volume will only keep growing.
The current structure is: id, campaign, banner, user, date, utm_source, utm_campaign, etc.
When selecting by an arbitrary date range (a month, for example) and grouping by the campaign, banner, user, and date fields, the query runs for 30 seconds or more.
Index: (date, campaign, banner, user).
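For reference, the query presumably has roughly this shape (the table name `clicks` and the date literals are assumptions):

```sql
-- Hypothetical reconstruction of the slow query.
SELECT campaign, banner, `user`, `date`, COUNT(*) AS clicks
FROM clicks
WHERE `date` BETWEEN '2016-01-01' AND '2016-01-31'
GROUP BY campaign, banner, `user`, `date`;
```

The existing (date, campaign, banner, user) index covers every referenced column, so the range scan can stay inside the index; but because the range predicate on `date` comes first, MySQL still has to build a temporary table for the GROUP BY, which is the usual culprit for runtimes like this.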
I tried partitioning everything on MongoDB; the result is not much better, about 6-10 seconds, and that is without even transferring the utm tag data.
Data must be stored for at least 3 months, ideally half a year.
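One option that fits this retention pattern is monthly RANGE partitioning in MySQL, so expiring a month becomes an instant DROP PARTITION instead of a long DELETE. A minimal sketch, assuming the table is named `clicks`; note that MySQL requires the partitioning column in every unique key, so the primary key would need to become (id, date):

```sql
-- Sketch: monthly RANGE partitions on the click date (names assumed).
ALTER TABLE clicks
  PARTITION BY RANGE (TO_DAYS(`date`)) (
    PARTITION p2016_01 VALUES LESS THAN (TO_DAYS('2016-02-01')),
    PARTITION p2016_02 VALUES LESS THAN (TO_DAYS('2016-03-01')),
    PARTITION p2016_03 VALUES LESS THAN (TO_DAYS('2016-04-01')),
    PARTITION pmax     VALUES LESS THAN MAXVALUE
  );

-- Expiring a month of clicks is then a metadata operation:
ALTER TABLE clicks DROP PARTITION p2016_01;
```

Date-range queries also benefit, since the optimizer prunes partitions outside the requested interval.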
The rest of the data is summarized into a pivot table; that part is simple and raises no questions.
If anyone has relevant experience, please share. Links to related articles are welcome.
VPS server: Ubuntu 15.10, 4 cores, 16 GB RAM, 1000 GB SSD.
Did you set up indexes? Did you run EXPLAIN on the queries? Something seems off to me.
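Checking the plan is one line; a sketch using the assumed names from above:

```sql
-- Prepend EXPLAIN to the slow query and read the output:
-- `key` should name the composite index, `type` should be `range`,
-- and Extra should include `Using index`; `Using temporary` (with or
-- without `Using filesort`) means the GROUP BY runs outside the index.
EXPLAIN SELECT campaign, banner, `user`, `date`, COUNT(*)
FROM clicks
WHERE `date` BETWEEN '2016-01-01' AND '2016-01-31'
GROUP BY campaign, banner, `user`, `date`;
```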
For your task, MySQL plus tuning is enough. You can also store ready-made aggregations instead of recomputing them on every request.
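One way to keep ready-made aggregations is a daily rollup table that reports query instead of the raw click log. A minimal sketch, with all names and column types assumed (in particular that `date` is a DATE, not a DATETIME):

```sql
-- Assumed rollup table; reports hit this instead of the 7 GB log.
CREATE TABLE clicks_daily (
  `date`   DATE NOT NULL,
  campaign INT  NOT NULL,
  banner   INT  NOT NULL,
  `user`   INT  NOT NULL,
  clicks   INT  NOT NULL,
  PRIMARY KEY (`date`, campaign, banner, `user`)
);

-- Re-aggregate one day; the upsert makes it safe to re-run.
INSERT INTO clicks_daily (`date`, campaign, banner, `user`, clicks)
SELECT `date`, campaign, banner, `user`, COUNT(*)
FROM clicks
WHERE `date` = '2016-01-15'
GROUP BY `date`, campaign, banner, `user`
ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);
```

A month-long report then scans at most ~31 rows per (campaign, banner, user) combination instead of millions of raw clicks.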
Use what you know: indexes, computing aggregates in the background, and keeping frequently requested data in separate tables (a sketch of a background refresh follows below).
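A background refresh can live inside MySQL itself via the event scheduler. A sketch, assuming a rollup table like the `clicks_daily` above and that the scheduler is enabled (SET GLOBAL event_scheduler = ON):

```sql
-- Sketch: roll up yesterday's clicks shortly after midnight (names assumed).
CREATE EVENT rollup_yesterday
ON SCHEDULE EVERY 1 DAY STARTS '2016-01-01 00:10:00'
DO
  INSERT INTO clicks_daily (`date`, campaign, banner, `user`, clicks)
  SELECT `date`, campaign, banner, `user`, COUNT(*)
  FROM clicks
  WHERE `date` = CURDATE() - INTERVAL 1 DAY
  GROUP BY `date`, campaign, banner, `user`
  ON DUPLICATE KEY UPDATE clicks = VALUES(clicks);
```

A cron job running the same INSERT ... SELECT works just as well if the event scheduler is not an option.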
More generally, it is worth getting comfortable with Postgres; it suits almost every task, but mastering it takes time.