SQL Server
paz, 2015-07-14 07:57:32

MS SQL: a thousand tables or one big one?

Good afternoon, dear MS SQL experts.
There is a database into which data on 2-3 thousand entities is written at high frequency. Everything goes into a few tables. The database is over 1 TB in size and grows by roughly 50-60 GB per month.
As the database grew, insert speed became a problem. Also, since the database must be online 24x7, re-indexing or diagnosing it became painful (exclusive access was required).
There is an idea to give each entity its own data table. Would that speed up inserts (each table's index would be smaller), or would it instead force the system to keep more indexes in memory? It would also be convenient for transferring data and spreading it across several databases. All business logic works per entity; selections filter only on the entity key.
On how the database is used: writes happen constantly to almost every entity, about once every 1-5 seconds. Reads depend on user activity and are mostly infrequent, but a selection must be fast: 20-30 thousand rows should come back in a couple of seconds. Storage is on SSD.
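
A side note on the re-indexing pain mentioned above: exclusive access is not always required. A minimal sketch, assuming an Enterprise edition instance (which supports online rebuilds) and an illustrative table name dbo.Readings:

```sql
-- REORGANIZE is always online (any edition), but only defragments in place.
ALTER INDEX ALL ON dbo.Readings REORGANIZE;

-- REBUILD can run online on Enterprise edition, avoiding the exclusive
-- lock that a 24x7 workload cannot tolerate.
ALTER INDEX ALL ON dbo.Readings REBUILD WITH (ONLINE = ON);
```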


3 answers
nApoBo3, 2015-07-14

What exactly are the problems with insert speed?
Multiple tables should be faster for a number of reasons, but the number of joins per query can grow, which will hurt read performance. Plus, you may need additional checks for values that used to be guaranteed by unique keys, which would cancel out all the gains.
Either way, this requires reworking the architecture of the database and the application, and it is not a given that the result will be faster.
Perhaps it makes sense to split the tables not by entity but by some other criterion; without knowing the structure of the database it is hard to say.
Or maybe you just need to split the objects into even and odd keys and scatter them across two servers.
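
A minimal sketch of the even/odd split within one server, using a UNION ALL view over two constrained tables; all names are illustrative. The same pattern over linked servers is SQL Server's distributed partitioned view feature.

```sql
-- Illustrative tables: each half owns one parity of the entity key.
CREATE TABLE dbo.SensorLog_Even (
    EntityId INT NOT NULL CHECK (EntityId % 2 = 0),
    ReadAt   DATETIME2 NOT NULL,
    Value    FLOAT NOT NULL
);
CREATE TABLE dbo.SensorLog_Odd (
    EntityId INT NOT NULL CHECK (EntityId % 2 = 1),
    ReadAt   DATETIME2 NOT NULL,
    Value    FLOAT NOT NULL
);
GO
-- With the CHECK constraints in place, a query on a concrete EntityId
-- usually lets the optimizer skip the half that cannot contain it.
CREATE VIEW dbo.SensorLog AS
    SELECT EntityId, ReadAt, Value FROM dbo.SensorLog_Even
    UNION ALL
    SELECT EntityId, ReadAt, Value FROM dbo.SensorLog_Odd;
```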

lega, 2015-07-14

Separate tables should work better; you can also try partitioning and compression.
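
A minimal sketch of what partitioning plus compression could look like here, assuming monthly boundaries and an illustrative dbo.Readings table:

```sql
-- Monthly partitions; the boundary dates are examples.
CREATE PARTITION FUNCTION pfByMonth (DATETIME2)
    AS RANGE RIGHT FOR VALUES ('2015-06-01', '2015-07-01', '2015-08-01');

CREATE PARTITION SCHEME psByMonth
    AS PARTITION pfByMonth ALL TO ([PRIMARY]);

CREATE TABLE dbo.Readings (
    EntityId INT NOT NULL,
    ReadAt   DATETIME2 NOT NULL,
    Value    FLOAT NOT NULL
) ON psByMonth (ReadAt);

-- Compress the oldest partition (here partition 1); PAGE compression
-- trades CPU for I/O, which suits rarely-read archive data.
ALTER TABLE dbo.Readings REBUILD PARTITION = 1
    WITH (DATA_COMPRESSION = PAGE);
```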

get 20-30 thousand rows in a couple of seconds

If your data is selected in batches, for example by day, you can store each day's batch compressed (text often compresses 10-30x). Selection then gets faster (less I/O), and both the space used and the index become much smaller.
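
A sketch of the day-batch idea, assuming SQL Server 2016+ where the built-in COMPRESS/DECOMPRESS (GZIP) functions and FOR JSON exist; on older versions the batching and compression would have to happen client-side. The dbo.Readings source table and all other names are illustrative:

```sql
CREATE TABLE dbo.ReadingsByDay (
    EntityId INT  NOT NULL,
    Day      DATE NOT NULL,
    Payload  VARBINARY(MAX) NOT NULL,  -- gzipped batch of one day's rows
    PRIMARY KEY (EntityId, Day)        -- index holds one row per entity-day
);

-- Pack one entity-day into a single compressed row (JSON is just one
-- convenient serialization).
INSERT INTO dbo.ReadingsByDay (EntityId, Day, Payload)
SELECT 42, '2015-07-13',
       COMPRESS((SELECT ReadAt, Value
                 FROM dbo.Readings
                 WHERE EntityId = 42
                   AND ReadAt >= '2015-07-13' AND ReadAt < '2015-07-14'
                 FOR JSON PATH));

-- Reading back: one seek, one row; decompress on the server or the client.
SELECT CAST(DECOMPRESS(Payload) AS NVARCHAR(MAX))
FROM dbo.ReadingsByDay
WHERE EntityId = 42 AND Day = '2015-07-13';
```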

sim3x, 2015-07-14

Data from sensors is simply poured into the system. Users take reports for a time interval: the data is loaded as-is and the math is computed on the client. In short, writes must be fast and reads by a single key over a time range must be fast; no fancy selections are needed.

Wrong architecture: no need to write raw logs to the database; give each sensor its own table.
How to back it up? You don't; set up replication instead.
How to keep the most frequently used data faster than the archived part? Define the period after which data goes to the archive, split the archive into periods (a month or a week), and leave it as static files.
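
A minimal sketch of the "send old periods to the archive" step, assuming the hot table is a dbo.Readings heap already partitioned by month (as in the earlier sketch). Switching a partition out is a metadata-only operation, so the 24x7 load is not blocked:

```sql
-- Archive target: must match the source schema and sit on the same
-- filegroup as the partition being switched out.
CREATE TABLE dbo.Readings_2015_06 (
    EntityId INT NOT NULL,
    ReadAt   DATETIME2 NOT NULL,
    Value    FLOAT NOT NULL
);

-- Detach June 2015 (partition 1 here) from the hot table instantly.
ALTER TABLE dbo.Readings SWITCH PARTITION 1 TO dbo.Readings_2015_06;

-- From here the archive table can be compressed, backed up separately,
-- or exported to static files (e.g. with bcp) and then dropped.
```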
