How to improve DB performance when 1000 users are connected at the same time?
We have a SQL Server instance with a 500 GB database, running on a bare-metal server with 100 GB of RAM and two Xeon CPUs.
The database serves a web application. When 1000 or more users are actively working with the application, it becomes unresponsive, and the profiler points to the database. A few heavily used tables constantly have locks on them; each holds tens of millions of rows, and inserts and updates hit them continuously. Dirty reads (NOLOCK) are already configured on these tables to avoid unnecessary locks.
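To make the current setup concrete, this is roughly what such a read looks like; the table and column names here are made up for illustration:

```sql
-- Roughly how reads are configured today: the NOLOCK hint lets the SELECT
-- skip shared locks, at the cost of possibly seeing uncommitted (dirty) rows.
-- dbo.Orders and its columns are hypothetical names, not from the question.
SELECT OrderId, Status, UpdatedAt
FROM dbo.Orders WITH (NOLOCK)
WHERE UpdatedAt >= DATEADD(hour, -1, SYSDATETIME());
```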
How can work with the database be optimized to avoid these performance and locking problems?
In the general case, there is no way, because business logic can be tied to the locks themselves, as in 1C, for example.
In your particular case, you should switch to row versioning (versioned tables); it was made specifically to keep readers and writers out of each other's way:
https://technet.microsoft.com/en-us/library/ms1750...
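A minimal sketch of what switching to row versioning can look like in SQL Server; the database and table names are placeholders, not taken from the question:

```sql
-- Turn on row versioning so readers get the last committed version of a row
-- instead of waiting on writers' locks. Run once against the database.
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- After that, ordinary READ COMMITTED selects no longer block on writers,
-- and the NOLOCK hints (and the dirty reads they cause) can be dropped:
SELECT OrderId, Status
FROM dbo.Orders            -- hypothetical table
WHERE Status = 'pending';
```

Keep in mind that row versioning adds a small per-row overhead and keeps the version store in tempdb, so it is worth testing under the same load before rolling it out.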
Perhaps reducing the number of locks taken by the application's transactions will help. See transaction isolation levels at https://msdn.microsoft.com/en-us/library/ms709374%... .
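For example, a sketch of keeping a write transaction as short as possible so its locks are released quickly; the table, column and variable are hypothetical:

```sql
DECLARE @OrderId int = 42;  -- placeholder value

SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN TRANSACTION;
    -- Touch as few rows as possible and do no other work inside the
    -- transaction, especially no application round-trips while locks are held.
    UPDATE dbo.Orders
    SET Status = 'shipped'
    WHERE OrderId = @OrderId;
COMMIT TRANSACTION;  -- commit immediately so the exclusive lock is released
```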
In general, this sounds like a scenario for a serious refactoring of the application and the database: reducing dependencies and so on. You could also look towards CQRS, where separate databases for commands and queries are possible.
A 500 GB database is bad when it doesn't fit in RAM. Maybe it should be split into several databases or cleaned up?
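If "cleaned up" means moving cold history out of the hot tables, a rough sketch of the idea; the archive table, columns and cutoff are assumptions:

```sql
-- Move rows older than the cutoff into an archive table in small batches,
-- so each batch holds its locks only briefly.
DECLARE @Cutoff datetime2 = DATEADD(year, -2, SYSDATETIME());

WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.Orders
    OUTPUT deleted.OrderId, deleted.CreatedAt, deleted.Payload
        INTO dbo.OrdersArchive (OrderId, CreatedAt, Payload)
    WHERE CreatedAt < @Cutoff;

    IF @@ROWCOUNT = 0 BREAK;  -- stop once no cold rows remain
END;
```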
I'm sure 99% of the data can be stored in a text file without a DB.
I can't figure out how dirty reads and constant locks go together here?