C++ / C#
prog323, 2014-04-21 10:27:48

How to make 200 UPDATEs (C#/MSSQL) run even faster?

200 UPDATE statements against a table with several million records:

BEGIN TRANSACTION;
UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s='test1';
UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s='test2';
UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s='test3';
.....
UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s='test200';
COMMIT TRANSACTION

On an SSD, executing them one after another takes about a minute. Can this be made faster, or should I switch to a different database to get faster writes?


3 answer(s)
Sumor, 2014-04-22
@Sumor

To see where improvement is possible, first look at the execution plans and timings of the updates in SQL Server Profiler.
1. Every index has to be maintained on each update. Perhaps you can reduce the number of indexes; sometimes it pays to drop them and recreate them after the update batch (sketched at the end of this answer). The same applies to computed columns whose values are persisted in the table. Making the table key a clustered index can improve both lookup and update times.
2. Triggers. These can likewise be disabled before the batch runs and re-enabled afterwards.
3. Reduce the number of UPDATE statements by combining them by some criterion, and keep fewer updates per transaction.
4. If the schema allows it, a delete/insert combination will probably be faster. In that case you can use bulk insert, which writes data to the table with minimal transaction logging.
5. The recovery model: simple, full or bulk-logged. In your case a different model may give a gain when updating the data, though you may trade off disk space and backup/restore time (also sketched at the end of this answer).
6. Physical placement of the database files. Tables, indexes and the transaction log can be placed on three different disks, which can improve the timings.
7. Partitioned tables and indexes. If the updates touch only a certain part of a huge table, for example only the records of the last year, it may make sense to split it into partitions.
In any case, every optimization problem has to be investigated individually: exclude each factor in turn and measure the time without it. For example, temporarily drop all the indexes on the table and rerun the update batch.
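
A minimal T-SQL sketch of points 1 and 2, assuming a nonclustered index IX_T_s and a trigger TR_T on table T (both names are hypothetical; substitute your own, and keep the index that the WHERE s = ... lookups rely on):

ALTER INDEX IX_T_s ON T DISABLE;   -- hypothetical secondary index; a disabled index must be rebuilt later
DISABLE TRIGGER TR_T ON T;         -- hypothetical trigger

BEGIN TRANSACTION;
UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s = 'test1';
-- ... the remaining UPDATE statements ...
COMMIT TRANSACTION;

ALTER INDEX IX_T_s ON T REBUILD;   -- rebuilding re-enables the index
ENABLE TRIGGER TR_T ON T;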
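
For point 5, switching the recovery model is a single ALTER DATABASE statement; MyDb is a placeholder for your database name, and bulk-logged mainly helps minimally logged operations such as the bulk insert from point 4:

ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;
-- ... run the bulk insert / update batch ...
ALTER DATABASE MyDb SET RECOVERY FULL;
-- take a log (or full) backup afterwards so the backup chain stays usable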

Alexander, 2014-04-21
@SashaSkot

UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s IN ('test1', 'test2', 'test3')
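
If the 200 values do not follow a convenient pattern, the same set-based idea can be written as a join against a temporary table of keys; a sketch, assuming s is a varchar column:

CREATE TABLE #keys (s varchar(100) PRIMARY KEY);               -- assumed type/length for s
INSERT INTO #keys (s) VALUES ('test1'), ('test2'), ('test3');  -- ... and so on up to 'test200'
UPDATE T
SET f1 = 1, f2 = 1, f3 = 1, f4 = 1
FROM T
JOIN #keys AS k ON T.s = k.s;
DROP TABLE #keys;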

VenSaitto, 2014-04-21
@VenSaitto

UPDATE T SET f1 = 1, f2 = 1, f3 = 1, f4 = 1 WHERE s LIKE 'test[0-9]' OR s LIKE 'test[1-9][0-9]' OR s LIKE 'test[1][0-9][0-9]' OR s = 'test200'

or take the values of s from a subquery.
Also check the indexes: adding more indexes slows down inserts, updates and deletes of table rows, since the indexes themselves have to be maintained (a query to list them follows).
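
A quick way to list the indexes that exist on T (and are therefore maintained on every update), assuming the table is in the dbo schema:

SELECT i.name, i.type_desc, i.is_disabled
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID('dbo.T')
  AND i.name IS NOT NULL;   -- the NULL row is the heap, not an index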
