SQL
sendnikkei, 2018-08-31 09:50:29

How to reduce query processing time?

Good day, everyone!
Please tell me what the problem could be and where to dig.
1. There is a table with 400,000 rows
2. There is such a query:

SELECT t.[id], t.[is_c], t.[mobile_p], t.[external_u], t.[spots], t.[state_id], t.[state_date], t.[max_state]
FROM [c] AS t WITH (NOLOCK)
WHERE t.external_uids.exist('external_uids/external_uid/uid[.=(.......)]') = 1
ORDER BY [id] OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY

There is a requirement: the query must complete within 20 seconds. With a FETCH NEXT value below 100 (for example 99 or 98), the query takes 20 seconds. With a value of 100 or more, it takes from 1 minute 40 seconds.
There are indexes, and everything is fine with them. Other queries (for example, a plain SELECT *) complete in 10-15 seconds.
What could be the problem?

3 answers
Anton fon Faust, 2018-08-31
@sendnikkei

I have a nagging suspicion that the problem is somewhere here:
WHERE t.external_uids.exist('external_uids/external_uid/uid[.=(.......)]') = 1
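
A quick way to test that suspicion is a control run: the same query with the exist() filter removed and nothing else changed. If it comes back in seconds, the XPath predicate is the bottleneck.

-- Control query: identical to the original except that the exist()
-- predicate is dropped.
SELECT t.[id], t.[is_c], t.[mobile_p], t.[external_u],
       t.[spots], t.[state_id], t.[state_date], t.[max_state]
FROM [c] AS t WITH (NOLOCK)
ORDER BY [id] OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY;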

ApeCoder, 2018-08-31
@ApeCoder

t.external_uids.exist('external_uids/external_uid/uid[.=(.......)]') = 1 looks like XPath. Instead of a hash join or another set-based operation, there is probably an expensive per-row evaluation of that expression. Extract the uids into a separate table and use it.
Please provide a query plan.
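
A minimal sketch of the "separate table" idea, assuming the XML shape from the question and that [id] identifies a row in [c]; the side table dbo.c_external_uid, its column types, and the searched uid values are hypothetical placeholders:

-- One-time (or ETL/trigger-maintained) shred of the XML into a
-- relational side table that can be indexed.
CREATE TABLE dbo.c_external_uid (
    c_id INT NOT NULL,           -- assumed to match the type of c.[id]
    uid  NVARCHAR(64) NOT NULL   -- assumed uid type
);

INSERT INTO dbo.c_external_uid (c_id, uid)
SELECT t.[id],
       x.n.value('.', 'NVARCHAR(64)')
FROM [c] AS t
CROSS APPLY t.external_uids.nodes('external_uids/external_uid/uid') AS x(n);

CREATE INDEX IX_c_external_uid ON dbo.c_external_uid (uid, c_id);

-- The search then becomes an indexable semi-join instead of a per-row
-- XPath evaluation:
SELECT t.[id], t.[is_c], t.[mobile_p], t.[external_u],
       t.[spots], t.[state_id], t.[state_date], t.[max_state]
FROM [c] AS t
WHERE t.[id] IN (SELECT e.c_id
                 FROM dbo.c_external_uid AS e
                 WHERE e.uid IN (N'uid1', N'uid2'))  -- placeholder uids
ORDER BY t.[id]
OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY;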

res2001, 2018-08-31
@res2001

  • Look at the execution plan.
  • Is there an index on t.external_uids and on t.id?
  • Replace exist with in (see the sketch after this list).
  • 400,000 rows is not a volume that should give MS SQL any trouble. Maybe the SQL server doesn't have enough memory and/or the disks are slow?
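
A sketch of the index and rewrite suggestions above, assuming [c] has a clustered primary key on [id] (a primary XML index requires one) and that external_uids is an XML column; index names and uid values are hypothetical:

-- An XML column is indexed with a primary XML index plus a secondary
-- VALUE index, which targets value comparisons like the exist() filter.
CREATE PRIMARY XML INDEX PXML_c_external_uids
    ON [c] (external_uids);

CREATE XML INDEX SXML_c_external_uids_value
    ON [c] (external_uids)
    USING XML INDEX PXML_c_external_uids
    FOR VALUE;

-- "Replace exist with in": shred the uids with nodes() and filter them
-- as relational values; the IN acts as a semi-join, so rows with
-- several matching uids are not duplicated.
SELECT t.[id], t.[is_c], t.[mobile_p], t.[external_u],
       t.[spots], t.[state_id], t.[state_date], t.[max_state]
FROM [c] AS t
WHERE t.[id] IN (
    SELECT s.[id]
    FROM [c] AS s
    CROSS APPLY s.external_uids.nodes('external_uids/external_uid/uid') AS x(n)
    WHERE x.n.value('.', 'NVARCHAR(64)') IN (N'uid1', N'uid2')  -- placeholder uids
)
ORDER BY [id] OFFSET 0 ROWS FETCH NEXT 100 ROWS ONLY;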
