How do I optimize the data structure of tables under heavy load?
There are two tables, call them title and spec; they contain general information about an event and the event specification, respectively. Both are partitioned and wide (up to 40 fields each), and single-column indexes on some of the fields were created before my time.
We have now switched completely to a system that relies on these tables, so both loading into them and querying from them must be fast. New data is loaded twice a day, and some events are recalculated (updates and inserts) during business hours.
These tables are queried frequently for various reports and alerts.
I have now built something like an anti-fraud feature, and to populate the data mart (aggregates over these tables) I first created six indexes: one covering index on TITLE, and the others matched to the WHERE conditions. With those, the INSERT for one day's data ran in 6 seconds. But the next day it turned out that jobs that used to run against these tables in half an hour were now taking five hours. So I dropped those indexes and instead created one local bitmap index per table, simply on trunc(START_DATE). My daily load now takes 13 seconds, and the other operations are still slowed down, though less than before.
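Roughly, what I ended up with looks like this (a sketch: the index names are mine, and I am assuming spec also carries a START_DATE column; only the table names, TRUNC(START_DATE), and the LOCAL clause reflect what I actually did):

CREATE BITMAP INDEX title_start_day_bix
  ON title (TRUNC(START_DATE))
  LOCAL;  -- LOCAL: one index segment per table partition

CREATE BITMAP INDEX spec_start_day_bix
  ON spec (TRUNC(START_DATE))
  LOCAL;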
Tell me: how can I make the indexes on these tables not noticeably slow down loading data into them, while getting my extraction from them down to at most a minute per day of data?
Maybe it is worth revising all the packages and queries and dropping those five single-column indexes on fields like title.ID? In any case, I do not see that index showing up anywhere in the query plans, even though the tables are joined via spec.title_id.
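Beyond reading the plans, I can double-check whether an index is ever touched with something like this (a sketch: the index name is an example for one of the five single-column indexes, and DBA_INDEX_USAGE assumes Oracle 12.2 or later):

-- Oracle 12.2+ records index usage automatically:
SELECT name, total_access_count, last_used
  FROM dba_index_usage
 WHERE owner = USER
   AND name = 'TITLE_ID_IX';  -- example index name

-- On older releases: enable monitoring, run the workload, then check the flag:
ALTER INDEX title_id_ix MONITORING USAGE;
SELECT index_name, used
  FROM v$object_usage
 WHERE index_name = 'TITLE_ID_IX';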
Can you suggest other architectural options? In which direction should I move to optimize all of this?