PostgreSQL
ZzZero, 2016-01-26 21:04:59

What are the options for optimizing/improving PostgreSQL performance?

The application has over two hundred tables, 15 of which are in constant use as the application's main tables. Some tables are 150 columns wide.
All tables are in normal form, and most SELECT queries are covered by indexes (including composite ones).
We are facing a twofold drop in database performance.
Some tables receive about 10 thousand row updates per day (tables 150 columns wide).
There are no cheap or even moderate optimizations left on the application side: the number of queries and their complexity are already minimized, and a lot is cached.
How do you administer your databases? Does anyone have ideas that don't involve adding computing power?


1 answer
lega, 2016-01-26
@lega

I would log the execution time of all queries, build a top list of the slowest ones, and work through them one by one, starting from the top.
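A minimal sketch of how this logging could be set up in PostgreSQL, either via the slow-query log or the pg_stat_statements extension (the 500 ms threshold and the LIMIT are arbitrary choices, and the timing column is named `total_time` rather than `total_exec_time` on versions before 13):

```sql
-- Option 1: log every query slower than 500 ms (postgresql.conf):
--   log_min_duration_statement = 500

-- Option 2: aggregate statistics with pg_stat_statements
-- (requires shared_preload_libraries = 'pg_stat_statements' and a restart):
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top 10 queries by total execution time:
SELECT query,
       calls,
       total_exec_time,   -- total_time on PostgreSQL < 13
       mean_exec_time     -- mean_time on PostgreSQL < 13
FROM   pg_stat_statements
ORDER  BY total_exec_time DESC
LIMIT  10;
```

pg_stat_statements normalizes queries (replacing literals with placeholders), so it groups repeated calls of the same statement together, which is exactly what you want for building a "top slowest" list.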

All tables are in normal form, and most SELECT queries are covered by indexes (including composite ones).
A large number of joins reduces performance, and nested selects can kill it outright, since they often don't use indexes. Developers also frequently define the wrong indexes, which fails to deliver maximum performance: it is not enough to simply "put" indexes on the required fields.
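One way to check whether a query actually uses its indexes is EXPLAIN ANALYZE; the table and column names below are hypothetical, purely for illustration:

```sql
-- Show the real execution plan, timings, and buffer usage:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders WHERE customer_id = 42 AND status = 'open';

-- If the plan shows "Seq Scan on orders" instead of an index scan,
-- a composite index matching the WHERE clause may help; column order
-- matters, so put the most selective equality columns first:
CREATE INDEX orders_customer_status_idx ON orders (customer_id, status);
```

Re-running the EXPLAIN after creating the index shows whether the planner actually picked it up; if it didn't, the index probably doesn't match how the query filters or sorts.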
But that is all work for the developer; the admin mostly needs to make sure there is enough memory and that the load doesn't bottleneck on CPU/IO (and you can read articles on tuning).
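As a sketch of what such tuning might look like, here are illustrative postgresql.conf starting points for a dedicated server with 16 GB of RAM; the values are assumptions to adjust against your own workload, not prescriptions:

```ini
# postgresql.conf — illustrative starting points (assumes 16 GB RAM)
shared_buffers = 4GB            # commonly ~25% of RAM
effective_cache_size = 12GB     # estimate of what the OS page cache holds
work_mem = 32MB                 # per sort/hash node, per query — beware many connections
maintenance_work_mem = 512MB    # speeds up VACUUM and CREATE INDEX
```

Note that work_mem is allocated per sort or hash operation, so a query with several such nodes across many concurrent connections can multiply this figure; that is why it is kept far smaller than shared_buffers.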
