PHP
Pavel Zaitsev, 2012-08-24 15:39:06

What to use when caching MySQL queries in PHP

There is a need to cache executed MySQL queries in PHP, but MySQL's built-in query cache is not suitable for a number of reasons. One of the main ones is that the cache is flushed on any change to a table - this is critical, since the system being developed is multi-user. I have already thought about storing the results of heavy queries in files on the server and, on the next execution of a query (tied to a specific user of the system), checking whether such a query has already been run and, if so, taking the result from the file. But in that case I will have to solve the problem of tracking changes in the tables.
Can you advise an optimal way to cache SQL queries from PHP? Or maybe there is some other interesting solution?

Thanks in advance for the replies!
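
A minimal sketch of the file-cache idea described in the question, assuming a hypothetical cache_versions table whose counter is bumped by every write path; the connection details, paths and table names are illustrative, not part of the original question:

<?php
// Sketch: cache heavy query results in files, keyed by query + parameters + user,
// and invalidate via a per-table version counter that every write must bump.
// The cache_versions table, paths and credentials are illustrative assumptions.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');

function tableVersion(PDO $pdo, $table) {
    $stmt = $pdo->prepare('SELECT version FROM cache_versions WHERE table_name = ?');
    $stmt->execute([$table]);
    return (int) $stmt->fetchColumn();
}

function cachedQuery(PDO $pdo, $sql, array $params, $userId, array $tables, $dir = '/tmp/sqlcache') {
    $versions = array();
    foreach ($tables as $t) {
        $versions[] = tableVersion($pdo, $t);     // old files become unreachable once a version changes
    }
    $file = $dir . '/' . md5($sql . serialize($params) . $userId . implode('-', $versions)) . '.cache';

    if (is_file($file)) {
        return unserialize(file_get_contents($file));   // cache hit
    }

    $stmt = $pdo->prepare($sql);                  // cache miss: run the heavy query once
    $stmt->execute($params);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);

    if (!is_dir($dir)) { mkdir($dir, 0700, true); }
    file_put_contents($file, serialize($rows), LOCK_EX);
    return $rows;
}

// Every write to `contracts` must also run:
//   UPDATE cache_versions SET version = version + 1 WHERE table_name = 'contracts';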


10 answer(s)
skvot, 2012-08-24
@skvot

Read up on Redis. I use it myself for caching. Maybe it will suit your project better than memcache.
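
A minimal sketch of this approach with the phpredis extension; the connection details, table name and 5-minute TTL are illustrative assumptions:

<?php
// Sketch: serve a heavy contracts query from Redis, falling back to MySQL on a miss.
$pdo   = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$userId = 42;                                    // the question ties the cache to a user
$key    = 'contracts:user:' . $userId;

$cached = $redis->get($key);
if ($cached !== false) {
    $rows = unserialize($cached);                // cache hit
} else {
    $stmt = $pdo->prepare('SELECT * FROM contracts WHERE user_id = ?');
    $stmt->execute([$userId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);   // cache miss: hit MySQL
    $redis->setex($key, 300, serialize($rows));  // keep for 5 minutes
}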

Vampiro, 2012-08-25
@Vampiro

Your information about the main reason that prevents you from using the standard option is wrong, but that is not the main thing. Build aggregate tables. Look at your queries and add indexes. Fill the aggregates with data once contracts reach their stable statuses. If you have spare RAM, it is better to give it to a properly designed database than to feed it to a memcache that is useless in this case.
Contracts end, periods close, this data becomes static and is served well from aggregates. Caching pays off for frequent access to the same data. Do your users open the same contract over and over, or do they look at different ones one after another? Surely the latter, and you will fill RAM with contracts the user no longer needs. I repeat: it is better to spend that RAM on indexes.
But for general development, studying everything that has already been said above is extremely useful.
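
A minimal sketch of the aggregate-table idea, assuming hypothetical contracts and contract_totals tables; the column names, statuses and credentials are illustrative, not from the thread:

<?php
// Sketch: heavy GROUP BY runs rarely, day-to-day reads hit a small indexed aggregate.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS contract_totals (
        user_id INT NOT NULL,
        period  DATE NOT NULL,
        total   DECIMAL(12,2) NOT NULL,
        PRIMARY KEY (user_id, period)
    )
");

// Refill aggregates for contracts that have reached a stable (closed) status.
$pdo->exec("
    REPLACE INTO contract_totals (user_id, period, total)
    SELECT user_id, DATE(closed_at), SUM(amount)
    FROM contracts
    WHERE status = 'closed'
    GROUP BY user_id, DATE(closed_at)
");

// Regular reads are a cheap primary-key lookup instead of a heavy query.
$stmt = $pdo->prepare('SELECT period, total FROM contract_totals WHERE user_id = ?');
$stmt->execute([42]);
$totals = $stmt->fetchAll(PDO::FETCH_ASSOC);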

edogs, 2012-08-24
@edogs

> One of the main ones is that the cache is flushed on any change to a table - this is critical, since the system being developed is multi-user. I have already thought about storing the results of heavy queries in files on the server and, on the next execution of a query (tied to a specific user of the system), checking whether such a query has already been run and, if so, taking the result from the file. But in that case I will have to solve the problem of tracking changes in the tables.
That comes out a little illogical. On the one hand, you are bothered that the cache is reset on any table change; on the other hand, you say that changes in the table affect the cache and therefore need to be tracked.
One of the relatively easy options for a cache is a MEMORY table in MySQL that stores either the frequently selected data itself or the IDs needed to select it from the main tables. First, the advantage over a third-party solution is that you can cache without pushing extra data through PHP and the network - in most cases a plain INSERT INTO ... SELECT FROM is enough. Second, triggers on changes to the main table can be set up to update the caching one, which is more flexible than a dumb query cache.
And finally, if the database is relatively small, then InnoDB plus a large buffer pool / cache size means essentially everything will be served from memory all the time anyway.
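
A minimal sketch of the MEMORY-table variant described above, warmed with INSERT INTO ... SELECT and kept current by a trigger; the table and column names are illustrative assumptions:

<?php
// Sketch: a MEMORY cache table that mirrors "hot" rows of the main table.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');

$pdo->exec("
    CREATE TABLE IF NOT EXISTS contracts_hot (
        id     INT PRIMARY KEY,
        title  VARCHAR(255),
        status VARCHAR(32)
    ) ENGINE=MEMORY
");

// Warm the cache entirely inside MySQL - no data travels through PHP or the network.
$pdo->exec("
    REPLACE INTO contracts_hot (id, title, status)
    SELECT id, title, status FROM contracts WHERE status = 'active'
");

// Keep the cache table current when the main table changes.
$pdo->exec("
    CREATE TRIGGER contracts_hot_upd AFTER UPDATE ON contracts
    FOR EACH ROW
        REPLACE INTO contracts_hot (id, title, status)
        VALUES (NEW.id, NEW.title, NEW.status)
");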

Vyacheslav Plisko, 2012-08-24
@AmdY

For caching in PHP, if the system is not distributed, it is better to use accelerators like APC (apc_fetch, ...) or shared memory via shmop.
Redis and memcached are needed when several servers must share access to the cache, and that is not the best option, since they are effectively separate servers - very fast ones, but with the corresponding latency on every access.
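
A minimal sketch of the APC variant mentioned above (apc_fetch/apc_store; on current PHP the same pattern works with apcu_fetch/apcu_store); the table name, credentials and TTL are illustrative assumptions:

<?php
// Sketch: per-server shared-memory cache via APC, no extra network hop.
$pdo    = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');
$userId = 42;
$key    = 'contracts:user:' . $userId;

$rows = apc_fetch($key, $hit);
if (!$hit) {
    $stmt = $pdo->prepare('SELECT * FROM contracts WHERE user_id = ?');
    $stmt->execute([$userId]);
    $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
    apc_store($key, $rows, 300);    // visible to every request on this server for 5 minutes
}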

FanatPHP, 2012-08-26
@FanatPHP

You don't need any caching.
You need to optimize your queries.
“100 contracts and more” is a laughably small number. At that size, any select should run practically instantly, in hundredths of a second.
Even if you crudely select all one hundred contracts in the script and count them by hand.
Caching should only be applied after queries have been optimized.
Right now you are trying to bolt a turbo onto a car that is still on the handbrake.
In general, the task is, of course, described very vaguely.
If, for example, your problem is with search, you can bolt on Sphinx.
In any case, first deal with the causes, and only then look for a solution.

Ents, 2012-08-24
@Ents

I advise you to use memcache
You just need to clearly understand how to invalidate the cache
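
A minimal sketch of that invalidation point with the Memcached extension: the write path deletes exactly the keys it makes stale. The table, key layout and credentials are illustrative assumptions:

<?php
// Sketch: read-through cache plus explicit invalidation on write.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');
$mc  = new Memcached();
$mc->addServer('127.0.0.1', 11211);

function getContracts(Memcached $mc, PDO $pdo, $userId) {
    $key  = 'contracts:' . $userId;
    $rows = $mc->get($key);
    if ($rows === false) {                        // miss: rebuild from MySQL
        $stmt = $pdo->prepare('SELECT * FROM contracts WHERE user_id = ?');
        $stmt->execute([$userId]);
        $rows = $stmt->fetchAll(PDO::FETCH_ASSOC);
        $mc->set($key, $rows, 600);
    }
    return $rows;
}

function updateContractStatus(Memcached $mc, PDO $pdo, $contractId, $userId, $status) {
    $stmt = $pdo->prepare('UPDATE contracts SET status = ? WHERE id = ?');
    $stmt->execute([$status, $contractId]);
    $mc->delete('contracts:' . $userId);          // invalidate only what this write touched
}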

enchikiben, 2012-08-24
@EnChikiben

Why is Memcache bad?

Wott, 2012-08-24
@Wott

You need to think through the right caching scheme.
If your data does not change, why rebuild the server response from the same old data? Maybe it is better to cache at the level of the whole request?
Next, you need to decide how long the cache lives and how to invalidate it. There are simple schemes such as a short-lived external cache (e.g. nginx) that takes load off the engine and the database. There are more complex ones, where the engine itself maintains a file cache and deletes/recreates it when the content changes (the simplest variant is to dump request results into files named after the requested URL and add Apache rules that serve those files, if they exist, instead of starting the engine).
Using another database such as memcached to cache queries against the database, or even inside the engine, is an obvious architecture failure :) If caching is the only goal, it is better to give that memory to the I/O cache. Reasonable use starts with storing separate pieces of data, such as user sessions, which are mixed into the cached page with includes - that is, separating long-lived data from short-lived sessions.
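
A minimal sketch of the file-per-URL page cache described above; the paths, render_page() placeholder and rewrite rules are illustrative assumptions, and real code would need to sanitise the path:

<?php
// Sketch: dump the rendered response into a file named after the URL path,
// so the web server can serve it statically on the next request.
$path      = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$cacheFile = '/var/www/cache' . rtrim($path, '/') . '.html';

ob_start();
render_page();                                   // placeholder for the real engine
$html = ob_get_clean();

if (!is_dir(dirname($cacheFile))) { mkdir(dirname($cacheFile), 0755, true); }
file_put_contents($cacheFile, $html, LOCK_EX);
echo $html;

// Apache side (illustrative): serve the cached file when it exists, otherwise start the engine.
//   RewriteCond %{DOCUMENT_ROOT}/cache%{REQUEST_URI}.html -f
//   RewriteRule ^ /cache%{REQUEST_URI}.html [L]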

gro, 2012-08-24
@gro

How critical is it, exactly? How does the system being multi-user affect this?
Why cache at the level of whole queries? Can you cache specific data instead?
> But in that case I will have to solve the problem of tracking changes in the tables.
But that is exactly what MySQL solves for you - the very thing you call critical.

MealstroM, 2012-08-25
@MealstroM

If your cache is reset on any table update, you are using MyISAM and not InnoDB, right?
The cache in MySQL works roughly like this: when a query comes in, the database hashes it (the query text plus information about the rows it will touch); if nothing has changed and the hash matches, the result is served from the cache, otherwise the query is executed from scratch. InnoDB lets this check work at the row level as of the start of the transaction. Perhaps this will help. It is also worth considering the size of the cache and the overhead of maintaining it.
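
If you do go down the built-in query cache route, a quick way to see whether it is enabled and actually being hit is to read MySQL's standard query cache variables and counters; connection details are illustrative:

<?php
// Sketch: inspect the built-in query cache configuration and hit statistics.
$pdo = new PDO('mysql:host=localhost;dbname=app;charset=utf8', 'user', 'pass');

foreach ($pdo->query("SHOW VARIABLES LIKE 'query_cache%'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";   // query_cache_type, query_cache_size, ...
}
foreach ($pdo->query("SHOW STATUS LIKE 'Qcache%'") as $row) {
    echo $row['Variable_name'], ' = ', $row['Value'], "\n";   // Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, ...
}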
