PHP
Vladimir Merk, 2015-05-13 07:40:57

How to properly set up interaction between PHP on one server and MySQL on another?

Hello.
The task is to select large volumes of data, quickly, from a database on a remote server.
The queries are very heavy, with a lot of conditions.
If you connect to the remote database and select the data directly, the connections end up hanging for several minutes, eating up all the available connections, and throughput is very low.
To increase throughput I built a "data map": a cron job on the remote server selects, every few minutes, all the ids matching the global conditions and writes them to a file.
On the target server, a handful of ids is taken from this map and the remote database is queried with a simplified query: by the primary key id plus the local conditions of the specific consumer. Because of the primary-key filter and the local conditions there is often no matching data, so the query has to be repeated over several iterations.
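A rough sketch of that scheme, with hypothetical table/column names and file paths (in reality the map file would also have to be shipped from the DB server to the target server, and the conditions are of course much heavier):

<?php
// map_writer.php - cron on the DB server, runs every few minutes:
// select the ids matching the "global conditions", dump them to a file.
$db  = new mysqli('localhost', 'user', 'pass', 'mydb');
$res = $db->query("SELECT id FROM items WHERE status = 'active'");
$ids = [];
while ($row = $res->fetch_assoc()) {
    $ids[] = $row['id'];
}
file_put_contents('/var/data/id_map.txt', implode("\n", $ids));
$db->close();

<?php
// map_reader.php - target server: take a batch of ids from the map and
// query the remote DB by primary key plus the local conditions.
$ids = array_slice(file('/var/data/id_map.txt', FILE_IGNORE_NEW_LINES), 0, 100);
$db  = new mysqli('db.example.com', 'user', 'pass', 'mydb');
$in  = implode(',', array_map('intval', $ids));
$res = $db->query("SELECT * FROM items WHERE id IN ($in) AND region = 'eu'");
while ($row = $res->fetch_assoc()) {
    // process the row; if nothing matched, take the next batch of ids
}
$db->close();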
With this scheme, selecting the data takes a little longer, but it doesn't clog the connections as badly.
Despite these measures, and despite the primary key in the query, remote connections still often hang for several minutes in the Sleep status and, as in the first case, clog all the available connections (although it is still better than the first variant). This is even though the connection to the remote database is opened right before the query and closed immediately after it. It is not clear why the connections are not being closed. It feels as though remote queries are simply slower in themselves, because running the same thing on the server where the database lives shows no such problem.
Are there perhaps special MySQL settings for optimizing remote connections?
What is the best/correct way to organize this logic to increase speed and throughput?
DB Server: Percona 5.5
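For reference, a small diagnostic sketch (host and credentials are placeholders) that lists connections stuck in Sleep and shows the server-side timeout that controls how long idle connections are kept:

<?php
// List connections idling in "Sleep" for more than a minute.
$db  = new mysqli('db.example.com', 'user', 'pass');
$res = $db->query('SHOW PROCESSLIST');
while ($row = $res->fetch_assoc()) {
    if ($row['Command'] === 'Sleep' && $row['Time'] > 60) {
        printf("id=%s user=%s idle %ss\n", $row['Id'], $row['User'], $row['Time']);
    }
}
// wait_timeout decides when MySQL itself drops an idle connection;
// lowering it reaps stuck Sleep connections sooner.
$res = $db->query("SHOW VARIABLES LIKE 'wait_timeout'");
print_r($res->fetch_assoc());
$db->close();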


5 answers
Andrey Mokhov, 2015-05-13
@mokhovcom

Set up data replication and use the local copy.
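A minimal sketch of what the application side could look like once a local replica exists (hosts and credentials are placeholders; the replication itself is set up in MySQL with server-id/log-bin on the master and CHANGE MASTER TO / START SLAVE on the replica):

<?php
// Heavy SELECTs go to the local replica - no network round-trips.
// Writes, if there are any, still have to go to the remote master.
$read  = new mysqli('127.0.0.1',      'app', 'secret', 'mydb'); // local replica
$write = new mysqli('db.example.com', 'app', 'secret', 'mydb'); // remote master

$res = $read->query("SELECT id, payload FROM items WHERE status = 'active'");
while ($row = $res->fetch_assoc()) {
    // process at local-disk speed
}
$read->close();
$write->close();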

Dmitry Entelis, 2015-05-13
@DmitriyEntelis

An interesting problem.
In general, the query execution time does not depend on whether it is local or remote.
There can be a tiny loss of time transmitting the query itself (if it is really long) and a noticeable loss transmitting the response (again, if it is huge).
Based on what you write:
Are the queries really executing for several minutes? How much data is sent back in the response? What is the channel between the servers, and is it saturated? What are the memory/CPU numbers on the SQL server while a query runs?
A blunt idea is to raise max_connections, but I doubt it will help in your case; most likely that is not the problem.
You could also try a persistent connection, so there is no overhead from constantly connecting and disconnecting (a sketch follows below).
Perhaps it is also worth looking at optimizing the queries themselves, splitting them into parts, and so on.
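For reference, PHP's two standard ways to get a persistent connection (credentials are placeholders):

<?php
// mysqli: prefix the host with "p:" to reuse a pooled connection.
$db = new mysqli('p:db.example.com', 'user', 'pass', 'mydb');

// PDO: the same thing via an attribute.
$pdo = new PDO('mysql:host=db.example.com;dbname=mydb', 'user', 'pass', [
    PDO::ATTR_PERSISTENT => true,
]);

// Caveat: a pooled connection sits in "Sleep" between requests, so with
// many PHP workers this can look exactly like the hanging connections
// described in the question - max_connections must be sized for it.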

myfirepukan, 2015-05-13
@myfirepukan

What is the best/correct way to organize this logic to increase speed and throughput?

Put the two servers in the same data center, in the same rack.
P.S. I tried doing this across remote servers; the speed is prohibitively low.

theded, 2015-05-13
@theded

It looks like the problem is in the queries, not in the channel between the client and the server ((
But most likely the problem is that PHP is dying on the time or memory limit, never getting to process the query result and close the connection.
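If that is the cause, one mitigation is an unbuffered fetch, so PHP streams rows instead of buffering the whole result set in memory (a sketch with placeholder names):

<?php
// MYSQLI_USE_RESULT streams rows one at a time instead of buffering the
// entire result client-side, which is what usually trips memory_limit.
$db  = new mysqli('db.example.com', 'user', 'pass', 'mydb');
$res = $db->query('SELECT id, payload FROM items', MYSQLI_USE_RESULT);
while ($row = $res->fetch_assoc()) {
    // process one row at a time; memory use stays flat
}
$res->free();  // the result must be freed before the next query on this handle
$db->close();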

Puma Thailand, 2015-05-13
@opium

Well, increase the number of connections a hundred times and don’t worry, you invented some kind of far-fetched problem.
