PHP
wonderatom, 2015-01-11 22:14:04

How do I properly optimize PHP and MySQL when working with two databases?

I am writing a billing system for a small Internet provider. For certain reasons, processing subscribers requires data from two databases at once.
Here is what I did:

  • I query the first database, get 16,000 rows, and write them to an array:
    array(16000) {
      '0689182' =>
      array(19) {
        'id' =>
        string(5) "25491"
        'agrm_id' =>
        ...
        'vg_id' =>
        string(8) "1323"
        'rec' =>
        string(1) "0"
      }
    ...
    }

  • I go through the entire array and append each unique id to a string, so that later I can make an exact selection from the second database:
    foreach ($basesData as $user) {
        $strAgrmId .= $user['agrm_id'] . ',';
    }

  • I query the second database and get 16,000 rows of additional information:
    $sql = "SELECT
                number,
                balance,
                credit
            FROM
                agreements
            WHERE agrm_id IN (" . substr($strAgrmId, 0, -1) . ")";

    array(16000) {
      '0689182' =>
      array(19) {
        'balance' =>
        string(5) "25491"
        'credit' => 
        string(1) "0"
      }
    ...
    }

  • I merge the two arrays (a consolidated sketch of all four steps follows below):
    foreach ($basesData as $key => $user) {
        $users[$key] = array_merge($basesData[$key], $agrmData[$key]);
    }
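
For reference, the same four steps can be written as one pass per database if both result sets are keyed by agrm_id (rather than by the account number shown in the dumps). This is only a sketch: the PDO connections $db1/$db2 and the first query are assumptions; only the column names are taken from the question.

    // Sketch only: $db1/$db2 are assumed PDO connections and the first query is a
    // placeholder, since the real one is not shown above.
    $basesData = array();
    foreach ($db1->query("SELECT * FROM accounts") as $row) {           // step 1 (assumed query)
        $basesData[$row['agrm_id']] = $row;                             // key rows by agrm_id
    }

    $ids = implode(',', array_map('intval', array_keys($basesData)));   // step 2

    $agrmData = array();
    $sql = "SELECT agrm_id, number, balance, credit FROM agreements WHERE agrm_id IN ($ids)";
    foreach ($db2->query($sql) as $row) {                               // step 3
        $agrmData[$row['agrm_id']] = $row;
    }

    $users = array();
    foreach ($basesData as $id => $user) {                              // step 4: one O(1) lookup per user
        $users[$id] = array_merge($user, isset($agrmData[$id]) ? $agrmData[$id] : array());
    }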


Further along in the algorithm, I do a similar manipulation to select the information that determines the status, and again iterate over the entire array of users:
foreach ($users as $key => $user) {

    ..code..

    foreach ($arrayPool as $one) {
        ..code..
        $users[$key]['status'] = $one['status'];
    }
}

The nested foreach works with arrays of 20-30 elements, but as a result it takes a long time to process and collect all the necessary information into one array: on my PC the script does not finish in 15 minutes.
Hence the first question: how do I properly organize the optimization in a case like this?
And the second: I suspect this algorithm is complete nonsense, and that the same information could be assembled with a different algorithm:
  • I get the first array with 16,000 entries;
  • I iterate through the array and, for each user, make a separate query for the additional information, then another query to find and process the status (a sketch of this follows the list);
  • I append the information found to each individual user and continue with the algorithm.
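
A minimal sketch of this alternative, assuming a PDO connection $db2 to the second database, the column names from the dumps above, and a hypothetical statuses table (the real status query is not shown). Note that it issues two queries per user, i.e. roughly 32,000 round trips for 16,000 users:

    $stmtAgrm   = $db2->prepare("SELECT number, balance, credit FROM agreements WHERE agrm_id = ?");
    $stmtStatus = $db2->prepare("SELECT status FROM statuses WHERE agrm_id = ?"); // hypothetical table

    foreach ($basesData as $key => $user) {
        $stmtAgrm->execute(array($user['agrm_id']));
        $extra = $stmtAgrm->fetch(PDO::FETCH_ASSOC) ?: array();

        $stmtStatus->execute(array($user['agrm_id']));
        $status = $stmtStatus->fetchColumn();

        $users[$key] = array_merge($user, $extra, array('status' => $status));
    }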

Thanks in advance!


2 answer(s)
IceJOKER, 2015-01-11
@IceJOKER

You can also access two databases in one query, if the user has access to both: dbName.dbTable.dbField
stackoverflow.com/questions/4076562/query-with-joi...
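
Roughly like this (a sketch only: the database names billing_db/agrm_db, the tables, and the join column are made up for illustration; both databases must live on the same MySQL server and the connecting user needs SELECT rights on both schemas):

    $sql = "SELECT u.*, a.number, a.balance, a.credit
            FROM billing_db.users AS u
            JOIN agrm_db.agreements AS a ON a.agrm_id = u.agrm_id";
    $users = $db->query($sql)->fetchAll(PDO::FETCH_ASSOC); // one round trip instead of two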

Melkij, 2015-01-11
@melkij

I don't know your level of knowledge, so just in case I'll ask: is it really two different databases, or just two tables? The two are sometimes confused.
Profile to find where the time is actually being spent. The current Zend engine has rather odd memory management, which will be thoroughly optimized in PHP 7, but merging a mere 16 thousand records is a non-issue as long as there is enough memory.
The processing logic could also be turned on its head somehow, but for that one needs to understand what the end result should be (and I'm too lazy to read long essays).
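
For instance (a sketch only: it assumes $arrayPool does not depend on the current user and that its entries can be matched to users by agrm_id, which the elided code presumably does), the pool can be indexed once so the inner scan disappears:

    // Build the lookup table once instead of scanning $arrayPool for every user.
    $poolByAgrm = array();
    foreach ($arrayPool as $one) {
        $poolByAgrm[$one['agrm_id']] = $one; // 'agrm_id' as the match key is an assumption
    }

    foreach ($users as $key => $user) {
        if (isset($poolByAgrm[$user['agrm_id']])) {
            $users[$key]['status'] = $poolByAgrm[$user['agrm_id']]['status'];
        }
    }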
