MySQL
Nikita, 2021-01-18 14:19:25

Why does Node JS kill the process when querying MySQL?

I am running a query to fetch data from the database. If I request a small amount of data, everything works correctly. But the table has ~7 million rows, and I need to fetch all of them.

const mysql = require('mysql')

// create the connection
const connection = mysql.createConnection({/* connection credentials go here */})

// connect to the database
connection.connect()

// run the query
connection.query("SELECT partname FROM `main`", (error, results, fields) => {
    if (error) throw error
})

// close the database connection
connection.end()


The process ends with the message:
Killed
npm ERR! code ELIFECYCLE
npm ERR! errno 137
. . .


What could be causing this and how to fix it?


1 answer(s)
Alexander Talalaev, 2021-01-18
@neuotq

Oh, come on. Of course your Node process is being killed: it runs out of RAM.
Fetch the data in chunks; you can start with 1000 rows per chunk.
(approximate semi-pseudocode)

const mysql = require('mysql')

// create a connection pool,
// see https://github.com/mysqljs/mysql#pooling-connections for the configuration options!
const pool = mysql.createPool({/* connection credentials go here */});
// chunk size (rows per query)
const chunkSize = 1000;
// query for the total number of rows in the table
const poolTableSizeQuery = "SELECT count(*) as rowsCount FROM main";
// start by finding out how many rows there are
pool.query(poolTableSizeQuery, function (error, results, fields) {
    if (error) throw error;
    if (results && results[0]) {
        // total number of rows
        const rowsCount = results[0]['rowsCount'];
        // number of chunks of size chunkSize (rounded up)
        const chunksCount = Math.ceil(rowsCount / chunkSize);
        // our query now has a LIMIT
        const myQuery = "SELECT partname FROM `main` LIMIT ";
        for (let i = 0; i < chunksCount; i++) {
            // offset for the current chunk
            let chunkOffset = i * chunkSize;
            // the chunk query with offset and limit
            let chunkQuery = myQuery + chunkOffset + "," + chunkSize;
            // run your query, now with a limit and an offset
            pool.query(chunkQuery, function (error, results, fields) {
                if (error) throw error;
                // output or process the results here
            });
        }
    }
});

You can play with the size of the chunks.
The algorithm came out as follows:
1. Create a pool to cache the connection to the database
2. Decide on the chunk size (based on RAM limits)
3. Find out the size of the table and calculate the number of steps
4. Run a query for one chunk with an offset
5. Display the result to the user
6. If there are chunks remaining, go back to step 4
One downside: the table is not locked, so there is no guarantee of a consistent result if the data changes between queries. If that doesn't matter to you, don't worry about it. If it does matter, then you need to figure out how to lock the table against writes for the duration of the script.
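For illustration, here is a minimal sketch of that locking idea (my assumption, not part of the original answer), using the same `mysql` package. `LOCK TABLES main READ` blocks writes from other sessions and is held per-connection, so all chunked reads go through a single connection taken from the pool; the connection credentials and table name are placeholders.

// take one connection from the pool: LOCK TABLES is per-connection,
// so every chunked read must run on this same connection
const mysql = require('mysql')

const pool = mysql.createPool({/* connection credentials go here */});
const chunkSize = 1000;

pool.getConnection(function (error, connection) {
    if (error) throw error;
    // block writes to `main` while we read it in chunks
    connection.query("LOCK TABLES `main` READ", function (error) {
        if (error) throw error;
        connection.query("SELECT count(*) as rowsCount FROM main", function (error, results) {
            if (error) throw error;
            const rowsCount = results[0]['rowsCount'];
            const chunksCount = Math.ceil(rowsCount / chunkSize);
            for (let i = 0; i < chunksCount; i++) {
                const chunkQuery =
                    "SELECT partname FROM `main` LIMIT " + (i * chunkSize) + "," + chunkSize;
                connection.query(chunkQuery, function (error, results) {
                    if (error) throw error;
                    // output or process the chunk here
                });
            }
            // queries on a single connection run in order,
            // so this unlock executes only after all chunks have been read
            connection.query("UNLOCK TABLES", function () {
                connection.release();
            });
        });
    });
});

Since the `mysql` driver queues queries per connection and runs them sequentially, enqueueing `UNLOCK TABLES` after the loop guarantees the lock is released only once the last chunk has been fetched.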
