Danil Samodurov, 2021-02-04 20:27:51
Python

Why is reading a SQLite database while writing to it slow?

Hello. Please explain something to a trainee. I recently asked about the fastest way to write to and read from a file ( What is the best way to read and write to a file, as well as the file type itself, in terms of speed when using Python for big data processing? ). The answer, as I understood it, was "don't reinvent the wheel, use a DBMS". OK, I did just that: instead of creating multiple .csv files, I created one database with many tables.
I write to the database with the following script:

import sqlite3
import pandas as pd
from datetime import datetime

from get_difference import get_change
from get_public_data import ticker


def make_ticker_db():
    while True:
        with sqlite3.connect('ticker.db') as conn:
            # take two consecutive snapshots of the public ticker data
            request_1 = ticker()
            # time.sleep(1)
            try:
                request_2 = ticker()
                timestamp = datetime.now().strftime("%d:%m:%Y %H:%M:%S")
            except Exception as error:
                print(error)
                continue

            # only write when something actually changed between snapshots
            if request_2 != request_1:
                change_dict = get_change(request_1, request_2, timestamp)
                for name in change_dict:
                    pd.DataFrame(change_dict[name]).to_sql(
                        name, conn, if_exists='append', index=False)
                    conn.commit()


if __name__ == "__main__":
    make_ticker_db()

As you can see, data is added to the database constantly. But it turned out that if you try to read a table at the same time, reading is far from fast. I read, just as I wrote, through pandas:
with sqlite3.connect('ticker.db') as conn:
    print(pd.read_sql("SELECT * FROM table_name", conn))
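One knob worth trying on the reader side is the `timeout` argument of `sqlite3.connect`, which makes the reader wait for the writer's lock to clear instead of failing at once. A minimal sketch; the table setup at the top is only there to make the snippet self-contained, and `table_name` with its columns is made up:

```python
import sqlite3
import pandas as pd

# Setup only so the snippet runs standalone: a stand-in for the real
# ticker.db that make_ticker_db() produces (hypothetical schema).
with sqlite3.connect('ticker.db') as conn:
    conn.execute("CREATE TABLE IF NOT EXISTS table_name (ts TEXT, price REAL)")
    conn.execute("INSERT INTO table_name VALUES ('04:02:2021 20:27:51', 1.0)")

# timeout=30 makes sqlite3 retry for up to 30 seconds while a writer
# holds the lock, instead of raising "database is locked" immediately.
with sqlite3.connect('ticker.db', timeout=30) as conn:
    df = pd.read_sql("SELECT * FROM table_name", conn)
```

The default timeout is only 5 seconds, which a busy writer loop can easily exceed.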

DBeaver refused to open the database file at all while something was being written to it.
Question: what am I doing wrong, and how should it be done?


3 answers
Sergey Gornostaev, 2021-02-04
@samodurOFF

Because SQLite, as its name implies, is a toy DBMS, not designed for concurrent access.

rPman, 2021-02-04
@rPman

Partition your data and structures. It may look strange at first, but it can help: instead of one sqlite file, make several, thinking about how to split the data so that writes and reads rarely touch the same file.
Sometimes it's hard to give up the convenience of simplicity and of not needing a daemon.
And yes, you don't have to use attach database, and yes, you will have to watch out for locks and for transactions that are no longer atomic, and so on... You just need to understand where you can neglect this, and where to stop and choose a mature tool like mysql/postgres/oracle instead.
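To make the partitioning idea concrete, here is a minimal sketch. The sharding rule, file names, and table schema are all invented for illustration; any stable rule that keeps frequent readers and writers on different files works:

```python
import sqlite3

# Hypothetical partitioning scheme: one SQLite file per ticker group,
# so a process writing group "b" never locks the file that a reader
# of group "e" is querying.
def db_for(ticker_name: str) -> str:
    # shard by first letter; any stable rule works
    return f"ticker_{ticker_name[0].lower()}.db"

def append_row(ticker_name: str, timestamp: str, price: float) -> None:
    # each write goes only to its shard's file
    with sqlite3.connect(db_for(ticker_name)) as conn:
        conn.execute(
            f'CREATE TABLE IF NOT EXISTS "{ticker_name}" (ts TEXT, price REAL)'
        )
        conn.execute(f'INSERT INTO "{ticker_name}" VALUES (?, ?)',
                     (timestamp, price))

append_row("BTCUSD", "04:02:2021 20:27:51", 37000.0)
append_row("ETHUSD", "04:02:2021 20:27:51", 1600.0)
```

A reader interested only in `ETHUSD` now opens `ticker_e.db` and never contends with the writer updating `ticker_b.db`.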

Eugene TI, 2021-02-05
@justhabrauser

SQLite used to lock at the level of the whole database file, at least when writing (possibly when reading too; I'm too lazy to check the docs).
For how it works now and how to manage it, read the documentation.
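The documentation trail the answer points at leads, among other things, to SQLite's WAL (write-ahead logging) journal mode, which lets readers proceed while a writer is active. A minimal sketch of switching it on; `journal_mode` is a real pragma, and the setting is persisted in the database file, so running it once is enough:

```python
import sqlite3

# Switch the journal to WAL: readers no longer block on the writer,
# because they read from the main file while new writes go to the log.
# The pragma returns the mode actually in effect.
with sqlite3.connect('ticker.db') as conn:
    mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]
    print(mode)
```

After this, every connection to the same file (including from other processes like DBeaver) sees WAL mode, though writers still serialize against each other.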
