How to organize data transfer from a network interface to a C++ application as quickly as possible?
Right now the data is written via ODBC into the database, and the application reads it from there. This is surely not the fastest way to deliver data from the network interface to the application.
How acceptable is the following delivery scheme (sketched below): the data goes from the network interface straight into the application, and only later, during idle time, the application writes it to the database? The application is the only user of both the data and the database.
How many milliseconds could be gained by this approach (an order-of-magnitude estimate is enough)?
How widespread is this approach, and does it have a name?
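A minimal sketch of the scheme described above, assuming a UDP feed on port 9000 and a hypothetical flush_to_database() stub in place of the real ODBC code: packets go straight into the application's in-memory buffer, and a background thread persists them only when there is idle time.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <atomic>
#include <chrono>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::mutex g_mutex;
std::vector<std::string> g_pending;   // records received but not yet persisted
std::atomic<bool> g_running{true};

// Hypothetical placeholder: in the real application this would be the
// existing ODBC insert, executed in bulk.
void flush_to_database(std::vector<std::string> batch) {
    // INSERT INTO ... (batch)
}

void receiver() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = INADDR_ANY;
    addr.sin_port = htons(9000);      // assumed port
    bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));

    char buf[2048];
    while (g_running) {
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0) continue;
        std::string record(buf, static_cast<size_t>(n));
        // The application works with the record immediately, in memory,
        // and only queues it for later persistence.
        std::lock_guard<std::mutex> lock(g_mutex);
        g_pending.push_back(std::move(record));
    }
    close(sock);
}

void idle_writer() {
    while (g_running) {
        std::this_thread::sleep_for(std::chrono::seconds(1)); // crude "idle time" heuristic
        std::vector<std::string> batch;
        {
            std::lock_guard<std::mutex> lock(g_mutex);
            batch.swap(g_pending);
        }
        if (!batch.empty()) flush_to_database(std::move(batch));
    }
}

int main() {
    std::thread rx(receiver);
    std::thread wr(idle_writer);
    rx.join();
    wr.join();
}
```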
I would claim that this is the fastest way, but that is only a hypothesis; only a profiler can give a firm answer. And yes: do not optimize prematurely (c)
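To get the millisecond figures asked about above, the simplest start is to time both paths directly. A minimal timing harness, assuming hypothetical read_via_odbc() and read_direct() stand-ins for the two delivery paths:

```cpp
#include <chrono>
#include <cstdio>

void read_via_odbc() { /* current path: poll the database */ }
void read_direct()   { /* candidate path: read from the socket or IPC channel */ }

// Average wall-clock time per call, in milliseconds.
template <typename F>
double measure_ms(F&& f, int iterations = 1000) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) f();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count() / iterations;
}

int main() {
    std::printf("ODBC path:   %.3f ms/record\n", measure_ms(read_via_odbc));
    std::printf("Direct path: %.3f ms/record\n", measure_ms(read_direct));
}
```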
And who writes the data into the database via ODBC? If it is your own service and you can change it, there are faster options for interaction: for example, shared memory or pipes for interprocess communication (see the sketch below).
Shared memory, IMHO, will be the fastest way to interact.
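A minimal POSIX shared-memory sketch of the producer side, assuming the segment name "/netdata" and a trivial one-slot layout; the consumer (the C++ application) would shm_open() the same name and poll the sequence counter. This also assumes std::atomic<uint64_t> is lock-free on the target platform; link with -lrt on older glibc.

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

#include <atomic>
#include <cstdint>
#include <cstring>

struct Slot {
    std::atomic<uint64_t> seq;   // incremented after each write
    char payload[1024];
};

int main() {
    int fd = shm_open("/netdata", O_CREAT | O_RDWR, 0600);  // assumed segment name
    ftruncate(fd, sizeof(Slot));
    auto* slot = static_cast<Slot*>(
        mmap(nullptr, sizeof(Slot), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    const char msg[] = "record received from the network interface";
    std::memcpy(slot->payload, msg, sizeof(msg));
    slot->seq.fetch_add(1, std::memory_order_release);  // publish the record to the reader

    munmap(slot, sizeof(Slot));
    close(fd);
}
```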
Redis is exactly such a thing: an in-memory database that can flush to disk during idle time. And it seems to support shared memory, too.
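If Redis is chosen, a minimal hiredis sketch (assuming a local Redis instance on the default port 6379 and a hypothetical list key "netdata"); durability to disk is then handled by Redis's own RDB/AOF persistence settings rather than by application code:

```cpp
#include <hiredis/hiredis.h>
#include <cstdio>

int main() {
    redisContext* c = redisConnect("127.0.0.1", 6379);
    if (c == nullptr || c->err) {
        std::fprintf(stderr, "connection error\n");
        return 1;
    }
    // Push the freshly received record; writing to disk happens in the
    // background according to the server's save/appendonly configuration.
    redisReply* reply = static_cast<redisReply*>(
        redisCommand(c, "LPUSH netdata %s", "record from the network interface"));
    freeReplyObject(reply);
    redisFree(c);
}
```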
The question is posed incorrectly.
To optimize the ODBC path, you need to know for certain what exactly the bottleneck is. I would start by analyzing query execution and data extraction locally on the server; a DBA will help here. And if your task is ETL-related, read up on ETL techniques. Maybe exporting to a CSV file and loading it into the C++ application would be faster; maybe database replication or migration would help.
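A minimal sketch of the CSV alternative mentioned above, assuming the server has already exported the data to a hypothetical file "export.csv" with comma-separated, unquoted fields:

```cpp
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
    std::ifstream in("export.csv");           // assumed export file name
    std::string line;
    std::vector<std::vector<std::string>> rows;

    while (std::getline(in, line)) {
        std::vector<std::string> fields;
        std::stringstream ss(line);
        std::string field;
        while (std::getline(ss, field, ','))  // naive split: no quoted fields
            fields.push_back(field);
        rows.push_back(std::move(fields));
    }
    std::cout << "loaded " << rows.size() << " rows\n";
}
```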