How to process and store a huge number of tracking requests?
There is a JS script, similar to Yandex.Metrica and Google Analytics, that clients install on their sites. The script sends data about each visitor to my server.
From an architectural point of view, how should I organize the processing and storage of a potentially huge stream of data, for example using Azure tools?
I am mainly interested in two parts: the API request handler itself, which will receive the data directly from the JS tracker, and the storage that the API will write this data to.
The data will be read by a dashboard-style web application, so fast retrieval and sorting of the data matters.
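For the ingestion endpoint, one common pattern is to keep the handler as thin as possible: validate and normalize each incoming hit, then hand it off to a queue or event stream rather than writing to storage synchronously. A minimal sketch of the normalization step is below; the payload fields (`siteId`, `url`, `ts`, `visitorId`) are assumptions about what the tracker might send, not a fixed schema.

```typescript
// Hypothetical shape of a hit sent by the JS tracker beacon.
// Field names are assumptions for illustration only.
interface RawHit {
  siteId?: string;
  url?: string;
  ts?: number;       // client timestamp, ms since epoch
  visitorId?: string;
}

interface TrackingEvent {
  siteId: string;
  url: string;
  ts: number;
  visitorId: string;
}

// Validate and normalize an incoming hit before queueing it for storage.
// Returns null for malformed hits so the API can drop them cheaply.
function normalizeHit(raw: RawHit, serverNowMs: number): TrackingEvent | null {
  if (!raw.siteId || !raw.url || !raw.visitorId) return null;
  // Client clocks are unreliable: accept the client timestamp only if it is
  // within 24 hours of server time, otherwise fall back to the server clock.
  const ts =
    typeof raw.ts === "number" && Math.abs(raw.ts - serverNowMs) < 86_400_000
      ? raw.ts
      : serverNowMs;
  return { siteId: raw.siteId, url: raw.url, ts, visitorId: raw.visitorId };
}
```

Keeping this step pure (no I/O) makes it trivial to test and lets the handler return `204 No Content` immediately while a background consumer drains the queue.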
Ideas that come to mind:
- A load balancer that routes incoming requests to API instances in the data center closest to the client's region
- Distributed storage (I am looking at CosmosDB)
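If CosmosDB is the storage candidate, the partition key design will largely decide both write throughput and dashboard query speed: writes should spread across logical partitions, while a dashboard query should ideally hit only one. A synthetic key such as "site plus day" is one common compromise; this is a sketch under that assumption, and the right granularity depends on real per-site traffic.

```typescript
// Build a synthetic CosmosDB partition key: one logical partition per site
// per UTC day. This spreads a high-traffic site's writes across days while
// keeping a dashboard query like "site X, date range Y" scoped to a small,
// known set of partitions. Granularity (day vs hour) is an assumption here.
function partitionKey(siteId: string, tsMs: number): string {
  const day = new Date(tsMs).toISOString().slice(0, 10); // "YYYY-MM-DD"
  return `${siteId}_${day}`;
}
```

With the official `@azure/cosmos` SDK, this value would go into the document and be declared as the container's partition key path, so each day's events for a site land in the same logical partition.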