What is the least expensive way to process 5-6M very small requests per day?
I have a web service that implements a REST API. The API receives very, very short messages of the form:
{
  "client_key": "my_secret_key",
  "ip": "10.128.225.66",
  "user": "testuser",
  ...
}
Etc. In total, each message has 10±2 very short fields. Storage is a very fast sharded MongoDB setup (it has grown a lot over time). But I have hit the limit of the web server (Apache) in responding to this many tiny atomic requests. The requests are not evenly distributed; they come mostly in a series of bursts. How can I get out of this without upgrading hardware or installing a load balancer? Should I try something other than Apache, or maybe some module for it? Apache is just the stock package on Ubuntu Server 12.04.1 LTS.
You could install Tomcat and implement the API in Java: a simple servlet on my laptop handles ~12k requests per second. In that case the bottleneck will most likely be MongoDB's write speed.
Maybe tune Apache's worker settings? Under your load it may be restarting workers unnecessarily often, and as I understand it, yours is configured out of the box.
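A hedged sketch of what such tuning might look like for the prefork MPM on Apache 2.2 (the numbers are purely illustrative, not recommendations; size them to your RAM and measured per-worker memory use):

```apache
# /etc/apache2/apache2.conf, mpm_prefork section -- illustrative values only
<IfModule mpm_prefork_module>
    StartServers          20     # pre-fork enough workers before the bursts hit
    MinSpareServers       20     # don't kill idle workers too eagerly between peaks
    MaxSpareServers       50
    MaxClients           150     # bounded by RAM: total / memory-per-worker
    MaxRequestsPerChild 10000    # recycle workers occasionally to guard against leaks
</IfModule>
```

The key idea for bursty traffic is keeping enough spare workers alive so Apache is not forking new processes at the start of every peak.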
And do you at least have APC or an equivalent opcode cache?
YAWS/nginx + PHP over FastCGI; as a bonus you get the ability to keep a persistent connection to MongoDB.
If some requests have cacheable responses, mount a ramfs, put the cache directory inside it, and point nginx at that directory to cache responses to those requests.
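A minimal sketch of that setup, assuming PHP behind FastCGI; the paths, zone name, and TTL are illustrative, and `/var/cache/nginx` is assumed to be mounted as tmpfs/ramfs beforehand:

```nginx
# nginx.conf, http {} context -- illustrative sketch
# beforehand: mount -t tmpfs -o size=256m tmpfs /var/cache/nginx
fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=api:16m max_size=200m;

server {
    listen 80;
    location /api/ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_cache api;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 1s;   # even a 1-second TTL absorbs a burst of identical requests
    }
}
```

Note that by default nginx only caches GET/HEAD responses, so this helps only if the cacheable requests are reads.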
Replace Apache with nginx.
Do you just need to save the data from these requests in MongoDB, or does it need some other processing?
Dump Apache and install nginx. If the requests go through PHP, add php-fpm and you're set.
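For bursty small requests the php-fpm pool settings matter as much as the web server itself. A hedged sketch of a pool config (values are illustrative; the socket path matches the Ubuntu php5-fpm default, but check your layout):

```ini
; /etc/php5/fpm/pool.d/www.conf -- illustrative values only
listen = /var/run/php5-fpm.sock
pm = static            ; a fixed pool avoids fork churn when bursts arrive
pm.max_children = 64   ; bounded by RAM: measure memory per PHP worker first
pm.max_requests = 10000 ; recycle workers occasionally to guard against leaks
```

With a static pool, persistent MongoDB connections opened by the driver also survive across requests, which saves a connection handshake per tiny request.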
Are many of the requests identical? If so, nginx with fastcgi_cache (though I'm not sure it's necessary here).
Don't need the extra layer at all? The most radical option:
OpenResty (nginx seasoned with Lua and whatnot) + a Lua MongoDB driver such as mongol.
That gives you the scheme nginx -> mongo with no overhead in between: everything is fast, asynchronous, great.
Mongo can't keep up? Spin up another instance, add it to the nginx upstream, and rejoice!
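The OpenResty scheme above might look roughly like this; the `resty.mongol` module name and its method calls are assumptions from memory, not tested code, so check them against the library's documentation:

```nginx
# nginx.conf for OpenResty -- a sketch; the lua-resty-mongol API is assumed
location /track {
    content_by_lua_block {
        local mongol = require "resty.mongol"   -- assumed module name
        local conn = mongol:new()
        conn:set_timeout(1000)                  -- ms
        assert(conn:connect("127.0.0.1", 27017))

        -- store the raw request body; no PHP/Apache layer in between
        ngx.req.read_body()
        local doc = { raw = ngx.req.get_body_data() }

        local col = conn:new_db_handle("api"):get_col("events")
        col:insert({ doc })

        conn:set_keepalive(10000, 100)          -- pool connections across requests
        ngx.say("ok")
    }
}
```

The point of the sketch is the shape of the pipeline: the request is handled entirely inside nginx's event loop, and the connection keepalive pool replaces per-request MongoDB handshakes.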
You can wire things up in any number of ways; what matters is understanding how it will work and where the bottleneck is.