What is the best way to trigger an alert when the event rate gets too high?
Given: nginx logs are collected via syslog and shipped over UDP to another server running a Logstash + Elasticsearch stack; each site (there are about 500 of them) gets its own index with its own nginx log format.
Task: when a specified threshold of events per second is exceeded for a particular site (index), perform some action (say, an HTTP GET).
For context: I'm trying to build notifications about DDoS attacks on client sites.
Options I've considered:
1) Use the native Watcher in Elasticsearch, but, as I understand it, that means running one query per index to count the events over the last ~10 seconds. It seems to me this would create a lot of load and add a delay before the alert fires (a rough sketch of what that polling amounts to is just below this list).
2) Write my own service that receives events directly from Logstash and uses the leaky bucket algorithm to detect when the limit is exceeded and fire my action (sketched further below).
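Roughly, here is what the per-index polling of option 1 boils down to, as a minimal sketch; the index pattern, threshold, alert URL and client/version details are assumptions, not part of my actual setup:

    import time
    import requests
    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])         # assumed ES address
    THRESHOLD = 10000                                      # events per 10-second window
    ALERT_URL = "https://alerts.example.com/ddos"          # hypothetical endpoint

    def site_indices():
        # One index per site; the "site-*" pattern is an assumption.
        return es.indices.get_alias(index="site-*").keys()

    while True:
        for index in site_indices():
            resp = es.count(
                index=index,
                body={"query": {"range": {"@timestamp": {"gte": "now-10s"}}}},
            )
            if resp["count"] > THRESHOLD:
                requests.get(ALERT_URL, params={"index": index, "count": resp["count"]})
        time.sleep(10)  # ~500 count queries per cycle -- the load I'm worried about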
Are there simpler and more efficient ways to do this? And what pitfalls might these two approaches have? Right now the second one looks the most reasonable to me.
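For reference, the leaky-bucket service of option 2 could look roughly like this; it assumes Logstash forwards each event as a JSON datagram over UDP with an "index" (site) field, and the port, rates and alert endpoint are made-up placeholders:

    import json
    import socket
    import time
    import requests

    RATE = 1000.0                                  # allowed events per second (leak rate)
    BURST = 5000.0                                 # bucket capacity
    ALERT_URL = "https://alerts.example.com/ddos"  # hypothetical endpoint

    buckets = {}   # index -> (bucket level, timestamp of last event)
    alerted = {}   # index -> time of last alert, to avoid firing on every packet

    def bucket_overflows(index, now):
        level, last = buckets.get(index, (0.0, now))
        level = max(0.0, level - (now - last) * RATE)  # leak since the last event
        level += 1.0                                   # add the current event
        buckets[index] = (level, now)
        return level > BURST                           # overflow == rate exceeded

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5140))                       # arbitrary listening port

    while True:
        data, _ = sock.recvfrom(65535)
        event = json.loads(data)
        index = event.get("index", "unknown")          # assumes Logstash adds the site/index name
        now = time.time()
        if bucket_overflows(index, now) and now - alerted.get(index, 0) > 60:
            alerted[index] = now
            requests.get(ALERT_URL, params={"index": index})

The throttling via the alerted map is there because the bucket stays overflowed for as long as the flood lasts, so without it the alert would fire on every incoming packet.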
I would go with the first approach. Or I would even use metrics (megabytes per second) rather than logs.
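For the first approach, a native Watcher watch per site would look roughly like the sketch below; the index name, interval, threshold, webhook target and credentials are placeholders, and the exact API path and the shape of hits.total depend on your Elasticsearch version:

    import requests

    watch = {
        "trigger": {"schedule": {"interval": "10s"}},
        "input": {
            "search": {
                "request": {
                    "indices": ["site-example"],   # the per-site index
                    "body": {
                        "size": 0,
                        "query": {"range": {"@timestamp": {"gte": "now-10s"}}},
                    },
                }
            }
        },
        # On recent versions this may need to be ctx.payload.hits.total.value
        "condition": {"compare": {"ctx.payload.hits.total": {"gt": 10000}}},
        "actions": {
            "notify": {
                "webhook": {
                    "scheme": "https",
                    "method": "get",
                    "host": "alerts.example.com",
                    "port": 443,
                    "path": "/ddos",
                    "params": {"site": "site-example"},
                }
            }
        },
    }

    requests.put(
        "http://localhost:9200/_watcher/watch/ddos-site-example",
        json=watch,
        auth=("elastic", "changeme"),   # placeholder credentials
    )

This means one watch per site (about 500 in total), each running its own size-0 query on its own schedule instead of a central polling loop.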