I'm using collectd. It's fast, simple, light on resources, and reliable. Graphs are generated from RRD files, and the kit includes a simple web application that renders the graphs and runs via cron.
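For the single-host setup described above, a minimal collectd configuration might look like the sketch below. The plugin names are real collectd plugins; the data directory path is illustrative and varies by distro.

```
# Minimal collectd.conf sketch (path is illustrative; adjust to your distro)
LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin rrdtool

<Plugin rrdtool>
  # Where the RRD files are written; the bundled web frontend reads from here
  DataDir "/var/lib/collectd/rrd"
</Plugin>
```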
If there are a lot of servers, as in our case (50+), we install InfluxDB (before that we used Graphite) with a Grafana frontend. That gives collection and processing in real time. If the data needs to be further thinned out or aggregated, we add statsd as well.
In total, for large setups: collectd (per host) - statsd (one or more) - InfluxDB (one or more) - Grafana (one or more).
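The per-host end of that pipeline can be wired up roughly as follows: collectd's `network` plugin ships metrics to InfluxDB, which (in the 1.x series) has a built-in collectd listener. This is a sketch, not a complete config; the hostname is a placeholder and the `types.db` path varies by system.

```
# collectd.conf excerpt on each host: forward metrics over the network
LoadPlugin network
<Plugin network>
  # "influxdb.example.com" is a placeholder for your InfluxDB host
  Server "influxdb.example.com" "25826"
</Plugin>
```

```
# influxdb.conf excerpt (InfluxDB 1.x): accept the collectd protocol
[[collectd]]
  enabled = true
  bind-address = ":25826"
  database = "collectd"
  # collectd's type definitions; path varies by distro
  typesdb = "/usr/share/collectd/types.db"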
For a single server, collectd with its web frontend (collectd-web) is enough.
Nagios, Linux Dash, New Relic. And Uptime Robot for external checks.