C++ / C#
Konstantin, 2020-08-25 19:23:16

What technologies and approaches to use?

Good day ))
I work as a junior developer (C#/.NET) and was given the following task: implement a system for monitoring the performance of the computers on the network.
That is, I need real-time statistics on CPU load, RAM usage, disk load, etc., for the computers on the local network, and if any of the computers suddenly develops a problem, the system should report it.

Only one approach comes to mind from my experience: a service runs on each end computer and writes to a database, and the admin application (WPF) simply reads the database every 1-5 seconds and displays the statistics. The disadvantage I see: the database fills up very quickly, and that is a huge database for such a simple task.
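For context, the per-machine service described above could gather its numbers with `System.Diagnostics.PerformanceCounter` (Windows-only). A minimal sketch, not the asker's actual code; the category, counter, and instance names are the standard Windows ones:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class StatsCollector
{
    static void Main()
    {
        // Standard Windows counters. Note: the first NextValue() call on a
        // rate counter like "% Processor Time" always returns 0, so sample
        // once, wait, then read the real value.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var freeRam = new PerformanceCounter("Memory", "Available MBytes");

        cpu.NextValue();          // prime the counter
        Thread.Sleep(1000);       // one sampling interval

        Console.WriteLine($"CPU: {cpu.NextValue():F1}%  Free RAM: {freeRam.NextValue()} MB");
    }
}
```

This is the piece that runs on each end computer; what changes between the answers below is where those samples go next.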

What would you use to solve this problem? Thank you.

4 answer(s)
Roman Mirilaczvili, 2020-08-26
@nicebmw9

The disadvantages that I see: the database fills up very quickly and this is a super large database for such a simple task.

You should google solutions built around round-robin time-series storage. Such databases have a fixed size.
For example, RRDtool has a .NET binding in its bindings folder.
There are options like this:
  1. Each end device collects and stores statistics locally and serves them on request to the admin application (WPF), which displays them. The advantage is that the monitoring application can be launched from any computer, even from a network folder, with a list of the local network's computers in its settings.
  2. Each end device collects and returns statistics without storing them locally. This is a centralized approach to data collection: a central machine constantly polls the stations and stores the data itself.
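The fixed-size, round-robin property mentioned above can be sketched as a ring buffer: once full, the oldest sample is overwritten, so storage never grows. `MetricRing` is a hypothetical illustrative type, not part of RRDtool or its bindings:

```csharp
using System;

// Fixed-size round-robin store: capacity is set once, and new samples
// overwrite the oldest ones, which is the property RRD-style databases
// give you (the database can never outgrow its allocated size).
public class MetricRing
{
    private readonly double[] _samples;
    private int _next;                 // slot to overwrite next
    public int Count { get; private set; }

    public MetricRing(int capacity)
    {
        _samples = new double[capacity];
    }

    public void Add(double value)
    {
        _samples[_next] = value;
        _next = (_next + 1) % _samples.Length;
        if (Count < _samples.Length) Count++;
    }

    // Average over whatever is currently retained.
    public double Average()
    {
        if (Count == 0) throw new InvalidOperationException("empty");
        double sum = 0;
        for (int i = 0; i < Count; i++) sum += _samples[i];
        return sum / Count;
    }
}
```

With, say, a capacity of 86,400 per metric you keep exactly one day of per-second history per machine, regardless of uptime.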

If a more mature solution is needed, then better to go with what Ivan Shumov pointed out.

Vladimir Korotenko, 2020-08-25
@firedragon

Borrow from Nagios. In fact there are a bunch of options: the server can poll the machines via WMI, or via a dozen other technologies. Or install services on the stations that write metrics, and read them periodically.
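Server-side polling via WMI might look like this. A sketch only: it assumes the `System.Management` NuGet package, a Windows host, and that WMI access (credentials, firewall) to the remote machine is already set up; `machineName` is an example parameter:

```csharp
using System;
using System.Management; // NuGet: System.Management (Windows only)

class WmiCpuPoller
{
    // Queries CPU load on a remote machine through the standard
    // root\cimv2 namespace and the Win32_Processor WMI class.
    public static void PrintCpuLoad(string machineName)
    {
        var scope = new ManagementScope($@"\\{machineName}\root\cimv2");
        scope.Connect();

        var query = new ObjectQuery("SELECT LoadPercentage FROM Win32_Processor");
        using var searcher = new ManagementObjectSearcher(scope, query);

        foreach (ManagementObject cpu in searcher.Get())
        {
            Console.WriteLine($"{machineName}: CPU {cpu["LoadPercentage"]}%");
        }
    }
}
```

The appeal of this agentless approach is that nothing has to be installed on the end computers; the cost is that the central machine carries all the polling load.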

Lapish72, 2020-08-25
@Lapish72

As you noticed yourself, with your approach 10 computers polled at a 1 s interval will send at least 36k requests to the database per hour, or 864k+ rows per day. And that is only if you really need to store super-detailed statistics.
How I would do it:
The data is not sent directly to the database but to an intermediate service, which, for example, analyzes the data over 1 hour or 24 hours and processes it (deletes most of the rows, and moves the rest to another table for less detailed reports). Of the roughly 3600 rows per hour from one computer, you can keep just 4, one 15-minute average each: a 900-fold saving.
UPD:
If we set the database "cleanup" period to 2 h, then the application can show both per-second load (since the full data has not yet been deleted) and only the coarser aggregates for data older than 1 h.

Ivan Shumov, 2020-08-25
@inoise

So munin is no longer fashionable?) Or Nagios, or Zabbix. munin-monitoring.org
