How to monitor multiple parameters in Zabbix through one script call?
I'm studying Zabbix's auto-discovery capabilities and can't understand how the mechanism of prototyping items from arbitrary data works. Everything described here concerns Zabbix version 4.0.
Let's say we have a k8s cluster with an unknown number of nodes - usually two or three. We want to set up Zabbix monitoring of resource usage (CPU and RAM) on these nodes.
Since their exact number is unknown (it can change), we use the low-level discovery mechanism.
We write a simple discovery script:
#!/bin/bash
# Discovery script: print LLD JSON of the form {"data":[{...},{...}]},
# renaming each node's "name" field to the {#NODENAME} macro.
echo '{ "data":['
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq -r '.items[].metadata' | sed 's/"name"/"{#NODENAME}"/g' | sed 's/^}/}\,/g' | sed '$ d'
echo '}] }'
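For reference, the discovery JSON that Zabbix 4.0 expects has roughly this shape (the node names here are hypothetical; the essential part is the {#NODENAME} key in every element of "data"):
{ "data": [
  { "{#NODENAME}": "k8s-node-1" },
  { "{#NODENAME}": "k8s-node-2" }
] }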
And a second script that reports CPU and RAM usage for every node in one call:
#!/bin/bash
# Collection script: for each node print "<node> cpu <CPU%>" and "<node> ram <MEM%>".
kubectl top node --no-headers > /tmp/res.txt
nodes_list=$(awk '{print $1}' /tmp/res.txt)            # node names
nodes_cpu=$(awk '{printf "%i\n", $3}' /tmp/res.txt)    # CPU%, "%" sign stripped
nodes_mem=$(awk '{printf "%i\n", $5}' /tmp/res.txt)    # MEMORY%, "%" sign stripped
j=1
for i in $nodes_list
do
    echo "$i cpu" $(echo $nodes_cpu | awk -v n=$j '{print $n}')
    echo "$i ram" $(echo $nodes_mem | awk -v n=$j '{print $n}')
    j=$((j+1))
done
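On a cluster with two hypothetical nodes the second script prints something like this (names and numbers are illustrative):
k8s-node-1 cpu 12
k8s-node-1 ram 45
k8s-node-2 cpu 8
k8s-node-2 ram 51
The question, then, is how to get these per-node cpu/ram pairs into Zabbix items without running a separate script call for every value.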
In the key field write something like pofigchto[{#NODENAME}] ("pofigchto" is just a placeholder, roughly "whatever" in Russian; instead of it I try to write something that reflects what the item means). In the display name you also need to put {#NODENAME} somewhere, so that when a trigger fires it is immediately clear which of the items went into error.
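As a sketch of that advice (the key name k8s.node.cpu is my own example, not from the answer), an item prototype might look like:
Key:  k8s.node.cpu[{#NODENAME}]
Name: Node {#NODENAME}: CPU usage, %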
By the way, I'd advise collecting JSON right away: if the JSON arrives as text, Zabbix 4.0+ can create dependent items (including prototypes) from individual fields of that JSON using preprocessing, and those items will then already hold the data from which graphs and so on are built.
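A rough sketch of that idea, with a data layout I made up for illustration: the collection script returns one JSON document as the master item's value, and each dependent item prototype extracts its own field with a JSONPath preprocessing step. Two caveats: JSONPath filter expressions like the one below appeared in Zabbix 4.2, and I have not verified that the {#NODENAME} macro is expanded inside preprocessing parameters on 4.0, so treat this purely as a sketch.
Master item value (returned by the script):
{ "nodes": [
  { "name": "k8s-node-1", "cpu": 12, "ram": 45 },
  { "name": "k8s-node-2", "cpu": 8,  "ram": 51 }
] }
Dependent item prototype, preprocessing step "JSONPath":
$.nodes[?(@.name=='{#NODENAME}')].cpu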
1. A filter is needed only if you want to work not with all nodes but only with a subset matching some criterion.
2. On preprocessing and macros, look at how the "Template Module Linux block devices by Zabbix agent" template is built (I'm already on Zabbix 5, but it was the same in 4): it takes the contents of /proc/diskstats, preprocessing turns that text into an array of values, and after discovery finds the block devices it creates items that pick the needed value out of that array by block device name.
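To give a feel for that pattern (a simplified illustration, not the actual template configuration): the master item fetches the raw text once, and each dependent item pulls its own number out of it with a "Regular expression" preprocessing step.
Master item:     vfs.file.contents[/proc/diskstats]      (the whole file as one text value)
Dependent item:  sda: sectors read
Preprocessing:   Regular expression
  pattern:  sda ([0-9]+) ([0-9]+) ([0-9]+)
  output:   \3
Here \3 is the third number after the device name, which in /proc/diskstats is the sectors-read counter; the item names and the choice of field are my own example.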
I haven't used Zabbix, but I use InfluxDB, which goes together with Grafana.
Right now I push some things directly into Influx like this:
resources,nodename=testnode-nodes-8a5qa cpu=5,ram=79
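For context, with InfluxDB 1.x such a line-protocol record can be pushed through the HTTP write API, roughly like this (the URL and the database name monitoring are placeholders):
curl -XPOST 'http://localhost:8086/write?db=monitoring' \
  --data-binary 'resources,nodename=testnode-nodes-8a5qa cpu=5,ram=79'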
Plus, in this case cpu and ram get the same timestamp, so the graphs are more neatly synchronized.
I'm actually curious: does Zabbix support such a model, or does it have a different default flow?