bash
Vitaliy, 2019-09-26 17:58:40

What is the best way to implement an HTTP response monitoring system?

I want to do something like this: every day a script runs that makes HTTP requests to the site's subdomains (dev.site.com, api.site.com, etc.) and saves each response to a file named protocol_date_time (e.g. get_190926_1200). The script then compares the new response with yesterday's and, if there are changes, writes to a log file.

Right now I have this structure:

site1/
    dev.site1.com/
        get_xxx_xx
        post_xxx_xx
    api.site1.com/
        get_xxx_xx  
        post_xxx_xx
    tmp/
    archive/
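The daily fetch pass described above could be sketched like this. The base directory, subdomain list, and URLs are placeholders; only the protocol_date_time naming scheme (e.g. get_190926_1200) is taken from the question:

```shell
#!/bin/sh
# Sketch: fetch each subdomain's response into tmp/ using the
# protocol_date_time naming scheme from the question (get_190926_1200).
# BASE and SUBDOMAINS are assumptions; adjust them to your layout.
BASE="site1"
SUBDOMAINS="dev.site1.com api.site1.com"

response_filename() {
    # $1 = HTTP method in lower case, e.g. "get"
    printf '%s_%s\n' "$1" "$(date +%y%m%d_%H%M)"
}

fetch_all() {
    for host in $SUBDOMAINS; do
        mkdir -p "$BASE/tmp/$host"
        # -s: silent, -S: still report errors; the body goes to the dated file
        curl -sS "https://$host/" -o "$BASE/tmp/$host/$(response_filename get)"
    done
}

# Only fetch when asked explicitly, so the script is side-effect free otherwise.
if [ "${1:-}" = "fetch" ]; then
    fetch_all
fi
```

Run it from cron once a day, e.g. `0 12 * * * /path/to/fetch.sh fetch`.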

New responses are stored in tmp first. Each one is then compared with the previous response: if there are changes, the new response is moved to the appropriate subdomain folder and the old one to the archive folder; if there are no changes, the response is deleted from tmp.
Is there a better, more convenient structure for this? Archive files come from different subdomains, and prefixing each file name with its subdomain makes later checking harder (the file names get long).
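The tmp → compare → promote/archive step described above can be sketched as one small function. The directory names follow the question; everything else (file arguments, log name) is illustrative:

```shell
#!/bin/sh
# Sketch: compare a freshly fetched response in tmp/ with the newest
# stored response for the same subdomain. On change, promote the new
# file, move the old one to archive/, and append to a change log.

promote_if_changed() {
    # $1 = subdomain dir (e.g. site1/dev.site1.com)
    # $2 = new response file sitting in tmp/
    subdir=$1; newfile=$2
    old=$(ls -t "$subdir" 2>/dev/null | head -n 1)  # newest stored response
    if [ -n "$old" ] && cmp -s "$subdir/$old" "$newfile"; then
        rm "$newfile"                               # no change: drop tmp copy
        return 1
    fi
    mkdir -p archive "$subdir"
    if [ -n "$old" ]; then
        mv "$subdir/$old" archive/                  # keep the old version
    fi
    mv "$newfile" "$subdir/"
    echo "$(date '+%F %T') changed: $subdir/$(basename "$newfile")" >> changes.log
    return 0
}
```

`cmp -s` compares byte-for-byte and stays silent, which is all that is needed to decide "changed or not"; `diff` would additionally show what changed, if the log should contain that.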
Perhaps there are some good tools for my task?
Thank you.

2 answer(s)
Dmitry, 2019-09-26
@q2digger

Zabbix has a feature called Web monitoring: it can check the response code and the response time, and you can build complex multi-step scenarios.
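If a full Zabbix install is more than you need, the same two checks (response code and response time) can be sketched with plain curl. The URL, the 200-only success rule, and the 2-second threshold are assumptions:

```shell
#!/bin/sh
# Sketch: check HTTP status code and total response time with curl.
# check_result parses the "%{http_code} %{time_total}" pair that the
# probe below asks curl to print.

check_result() {
    # $1 = "code seconds", e.g. "200 0.134"
    code=${1%% *}
    secs=${1#* }
    # success = status 200 and under 2.0 s (both thresholds are assumptions)
    if [ "$code" = "200" ] && awk "BEGIN { exit !($secs < 2.0) }"; then
        echo "OK ($code in ${secs}s)"
    else
        echo "FAIL ($code in ${secs}s)"
        return 1
    fi
}

probe() {
    # Network call; run manually, e.g.: probe https://dev.site1.com/
    check_result "$(curl -sS -o /dev/null -w '%{http_code} %{time_total}' "$1")"
}
```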

mayton2019, 2019-09-29
@mayton2019

In general, it is not clear why you need tmp/ and archive/; the task can be solved entirely with the structure you already have.
Do you really need to keep the entire history of responses since the beginning of time? Just compare the current response on the fly.
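The "compare on the fly" idea could look like this: keep exactly one stored copy per endpoint, diff the fresh fetch against it, log changes, and overwrite. The `current` filename and log name are assumptions:

```shell
#!/bin/sh
# Sketch of mayton2019's suggestion: no tmp/ or archive/ at all.
# One stored file per endpoint ("current"); a change is detected by
# diffing the fresh fetch against it, then the fresh copy replaces it.

compare_and_store() {
    # $1 = endpoint dir, $2 = file containing the fresh response
    dir=$1; fresh=$2
    mkdir -p "$dir"
    if [ -f "$dir/current" ] && ! diff -q "$dir/current" "$fresh" >/dev/null; then
        echo "$(date '+%F %T') $dir changed" >> changes.log
    fi
    mv "$fresh" "$dir/current"    # always keep only the latest copy
}
```

The log then carries the full change history, while the directory tree stays flat and file names stay short.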
