Task Schedulers
WebDev, 2019-07-31 10:59:18

How to properly track the execution of cron scripts?

In my Laravel project, many scripts are run by cron.
For example, one of them looks roughly like this:

$users = User::all();

foreach ($users as $user) {
    try {
        // Fetch data from the third-party API for this user.
        $json = file_get_contents('vk.api.com/xxx');
        $content = json_decode($json, true);
        usleep(500); // note: usleep() takes microseconds
    } catch (\Exception $e) {
        // Log is my own model (see below), not the framework facade.
        Log::create(['message' => $e->getMessage()]);
        continue;
    }

    if (isset($content)) {
        //do something
        try {
            Post::create([/*...*/]);
        } catch (\Exception $e) {
            Log::create(['message' => $e->getMessage()]);
        }
    }
}

Log is my own class that stores error information in the database; I then review it in the admin panel. That's roughly the setup. Here are the problems I run into:
1. Some errors repeat regularly. I make a request to the API and pass the user id. Values are not returned for every user; for some, the API returns 404 or 403, and that is normal. But those users get checked regularly anyway and clog the log (a rough sketch of what I mean is below, after this list).
2. This is just one script; there are a couple more with completely different logic and completely different errors. Mostly they work with the APIs of various third-party services.
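
A rough sketch of what I mean in point 1 (not code I actually run; the URL mirrors the placeholder above, and parsing the status from $http_response_header is an assumption about how an "expected" 403/404 could be detected inside the foreach loop):

// Inside the foreach ($users as $user) loop; a sketch, not production code.
$json = @file_get_contents('https://vk.api.com/xxx'); // suppress the warning, check the status by hand

// $http_response_header is populated by PHP's HTTP stream wrapper after the call.
$status = 0;
if (isset($http_response_header[0]) && preg_match('/\s(\d{3})\s/', $http_response_header[0], $m)) {
    $status = (int) $m[1];
}

if (in_array($status, [403, 404], true)) {
    continue; // expected for some users: skip (or just count) instead of logging an "error"
}

if ($json === false || $status >= 400) {
    Log::create(['message' => "API request failed with status {$status}"]);
    continue;
}

$content = json_decode($json, true);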
In general, I cannot organize the monitoring of all this properly. Either a case comes up that was not anticipated, or the log turns into a mess of "errors" that are not actually errors but a normal situation.
How do you monitor different cron scripts? Do you have a single log for all of them? Or a separate log for each script? Or is it better to just write all the output to Laravel's standard log?
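
The "log per script" option I am asking about would, as I understand it, look roughly like this with Laravel's log channels (the channel name and path are made up, and this Log is the framework facade, not my Log model):

// config/logging.php: a hypothetical per-script channel
'channels' => [
    // ...existing channels...

    'vk_sync' => [
        'driver' => 'daily',
        'path'   => storage_path('logs/vk_sync.log'),
        'level'  => 'warning',
        'days'   => 14,
    ],
],

// In the cron script (Illuminate\Support\Facades\Log, not my Log model):
Log::channel('vk_sync')->warning('API returned 403', ['user_id' => $user->id]);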


2 answer(s)
1210mk2, 2019-07-31
@1210mk2

What's the difference, and what kind of answer are you expecting?
On one project that handles payments, I write the payments log to a separate file; everything else is poured into the standard Laravel daily log.
On another, which runs a heavy cron data collector, each collector instance writes to its own single file, which is sometimes archived manually.
On a third, a long-running collector where it matters what the script is doing right now, its log shows a tree of the segments (with their properties) that it planned to work on in this iteration, plus a beat for each one, ok / not ok. Using those properties you can quickly write a database query by hand to check for possible hang-ups.
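
A minimal sketch of the "each collector instance into its own file" approach, using Monolog directly (which Laravel ships with); the file naming and the $segments loop are illustrative assumptions, not the actual project code:

use Monolog\Logger;
use Monolog\Handler\StreamHandler;

// One log file per collector run; the naming scheme is just an example.
$runId  = date('Ymd_His');
$logger = new Logger('collector');
$logger->pushHandler(new StreamHandler(storage_path("logs/collector_{$runId}.log"), Logger::INFO));

$logger->info('collector started', ['segments' => count($segments)]);

foreach ($segments as $segment) {
    try {
        // ... the actual work on this segment ...
        $logger->info('segment ok', ['id' => $segment->id]);
    } catch (\Exception $e) {
        $logger->warning('segment failed', ['id' => $segment->id, 'error' => $e->getMessage()]);
    }
}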

Antonio Solo, 2019-08-30
@solotony

I try to put everything in one log so as not to create clutter.
Regarding "clogging the log": if you have something important that needs to be monitored, then that is no longer a log but some kind of event system.
I have a log-writing function which, in addition to the log, also records "system events", i.e. things that are important for me as an administrator to watch, e.g. system errors.
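
A rough sketch of that idea (the logEvent() helper and the SystemEvent model are assumptions for illustration, not the answerer's actual code):

use Illuminate\Support\Facades\Log;

// Everything goes to the single application log; only "important" entries
// also become system events that show up for the administrator.
function logEvent(string $message, array $context = [], bool $important = false): void
{
    Log::info($message, $context);

    if ($important) {
        \App\SystemEvent::create([
            'message' => $message,
            'context' => json_encode($context),
        ]);
    }
}

// Routine API 404s stay in the log; real failures surface as system events.
logEvent('vk api returned 404', ['user_id' => 42]);
logEvent('payment gateway unreachable', ['order_id' => 1001], true);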
