How can I set up round-the-clock, uninterrupted video recording of certain pages of a site (see the description for details)?
Hello, experts!
I need to solve the following problem:
There is a site on two pages of which some information we need is constantly changing (it is not a video stream).
Everything that happens on these pages needs to be recorded (not necessarily the whole screen; in practice only a specific region occupying about 40% of it), with the ability to download any time interval (from an hour up to, at most, about 7 days) to a desktop PC for fast-forward review and analysis, without stopping the recording or losing frames.
The grabber (that seems like a fitting name, right?) must run around the clock without interruptions or failures, i.e. it must be installable and operable on a remote server (VPN, VPS), with control, connection, and downloading of selected parts of the recording done from a stationary PC. The audio stream does not need to be recorded. The grabber must automatically save each recording day under a correct file name in a video format (.avi, .mp4, .wmv, ...) to the recording archive (this is in addition to pulling out any fragment of the current recording day).
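For concreteness, here is a minimal sketch of the kind of capture loop such a grabber might run, assuming Python with Playwright; the URLs, region coordinates, and interval below are placeholders, not taken from the question:

# Hypothetical sketch: periodically screenshot a fixed region of two pages.
# URLS, REGION, and INTERVAL_SECONDS are placeholders.
import time
from datetime import datetime
from playwright.sync_api import sync_playwright

URLS = ["https://example.com/page1", "https://example.com/page2"]  # hypothetical pages
REGION = {"x": 0, "y": 120, "width": 800, "height": 480}  # the ~40% region of interest
INTERVAL_SECONDS = 30

with sync_playwright() as p:
    browser = p.chromium.launch()
    pages = [browser.new_page() for _ in URLS]
    frame = 0
    while True:
        # reload both pages, then grab one PNG of the region per page per tick
        for page, url in zip(pages, URLS):
            page.goto(url)
        day = datetime.now().strftime("%Y-%m-%d")
        for i, page in enumerate(pages):
            page.screenshot(path=f"{day}_p{i}_{frame:06d}.png", clip=REGION)
        frame += 1
        time.sleep(INTERVAL_SECONDS)

Packing a day's frames into an .mp4 under the correct file name could then be a separate nightly job, e.g. with ffmpeg.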
I have already dug through these Internets of yours and found only the following loosely related examples and solutions:
https://github.com/rdp/screen-capture-recorder-to-... — a "screen capture" device and recorder
habrahabr.ru/post/196598 — Photo surveillance, or timelapse video on a Raspberry Pi
habrahabr.ru/post/208788 — Seamless splitting and merging of video using DirectShow
Who has faced such problems?
Is such a grabber something a bit of an IT novice could build? If so, how?
Would anyone take on implementing this project (well, you never know)? Or at least help draw up a competent technical specification for colleagues?
Thanks in advance!
There is no point at all in recording a site into a video file if it is not a video file to begin with. That is like pulling a tooth out through the eye, making sure to stop by the heel along the way.
You just need to parse the site's markup, find the fragment you need, and fetch it on a schedule via cron — with curl, with wget, whatever. Fault tolerance is trivially achieved by running a handful of fetcher instances in parallel.
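A minimal sketch of that approach, assuming Python with requests and BeautifulSoup; the URL and CSS selector below are placeholders, and the script is meant to be invoked by cron:

# Hypothetical sketch: fetch one page fragment and append it to a dated log.
# URL and SELECTOR are placeholders; schedule the script with cron, e.g.:
#   */1 * * * * /usr/bin/python3 /opt/grabber/fetch.py
from datetime import datetime

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/page1"   # hypothetical target page
SELECTOR = "#data"                  # hypothetical selector for the changing fragment

resp = requests.get(URL, timeout=30)
resp.raise_for_status()
fragment = BeautifulSoup(resp.text, "html.parser").select_one(SELECTOR)

if fragment is not None:
    stamp = datetime.now()
    # one file per day, one timestamped line per fetch
    with open(stamp.strftime("grab_%Y-%m-%d.log"), "a", encoding="utf-8") as f:
        f.write(f"{stamp.isoformat()}\t{fragment.get_text(strip=True)}\n")

Running the same script from cron on two or three machines gives the redundancy mentioned above; the logs can be merged and deduplicated by timestamp afterwards.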