How to stream video from IP camera to OpenCV in the cloud?
An IP camera is connected to my home network. There is a cloud server with a GPU that runs an object-recognition script on the video. I need to get the video stream from my camera and feed it into cv2.VideoCapture().
What is the best way to implement the transfer of the stream to the cloud (with the lowest possible delay)?
P.S. I am only superficially familiar with streaming. If I connect locally to my camera's RTSP stream, I get a video lag of 2-3 seconds, which is not really acceptable. So far I have found a solution based on the ImageZMQ library (built on ZMQ, a library for asynchronous message passing in distributed systems). Is it worth digging in this direction, or are there simpler solutions?
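The ImageZMQ approach boils down to serializing each frame and shipping it over a ZMQ socket. Below is a minimal sketch of that pattern using pyzmq directly; the `inproc` address and the PUSH/PULL socket pair are illustrative choices for a self-contained demo (ImageZMQ itself defaults to REQ/REP over TCP), and the frame here is a synthetic stand-in for camera output:

```python
import threading

import numpy as np
import zmq

ctx = zmq.Context.instance()
addr = "inproc://frames"  # in-process transport, just for this demo

# Receiver side: bind first (required for inproc), then block on recv.
pull = ctx.socket(zmq.PULL)
pull.bind(addr)

def send_frame(frame):
    """Sender side: ship metadata as JSON, then the raw pixel bytes."""
    push = ctx.socket(zmq.PUSH)
    push.connect(addr)
    push.send_json({"dtype": str(frame.dtype), "shape": frame.shape})
    push.send(frame.tobytes())
    push.close()

frame = np.full((4, 4, 3), 7, dtype=np.uint8)  # stand-in for a camera frame
threading.Thread(target=send_frame, args=(frame,)).start()

meta = pull.recv_json()
received = np.frombuffer(pull.recv(), dtype=meta["dtype"]).reshape(meta["shape"])
```

In a real deployment the sender would run on a machine in your home network, connect to `tcp://<cloud-host>:<port>`, and push frames grabbed from the camera; the cloud script would bind the receiving socket and hand each decoded frame to the recognizer.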
If you have an ordinary IP camera running the manufacturer's firmware, then RTSP is what you get. I don't think you will be able to pull video any faster without modifying the firmware: by the time the camera has captured a frame, cached it, and served it over RTSP, the delay has already accumulated.
Actually, for IP systems a lag of several seconds is quite normal for transmission over the network. As far as I know, large vendors are now embedding recognition modules directly into the cameras.
Or, the old-fashioned way, the camera feeds its stream to a DVR and the DVR does the processing.
And what's the problem with a delay of a few seconds? Is that not acceptable?
For video transmission over the Internet, 2-3 seconds is an acceptable value; even with all the optimizations you will not get end-to-end latency below about 1 second.
Before discarding RTSP, learn how to use it properly.
1. Check the buffering settings on the receiving side. In VLC, for example, the default network cache is several seconds.
2. The I-frame interval: the H.264 codec setting that controls interframe compression. At a frame rate of 25, shortening the GOP length to 5 speeds things up somewhat, but it increases traffic or degrades video quality at the same bitrate.
3. Use UDP transport instead of TCP.
If you are solving a time-critical task (something industrial, for example), then a CCTV camera is not your best choice: the delay often grows toward 1 second already at the video-processing stage on a weak SoC, when the container stream is being formed. Ilya Efimov said everything correctly about buffers and the I-frame interval. I'll add that there is also a buffer on the camera side, and depending on the richness of the developer's imagination it holds either a queue of B and P frames and so on back to the previous I-frame, or simply a fixed 32 frames, or up to a couple of I-frames back, so that video analytics and quality processing can work without doubling memory use. On some cameras this runs in parallel and does not affect latency; on others it contributes noticeably to the delay, as can motion detection, image enhancement (3DNR and the like should be performed on raw video, but nothing prevents Chinese vendors from cutting costs on the DSP), all kinds of "intelligent video analytics", watermarks and OSD. Altogether this can add up to a second. And do choose UDP: as a rule, you need to add a camera-specific parameter to the RTSP call; these can be googled.
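One more trick that helps regardless of camera-side buffering: read frames in a dedicated thread and keep only the newest one, so the recognizer never processes a stale, queued frame. A sketch; `cap` can be a cv2.VideoCapture or anything with the same `read() -> (ok, frame)` interface (the class name is illustrative):

```python
import threading

class LatestFrame:
    """Continuously drain a capture source, keeping only the newest frame."""

    def __init__(self, cap):
        self.cap = cap
        self.lock = threading.Lock()
        self.frame = None
        self.stopped = False
        self.thread = threading.Thread(target=self._loop, daemon=True)
        self.thread.start()

    def _loop(self):
        while not self.stopped:
            ok, frame = self.cap.read()
            if not ok:
                break
            with self.lock:
                self.frame = frame  # overwrite: older frames are dropped

    def read(self):
        """Return the most recent frame (None if nothing arrived yet)."""
        with self.lock:
            return self.frame

    def stop(self):
        self.stopped = True
        self.thread.join()
```

The slow recognizer then calls `read()` at its own pace and always gets the freshest frame, instead of falling further and further behind a growing queue.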