How to create an IP camera based on a stereo camera connected to a Jetson Nano?
There is a ZED Mini stereo camera, which effectively behaves as a single camera (the frame from the left sensor is joined side by side with the frame from the right sensor, producing a single double-width frame). It is connected to a Jetson Nano, which reads it frame by frame with the ZED SDK library, together with per-pixel distance information, and modifies each successive frame with a shader.
Now the Jetson Nano needs to output a video stream such that, from the outside, the ZED Mini + Jetson Nano system is perceived as an IP camera: a URL from which the shader-modified video shot by the ZED Mini can be read.
I note that the Jetson Nano has a capable NVIDIA GPU that can hardware-encode the video stream as H.264 (H.265 is also possible, but the external device that will receive the stream runs Android, so only H.264 can be used).
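To reach the hardware encoder from your own program, one common route on the Jetson is a GStreamer pipeline: an appsrc accepts raw frames, nvvidconv moves them into GPU memory, and nvv4l2h264enc does the hardware H.264 encoding. Below is a minimal sketch that only assembles the pipeline description string; the element names (nvvidconv, nvv4l2h264enc) are the plugins NVIDIA ships with JetPack, and the exact caps and the UDP sink address here are illustrative assumptions, not a tested configuration:

```python
def build_jetson_h264_pipeline(width, height, fps, bitrate_kbps=4000):
    """Build a GStreamer pipeline description that takes raw BGR frames
    from an appsrc and hardware-encodes them to H.264 on a Jetson.

    nvvidconv / nvv4l2h264enc are NVIDIA's JetPack GStreamer plugins;
    names and properties may differ between JetPack versions.
    """
    return (
        f"appsrc ! video/x-raw,format=BGR,width={width},height={height},"
        f"framerate={fps}/1 "
        "! videoconvert "                 # CPU-side conversion to a format nvvidconv accepts
        "! nvvidconv "                    # copy frames into NVMM (GPU) memory
        "! video/x-raw(memory:NVMM),format=NV12 "
        f"! nvv4l2h264enc bitrate={bitrate_kbps * 1000} "  # hardware H.264 encoder
        "! h264parse "
        "! rtph264pay config-interval=1 pt=96 "  # packetize H.264 into RTP
        "! udpsink host=127.0.0.1 port=5000"     # placeholder sink for testing
    )

print(build_jetson_h264_pipeline(2560, 720, 30))
```

A string like this can be handed to `Gst.parse_launch()` in a Python/C++ program, or (if OpenCV was built with GStreamer support) used as the filename argument of `cv2.VideoWriter` with `cv2.CAP_GSTREAMER`, so the program only has to push one BGR frame per write call.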
Question: where can I find a tutorial or guide for writing a program on Ubuntu myself that turns the device into an IP camera (in C++, or in a pinch in Python)?
I don't understand anything about this area and have been digging through the Internet for a long time looking for descriptions or tutorials on software for homemade IP cameras, and I find nothing at all. The information is either aimed at specialists or about a different topic (for example, the internals of the H.264 standard and its algorithms). I have sample programs written with the ZED SDK (https://github.com/stereolabs/zed-examples/tree/ma... https://github.com/stereolabs/zed-examples/tree/ma...), but I don't understand how they work. I have also been trying to figure out shaders for a long time, without success. Quantum mechanics was easier for me at one time :( I have solid programming experience in 1C:Enterprise 8.2, but this is a different specialty.
I always thought it was not difficult to find information with Google, but on this topic there seems to be some kind of conspiracy. In principle, some experts are ready to help me, but I need to be prepared to take their advice adequately. That's why I'm looking for a textbook or tutorial.
I have a Jetson Nano in hand, and it successfully displays the feed from a Raspberry Pi camera on a monitor. But working with a stereo camera and shaders requires much more knowledge.
Thanks in advance to everyone who responds! If the startup takes off (and once the device "comes to life" it definitely will, since the market has already been tested with a trial version of the hardware), everyone who helped will be able to turn to me for help in return (just refer to this discussion).
PS: I forgot to mention. The "deep root" of my interest is a lack of money, so a lot of things have to be done on my own.
From the stream of consciousness I gathered that you need to assemble a video stream out of a stream of frames. If so, take a look at ffmpeg; it appears to exist for the Jetson Nano. It can output an RTSP stream.
Yes, exactly: "from the stream of frames you need to assemble the video stream." I have just started looking at ffmpeg (it should work on the Jetson Nano). Your clarification about RTSP output is very important; maybe it will solve the problem. Thanks a lot! Off to study it...
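One way the ffmpeg route is often wired up: the program writes raw frames to ffmpeg's stdin, and ffmpeg encodes and publishes an RTSP stream. Note that ffmpeg only *pushes* RTSP; a separate RTSP server (e.g. mediamtx) must already be listening at the target URL. A sketch that only builds the command line (the `h264_nvmpi` encoder is an assumption: it comes from the third-party jetson-ffmpeg patch set, and the URL is a placeholder):

```python
def build_ffmpeg_rtsp_cmd(width, height, fps, url="rtsp://127.0.0.1:8554/zed"):
    """Assemble an ffmpeg command line that reads raw BGR frames from
    stdin and publishes an H.264 RTSP stream to `url`."""
    return [
        "ffmpeg",
        "-f", "rawvideo",          # input: headerless raw frames
        "-pix_fmt", "bgr24",
        "-s", f"{width}x{height}",
        "-r", str(fps),
        "-i", "-",                 # read the frames from stdin
        "-c:v", "h264_nvmpi",      # hardware encoder from the third-party
                                   # jetson-ffmpeg patches (assumption);
                                   # stock ffmpeg would need "libx264"
        "-f", "rtsp",
        url,                       # an RTSP server must be listening here
    ]

print(" ".join(build_ffmpeg_rtsp_cmd(2560, 720, 30)))
```

In the main loop the program would then do something like `proc = subprocess.Popen(cmd, stdin=subprocess.PIPE)` once, and `proc.stdin.write(frame.tobytes())` for every shader-modified frame.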
However, on the page https://github.com/131/h264-live-player I found the statement that "There is no solution for 'real time' mp4 video creation / playback (ffmpeg, mp4box.js, mp4parser - boxing takes time)". I read "boxing takes time" as "packaging (muxing into MP4) takes time", i.e. it adds latency. For my project a delay above 0.3, at most 0.5 seconds, is unacceptable. To avoid that latency, the author of that page offers his own solution for the "Raspberry Pi + Raspberry Pi camera" system, which I will also try to understand (though I am unlikely to succeed). On the other hand, the Jetson Nano (unlike the Raspberry Pi) supports hardware encoding, which ffmpeg should (in theory) be able to use, so the delay should not come from the encoding itself.
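It is also worth noting that the "boxing takes time" objection is about muxing into an MP4 container; RTSP/RTP streams raw H.264 NAL units and does no MP4 boxing at all. A rough back-of-the-envelope check against the 0.3-0.5 s budget (every figure here is an illustrative assumption, not a measurement):

```python
# Rough latency budget check. RTSP/RTP avoids MP4 "boxing" entirely,
# so only capture, encode, network, and decode delays remain.
fps = 30
frame_ms = 1000 / fps            # ~33.3 ms between frames at 30 fps

capture_ms = frame_ms            # one frame interval to grab a frame
encode_ms = 2 * frame_ms         # assume the hardware encoder buffers ~2 frames
network_ms = 20                  # assumed LAN transfer delay
decode_ms = 2 * frame_ms         # assume the receiver buffers ~2 frames

total_ms = capture_ms + encode_ms + network_ms + decode_ms
print(round(total_ms))           # ~187 ms, inside a 300-500 ms budget
```

Under these assumptions the glass-to-glass delay lands well under 0.3 s; in practice the receiver's playback buffer (often configurable) tends to dominate.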