Programming
ixon, 2018-08-08 15:49:14

How do video editors work?

What path does an abstraction take before becoming a video file?
Searching for an answer to this question usually turns up only information about how one video format is converted to another. But what about videos that don't come from other videos?
Let's say we have some data about geometric primitives, their color, their location at some key moments. For example,

A blue square of 20 x 20 pixels sits 10 pixels from the left edge and 25 pixels from the top; after 5 seconds it changes its position, moving down by 6 pixels and to the right by 12.
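
To make the example concrete: data like this could be written down as keyframes, i.e. a timestamp plus the property values at that moment, with positions in between interpolated. A minimal sketch of that idea in C++ follows; the struct layout, the linear interpolation and the 25 fps sampling are illustrative assumptions, not any particular editor's actual format.

    #include <cstdio>

    // Hypothetical keyframe: a point in time plus the square's position then.
    struct Keyframe {
        double time; // seconds
        double x, y; // pixels from the top-left corner
    };

    // Linear interpolation between two keyframes at time t.
    Keyframe lerp(const Keyframe& a, const Keyframe& b, double t) {
        double u = (t - a.time) / (b.time - a.time);
        return { t, a.x + (b.x - a.x) * u, a.y + (b.y - a.y) * u };
    }

    int main() {
        // The blue square from the example: at (10, 25) initially,
        // 12 px right and 6 px down after 5 seconds.
        Keyframe start{0.0, 10.0, 25.0};
        Keyframe end{5.0, 22.0, 31.0};

        // Sample the position once per frame at 25 frames per second.
        for (int frame = 0; frame <= 5 * 25; ++frame) {
            Keyframe k = lerp(start, end, frame / 25.0);
            printf("t=%.2fs pos=(%.1f, %.1f)\n", k.time, k.x, k.y);
        }
    }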

How is such data usually recorded in preparation for creating frames? What libraries are used to draw them pixel by pixel and then join all the frames together? What programming languages are most often used for this?

2 answer(s)
Maxim Grishin, 2018-08-08
@ixon

In short, this is called rendering. It comes in 2D and 3D variants: in 2D, primitives move directly across the screen area; in 3D, a three-dimensional scene is drawn. For 3D rendering, special software is used: Maya, 3DS Max and a few others. Scene data is encoded in specific formats, with motion data stored separately, textures separately, and materials separately ("blue" - what does that mean? Shiny or not? Rough or not, and if rough, how rough? These are all properties of the material). The software then gathers all this information about the scene and starts ray tracing: for each pixel (or subpixel, i.e. several rays per pixel) it follows a ray from the viewer, checking what it hits, where it is reflected, where it is refracted, where it flies off to, along with other effects like bump mapping, ambient lighting and many other things that can technically be present in the scene.
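The per-pixel loop at the heart of a ray tracer can be sketched in a few lines of C++. Everything here - the single-sphere scene, the camera model, the lack of reflection and refraction - is a deliberate simplification for illustration, not what Maya or 3DS Max actually do internally.

    #include <cmath>
    #include <cstdio>

    struct Vec { double x, y, z; };
    Vec operator-(Vec a, Vec b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Does a ray from `origin` along `dir` hit a sphere at `center`?
    bool hitsSphere(Vec origin, Vec dir, Vec center, double radius) {
        Vec oc = origin - center;
        double b = 2.0 * dot(oc, dir);
        double c = dot(oc, oc) - radius * radius;
        // Quadratic discriminant: non-negative means the ray meets the sphere.
        return b * b - 4.0 * dot(dir, dir) * c >= 0.0;
    }

    int main() {
        const int w = 320, h = 240;
        printf("P3\n%d %d\n255\n", w, h); // PPM image header
        for (int py = 0; py < h; ++py) {
            for (int px = 0; px < w; ++px) {
                // One ray per pixel, from the eye at the origin through
                // the image plane at z = -1.
                Vec dir{(px - w / 2.0) / h, (h / 2.0 - py) / h, -1.0};
                bool hit = hitsSphere({0, 0, 0}, dir, {0, 0, -3}, 1.0);
                // Blue where the sphere is hit, white background.
                printf(hit ? "0 0 255 " : "255 255 255 ");
            }
            printf("\n");
        }
    }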
If we are talking about game rendering, the approach changes to rendering polygons instead of ray tracing - tracing is too expensive, and the frame rate has to be kept up. Special effects, lighting, bump mapping and the like are implemented as shaders in GLSL/HLSL or something more advanced; the scene is encoded in BSP or another format specific to the game engine (BSP is the "grandfather" of three-dimensional scene formats, used back in Quake 1); objects inside the scene have their own structures in the same format; and OpenGL or Direct3D/DirectX is used for drawing, with, at the lower level, the same old C++.
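The core step of polygon rendering is projecting 3D vertices onto the 2D screen and then filling in the resulting triangles; GPUs and vertex shaders do this in hardware, but the projection itself is simple enough to sketch. The focal length and struct names below are illustrative assumptions.

    #include <cstdio>

    struct Vertex { double x, y, z; }; // camera space; z < 0 is in front

    // Perspective projection: divide by depth so distant points shrink
    // toward the center of the screen. This is essentially what a vertex
    // shader's projection matrix boils down to.
    void project(Vertex v, int w, int h, double& sx, double& sy) {
        const double focal = 1.0; // assumed focal length of the virtual camera
        sx = w / 2.0 + (v.x * focal / -v.z) * h;
        sy = h / 2.0 - (v.y * focal / -v.z) * h;
    }

    int main() {
        // One triangle in front of the camera; note how the farther
        // vertex lands closer to the screen center.
        Vertex tri[3] = {{-1, 0, -3}, {1, 0, -3}, {0, 1, -6}};
        for (const Vertex& v : tri) {
            double sx, sy;
            project(v, 640, 480, sx, sy);
            printf("(%g, %g, %g) -> screen (%.1f, %.1f)\n", v.x, v.y, v.z, sx, sy);
        }
    }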
And if we are talking about how the video file itself is assembled afterwards, it is put together from static pictures, like a cartoon, using some kind of video codec.
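As a sketch of that last step, one common approach is to write each frame out as an image and hand the sequence to an encoder. The example below animates the blue square from the question and then shells out to the ffmpeg command-line tool (assumed to be installed) to encode the frames with H.264; the file names and frame rate are arbitrary choices.

    #include <cstdio>
    #include <cstdlib>

    // Write one 320x240 PPM frame with a blue 20x20 square at (x, y).
    void writeFrame(int index, int x, int y) {
        char name[32];
        snprintf(name, sizeof name, "frame%04d.ppm", index);
        FILE* f = fopen(name, "w");
        fprintf(f, "P3\n320 240\n255\n");
        for (int py = 0; py < 240; ++py)
            for (int px = 0; px < 320; ++px) {
                bool inside = px >= x && px < x + 20 && py >= y && py < y + 20;
                fprintf(f, inside ? "0 0 255 " : "255 255 255 ");
            }
        fclose(f);
    }

    int main() {
        // Animate the square from (10, 25) to (22, 31) over 5 seconds
        // at 25 frames per second, one picture per frame.
        for (int i = 0; i <= 125; ++i) {
            writeFrame(i, 10 + 12 * i / 125, 25 + 6 * i / 125);
        }
        // Let a codec pack the numbered frames into a video container.
        return system("ffmpeg -framerate 25 -i frame%04d.ppm -c:v libx264 out.mp4");
    }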
