3D
Daniel, 2018-11-06 21:17:38

How are lighting and shadows calculated by 3D engines?

For example: a map of a kilometer by a kilometer, 100 objects on it, and one light source somewhere within a 100 by 100 meter area of the map (which contains 10 objects in total). How does the engine calculate the lighting and the visibility of an object (whether it is blocked) with respect to this light source, while skipping all the other irrelevant objects?
I know about ray-casting, but I don't understand how the rays that determine visibility can know that there is an object at a given point in space, unless every one of the 100 objects is checked from every side.
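To make the question concrete, here is a minimal sketch of the brute-force shadow-ray test being described, with objects approximated as bounding spheres (the approximation and all names are illustrative, not from any engine):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Objects approximated as bounding spheres for the visibility test.
struct Sphere { Vec3 center; float radius; };

// Naive O(N) occlusion test: a shadow ray from `point` toward `light`
// is checked against every object on the map, which is exactly the
// cost the question is asking how engines avoid.
bool isLit(Vec3 point, Vec3 light, const std::vector<Sphere>& objects) {
    Vec3 dir = sub(light, point);
    float maxT = std::sqrt(dot(dir, dir));
    Vec3 d = {dir.x / maxT, dir.y / maxT, dir.z / maxT};
    for (const Sphere& s : objects) {            // every object is tested
        Vec3 oc = sub(point, s.center);
        float b = dot(oc, d);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0) continue;                  // ray misses this sphere
        float t = -b - std::sqrt(disc);
        if (t > 0 && t < maxT) return false;     // blocker between point and light
    }
    return true;
}
```

Every call walks all N objects; the answers below explain how engines avoid that linear cost.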


3 answers
Stanislav Makarov, 2018-11-06
@daniil14056

"…this light source, while skipping all the other irrelevant objects?"

Background reading: https://ru.wikipedia.org/wiki/%D0%94%D0%B2%D0%BE%D... . This is not so much about shadows as about the question of skipping the "irrelevant objects".
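The link is truncated, but it appears to point at the Russian Wikipedia article on binary space partitioning. A minimal sketch of that idea, with all structure and names illustrative rather than taken from the article:

```cpp
#include <memory>
#include <vector>

// Minimal sketch of binary space partitioning: each node splits space
// by an axis-aligned plane; a query only descends into the half-spaces
// it actually touches, skipping everything stored on the other side.
struct BspNode {
    int axis;                        // 0 = x, 1 = y, 2 = z
    float split;                     // position of the splitting plane
    std::vector<int> objectIds;      // objects stored at a leaf
    std::unique_ptr<BspNode> front, back;
};

// Collect the objects a point of interest (e.g. a shadow-ray origin)
// with a search radius can possibly overlap.
void query(const BspNode* node, const float pos[3], float radius,
           std::vector<int>& out) {
    if (!node) return;
    if (!node->front && !node->back) {           // leaf: report contents
        out.insert(out.end(), node->objectIds.begin(), node->objectIds.end());
        return;
    }
    float d = pos[node->axis] - node->split;
    if (d > -radius) query(node->front.get(), pos, radius, out); // touches front side
    if (d <  radius) query(node->back.get(),  pos, radius, out); // touches back side
}
```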

rPman, 2018-11-06
@rPman

The method can be the same one the camera uses to determine an object's visibility: it is enough to run the same algorithm for each light source (taking the beam's width and direction into account, up to the entire sphere for a point light). Of course, there are static objects and dynamic ones, including the light sources themselves; lighting for static ones is generally calculated in advance, at the map-generation stage.
Separately, a bitmap of the object's shadow is built and then overlaid on the textures it falls on, away from the light: literally by applying translucent textures or, if shaders allow it, by regenerating those textures on the fly.
At this stage you can reduce the detail of the shadow: instead of building a detailed projection of the object, you can take its collision volume (it is usually an order of magnitude simpler, and engines try to fit the object into it snugly).
You can also optimize locally, including at the map-design stage, by limiting the number of lights that cast shadows on objects in specific areas: you are unlikely to care about a light source 100 km from an object, and at the very least its shadow can be simplified.
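A minimal sketch of that last point, per-object light culling by range (all names are illustrative; a real engine would typically do this per area or per cluster):

```cpp
#include <vector>

struct Light  { float x, y, z, range; };
struct Object { float x, y, z; };

// For each object, keep only the lights close enough to matter, so
// distant sources never enter the per-frame shadow pass at all.
std::vector<int> relevantLights(const Object& obj,
                                const std::vector<Light>& lights) {
    std::vector<int> result;
    for (int i = 0; i < (int)lights.size(); ++i) {
        const Light& l = lights[i];
        float dx = obj.x - l.x, dy = obj.y - l.y, dz = obj.z - l.z;
        // A light contributes only within its range; anything farther
        // away is skipped (or could fall back to a precomputed lightmap).
        if (dx * dx + dy * dy + dz * dz <= l.range * l.range)
            result.push_back(i);
    }
    return result;
}
```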

Deniskin Rediskin, 2018-11-07
@DenVdmj

First, everything that does not fit into the frame is discarded. As a rule, the static world and all its objects are represented as a tree of bounding volumes (each volume subdivided into eight children), in which case the search is quite fast (binary, or rather, in this case, octal). Dynamic objects can be stored in a separate tree that is rebuilt as they move, or the main tree can be updated (as a rule, objects only migrate to neighboring nodes, which is quite fast).
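A minimal sketch of such a tree of bounding volumes, with the frustum test swapped for a simpler box/sphere overlap so the example stays self-contained (all names are illustrative):

```cpp
#include <algorithm>
#include <array>
#include <memory>
#include <vector>

struct Aabb { float min[3], max[3]; };

// One node of the tree of bounding volumes: a box, its eight children,
// and the objects stored at this level.
struct OctreeNode {
    Aabb bounds;
    std::vector<int> objectIds;
    std::array<std::unique_ptr<OctreeNode>, 8> children;
};

// Box/sphere overlap test (the sphere stands in for a camera frustum or
// a light's radius of influence to keep the sketch self-contained).
bool overlaps(const Aabb& b, const float c[3], float r) {
    float d2 = 0;
    for (int i = 0; i < 3; ++i) {
        float p = std::max(b.min[i], std::min(c[i], b.max[i])); // closest point on box
        d2 += (p - c[i]) * (p - c[i]);
    }
    return d2 <= r * r;
}

// A rejected node discards its whole subtree at once: this is the
// "octal search" that skips the irrelevant parts of the map.
void collect(const OctreeNode* n, const float c[3], float r,
             std::vector<int>& out) {
    if (!n || !overlaps(n->bounds, c, r)) return;
    out.insert(out.end(), n->objectIds.begin(), n->objectIds.end());
    for (const auto& child : n->children) collect(child.get(), c, r, out);
}
```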
The shadows themselves: the scene is rendered from the position of the light source, but the shader code does not draw a color image, since none is needed; only the scene's depth buffer as seen from the light is kept. That depth buffer is then used as a texture projected onto all objects in the scene, and the rendering shader reads it to perform a shadowing test for each point. Because we decide ourselves how a point is drawn in the pixel shader, this hardware method, in its many variants, is still the most common one today.
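A minimal CPU-side sketch of that depth-comparison test, assuming an orthographic (directional) light so the projection details can be omitted; the small bias is the standard guard against self-shadowing ("shadow acne"). All names are illustrative:

```cpp
#include <vector>

// Pass 1 produces this: the scene's depth as seen from the light.
struct ShadowMap {
    int size;
    std::vector<float> depth;                    // size * size texels
    float at(int x, int y) const { return depth[y * size + x]; }
};

// Pass 2, per shaded point: project the point into light space
// (u, v in [0,1], plus its depth from the light) and compare with
// what the light saw first along that ray.
bool inShadow(const ShadowMap& sm, float u, float v, float depthFromLight,
              float bias = 0.005f) {
    int x = (int)(u * (sm.size - 1));
    int y = (int)(v * (sm.size - 1));
    // If something nearer to the light was recorded here, we are occluded.
    return depthFromLight - bias > sm.at(x, y);
}
```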
But those are direct shadows from a light source. Besides them, there is also global illumination (ambient shading), e.g. SSAO, implemented by analyzing the depth map of the current frame (in 2D screen space, which is quite fast).
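A rough sketch of the screen-space idea behind SSAO: occlusion is estimated only from the frame's depth buffer by comparing each pixel against nearby samples. Real implementations sample a hemisphere oriented by the surface normal; this flat-kernel version (all names illustrative) just shows why the technique is cheap:

```cpp
#include <algorithm>
#include <vector>

// Estimate ambient occlusion at pixel (px, py) from the depth buffer
// alone: the more nearby samples that are closer to the camera than
// this pixel, the more occluded (darker) we consider it.
float ssaoAt(const std::vector<float>& depth, int w, int h,
             int px, int py, int radius = 4) {
    float center = depth[py * w + px];
    int occluded = 0, total = 0;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            int x = std::clamp(px + dx, 0, w - 1);
            int y = std::clamp(py + dy, 0, h - 1);
            ++total;
            if (depth[y * w + x] < center - 0.01f) ++occluded; // neighbor is nearer
        }
    return 1.0f - (float)occluded / (float)total; // 1 = fully lit, 0 = occluded
}
```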
For more detail, you can google along these lines:
for the first point: octree scene graphs, frustum culling, occlusion culling;
for lighting: shadow buffers, SSAO.
