Java
sanex3339, 2016-01-15 21:40:55

RayTracing - what should be the formula for correct surface shading at different angles?

Good evening. I am writing a PathTracer and have run into a problem:
to calculate the shadows of direct lighting, I compute the illumination for each pixel with the formula:

return emissionColor // RGB(255, 255, 255)
                .scale(lightPower - rayLine.getLength() * (lightPower / fadeRadius)) // attenuation with increasing distance
                .scale(lambertCos); // 0 - 1

In this code:
  • lightPower - the power of the light source (2 in the example)
  • rayLine.getLength() - the length of the ray from the point being shaded to a random point on the light source (for soft shadows)
  • fadeRadius - the distance at which the light from the source fades out completely

lambertCos is calculated with the formula:
0 - Vector.dot( // the leading 0 is deliberate, so that a negative result is noticeable
        lightDirection, // normalized vector from the point being shaded to the light source
        intersection.getNormal() // normal at the point being shaded
);
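
For clarity, the whole direct-lighting expression could be packaged like this (just a sketch; Vector, RGBColor and scale() are assumed to behave like in the snippets above, and the clamping to zero is my own addition):

static RGBColor directLightSample(RGBColor emissionColor, double lightPower, double fadeRadius,
                                  Vector lightDirection, // normalized, from the shaded point to the light sample
                                  Vector surfaceNormal,  // intersection.getNormal()
                                  double rayLength) {    // rayLine.getLength()
    // linear attenuation: full power at distance 0, zero at fadeRadius
    double falloff = Math.max(0.0, lightPower - rayLength * (lightPower / fadeRadius));
    // Lambert term, same sign convention as above, clamped at zero
    double lambertCos = Math.max(0.0, 0 - Vector.dot(lightDirection, surfaceNormal));
    return emissionColor.scale(falloff).scale(lambertCos);
}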

Let's take a scene: a cube whose upper face is a light source.
With the formulas above, the cube is illuminated correctly.
Now take that upper face, shrink it by a factor of 2 in width and length and lower it, for example, to the middle of the cube.
A flaw appears, visible in the image: an absolutely sharp transition from shadow to light at points located close to the plane of the light source.
0a5541a7ac.jpg
To eliminate this, we additionally multiply the illumination by the cosine of the angle between the direction of the ray from the point being shaded and the normal at the light source:
surfaceCos = Vector.dot(
        lightDirection,
        shadowRay.getNormal() // normal at the point where the shadow ray hits the light source
);

return emissionColor // RGB(255, 255, 255)
                .scale(lightPower - rayLine.getLength() * (lightPower / fadeRadius)) // attenuation with increasing distance
                .scale(lambertCos) // 0 - 1
                .scale(surfaceCos); // 0 - 1
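
Put together, the per-sample weight now looks roughly like this (again only a sketch with the same assumed helpers; the clamping to zero is mine):

double falloff    = Math.max(0.0, lightPower - rayLine.getLength() * (lightPower / fadeRadius));
double lambertCos = Math.max(0.0, 0 - Vector.dot(lightDirection, intersection.getNormal()));
double surfaceCos = Math.max(0.0, Vector.dot(lightDirection, shadowRay.getNormal()));
RGBColor sample   = emissionColor.scale(falloff).scale(lambertCos).scale(surfaceCos);

surfaceCos is largest when the light sample faces the shaded point head-on and falls towards zero at grazing angles, which is what produces the shading under the ceiling described below.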

The result is close to what is needed; it looks about 90 percent like this picture
cornellBox_matteAndGloss.png
The picture shows that if the light source does not occupy the entire ceiling, it gives slight shading on the walls just below the ceiling.
Now for the problem:
let's stretch the light source back to the size of the entire ceiling. The shading under the light source still remains, because the rays in the shaded areas are still at a steep angle to the normal of the light source.
Here's what it looks like (top left)
f7033290c1.jpg
I tried to tie surfaceCos to the distance between the point on the surface and the point on the light source:
surfaceCos = 1 - (surfaceCos * (rayLine.getDist() * fadeRadius))

but for some reason the result is still wrong. In the last picture there should be no shadows on the red wall under the light source, while on the white wall they should start out sharp and become more and more blurred as the light source moves away from the wall (narrows) and lowers.
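
If the behaviour I want is "no influence of surfaceCos right next to the light source, full influence towards fadeRadius", then maybe a plain interpolation expresses it better; this is only a guess, I have not verified it:

double t = Math.min(1.0, rayLine.getLength() / fadeRadius); // 0 right at the light source, 1 at fadeRadius
double blendedCos = (1.0 - t) + t * surfaceCos;             // 1 near the light, surfaceCos far away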
In general, does anyone have any ideas about my problem?


1 answer(s)
sanex3339, 2016-01-16
@sanex3339

Well, I more or less fixed it.
It turned out that my final image is assembled from several passes (per pixel):
a surface color pass, a black-and-white direct lighting mask pass, a direct lighting pass (the first two combined), an ambient occlusion pass and a global illumination pass.
When computing the global illumination pass, I recursively take the global illumination data from the subsequent iterations.
And that is where the bug was: if the point whose illumination was being calculated was itself a light source, I simply returned the emission color, but I also had to assign the corresponding colors to the pass variables, because the data of each pass is used by the previous iterations:

// the point is itself a light source: fill in every pass, not just the return value
this.surfaceColor = intersection.getOwner().getMaterial().getEmission().getEmissionColor();
this.lightSamplingColor = RGBColor.WHITE;
this.ambientOcclusionColor = this.lightSamplingColor;
this.globalIlluminationColor = this.surfaceColor;

return this.surfaceColor;

Here are the resulting renders with the same settings.
Now I would like to figure out the strange streaks on the spheres that have been there since the very first versions of the PathTracer.
