Daniel, 2020-04-30 18:34:45

How to implement texturing?

I'm studying 3D graphics and implementing everything from scratch, and I've run into a question I can't resolve: how do texturing and vertex-to-texture binding actually work? In particular, it's unclear how to efficiently connect a face to a texture region under affine transformations.
I don't even know where to start, and I can't find information. During implementation, questions keep coming up that hint something is wrong.
Some immediate questions:
Say the texture is square, with the upper half white and the lower half black: if the whole texture ends up covering a single pixel on screen, what colour should that pixel be?
In my first implementation, a texture that was entirely black except for a thin one-pixel white line came out white, not black, whenever the polygon was transformed into a line (rotated 90 degrees edge-on) or shrunk down to a single pixel.

Next question: a model consists of 1000 visible polygons but is very far away and occupies only 10 pixels on screen. How do you process that efficiently, given that roughly 100 triangles will map to a single pixel?

In general, the algorithm itself is not clear at all.

1 answer
Alexander Pavlyuk, 2020-05-01
@pav5000

You need to read up on texture coordinates, mipmapping (MIP-texturing), and texture filtering.
Very simplified: each vertex of the triangle carries a pair of texture coordinates (u and v), like x and y but for the texture. They say where in the texture image this vertex sits. These coordinates usually range from 0 to 1, where 0 is the start of the image and 1 is its full width or height (depending on whether it's u or v).
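As a sketch of how per-vertex (u, v) pairs get mapped to individual pixels, here is a minimal barycentric interpolation over a screen-space triangle. All the names (`barycentric`, `point_uv`, the sample triangle) are illustrative, not from any library; note this plain affine interpolation is only correct for affine transforms, and under perspective projection you would interpolate u/w, v/w, 1/w and divide back.

```python
# Sketch: interpolating per-vertex UVs with barycentric weights.
# Hypothetical helpers, not any real API.

def barycentric(p, a, b, c):
    """Barycentric weights of point p with respect to triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    den = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w0 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / den
    w1 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / den
    return w0, w1, 1.0 - w0 - w1

def point_uv(p, verts, uvs):
    """Interpolate the (u, v) of screen point p inside the triangle."""
    w0, w1, w2 = barycentric(p, *verts)
    u = w0 * uvs[0][0] + w1 * uvs[1][0] + w2 * uvs[2][0]
    v = w0 * uvs[0][1] + w1 * uvs[1][1] + w2 * uvs[2][1]
    return u, v

verts = [(0, 0), (10, 0), (0, 10)]             # screen-space triangle
uvs   = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # per-vertex UVs
print(point_uv((5, 5), verts, uvs))  # midpoint of the hypotenuse -> (0.5, 0.5)
```

A real rasterizer would compute these weights incrementally per scanline rather than from scratch per pixel, but the result is the same.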
Then, when rasterizing the triangle, these coordinates are interpolated for every rendered pixel. After interpolation we get the coordinates of the point in the texture image that corresponds to that location on the rendered triangle. This is where mipmaps come into play (they are just our texture at different resolutions): for example, if the source texture is 512x512, its mipmaps will be 256x256, 128x128, 64x64, and so on down to 1x1. Which mip level to sample is determined by the distance from the camera to the given point of the triangle: the farther the point, the lower-resolution the mipmap, as an optimization.
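A mipmap chain can be sketched by repeatedly averaging 2x2 texel blocks. This also answers the question about the black texture with a one-pixel white line: at the coarsest level the white gets averaged in, so a polygon shrunk to one pixel samples the true mean instead of landing exactly on the white texels. Function names are illustrative; greyscale values in [0, 1] are assumed.

```python
# Sketch: building a mipmap chain for a square power-of-two greyscale
# texture by 2x2 box averaging (hypothetical helpers, not a real API).

def downsample(tex):
    """Halve a square power-of-two texture by averaging 2x2 blocks."""
    n = len(tex) // 2
    return [[(tex[2*y][2*x] + tex[2*y][2*x+1] +
              tex[2*y+1][2*x] + tex[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mipmaps(tex):
    """Full chain from the source level down to 1x1."""
    levels = [tex]
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

# 4x4 black texture with a one-pixel white row
tex = [[0.0]*4, [1.0]*4, [0.0]*4, [0.0]*4]
mips = build_mipmaps(tex)
print(mips[-1][0][0])  # 1x1 level holds the true average: 0.25
```

Production rasterizers pick the level (and often blend two adjacent levels, "trilinear") from the screen-space derivative of the texture coordinates, but distance from the camera gives the same intuition.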
Once a point is chosen, texture filtering comes into play; there are several different algorithms. Their goal is to smooth transitions between texture pixels (texels): they decide which colour to pick when the sample lands not exactly on a texel but between texels (which happens almost always). The simplest is nearest, which just takes the colour of the nearest texel.
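The two most common filters can be sketched as follows: nearest snaps to one texel, bilinear blends the four surrounding texels. The addressing conventions here (texel centers at integer + 0.5, clamp at the edges) are one common choice, not the only one, and the helper names are illustrative.

```python
# Sketch: nearest vs bilinear sampling of a greyscale texture
# addressed by (u, v) in [0, 1]. Hypothetical helpers.

def sample_nearest(tex, u, v):
    h, w = len(tex), len(tex[0])
    x = min(w - 1, max(0, int(u * w)))
    y = min(h - 1, max(0, int(v * h)))
    return tex[y][x]

def sample_bilinear(tex, u, v):
    h, w = len(tex), len(tex[0])
    # shift so texel centers sit at integer coordinates
    x, y = u * w - 0.5, v * h - 0.5
    x0, y0 = int(x // 1), int(y // 1)   # floor
    fx, fy = x - x0, y - y0
    def at(xi, yi):  # clamped fetch
        return tex[min(h - 1, max(0, yi))][min(w - 1, max(0, xi))]
    top = at(x0, y0)     * (1 - fx) + at(x0 + 1, y0)     * fx
    bot = at(x0, y0 + 1) * (1 - fx) + at(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy

tex = [[0.0, 1.0], [0.0, 1.0]]         # left column black, right white
print(sample_nearest(tex, 0.9, 0.5))   # 1.0, snaps to the white texel
print(sample_bilinear(tex, 0.5, 0.5))  # 0.5, halfway between texels
```

Nearest is what produced the questioner's all-white degenerate polygon: a single sample can land on the white line. Bilinear alone does not fix heavy minification either; that is exactly what the mipmaps above are for.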
