OpenGL Programming/Glescraft 3



= Introduction =

The voxels from the previous two tutorials look rather plain: they all have the same uniform color, and without any lighting it is hard to tell individual voxels apart. In this tutorial, we will see how we can give the voxels distinct textures, using only the one texture coordinate we have. Furthermore, we will tweak the fragment shader in small ways to get a very big improvement in the lighting of the scene.

= Texturing =

When we want to put a texture on our voxels, we do not want to give all the voxels in a chunk the same texture. However, if we render all the triangles in the chunk in one go, we cannot switch textures between voxels. So, we will have to use a texture atlas to store all the images we want to paint on the voxels in a single OpenGL texture. We have only one texture coordinate available, and it can only have 256 possible values. It would be nice if we could use it to point to one of 256 possible subimages in the texture atlas.

Suppose we have a texture atlas with 16 subimages. All subimages have the same size (SW x SH), but the exact size does not matter. The texture atlas will have all the subimages in a single row, so the texture atlas will be (SW * 16) x SH pixels. The <tt>blk[x][y][z]</tt> array should now contain values in the range 0 to 15. In the fragment shader, we now have to create real texture coordinates from the <tt>varying vec4 texcoord</tt> that we got from the vertex shader. Clearly, our integer value from 0 to 15 is not enough. However, we do have the x, y, and z coordinates to play with. Since these are "varying", they will not contain the integer values from the VBO, but can have any value between the integer values, depending on how far between the vertices the fragment is. In particular, if we draw a quad from (0, 0, 0) to (1, 1, 0), the z coordinate will be 0 everywhere, but the x and y coordinates can take any value in between. To texture this quad, we could use the following fragment shader:
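A minimal sketch of such a shader, assuming the texture atlas is bound to a <tt>sampler2D</tt> uniform named <tt>texture</tt>:

```glsl
varying vec4 texcoord;
uniform sampler2D texture;

void main(void) {
  // w selects the subimage; x and y address the pixels within it.
  // Each subimage occupies a 1/16 wide column of the atlas.
  gl_FragColor = texture2D(texture, vec2((texcoord.x + texcoord.w) / 16.0, texcoord.y));
}
```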

The division by 16 comes from the fact that we have 16 subimages in a row in our texture. Also, since the w coordinate is the same for all the vertices in the quad, it will have a constant value in the fragment shader. Remember that <tt>texcoord</tt> is just a copy of the <tt>coord</tt> vertex attribute before the MVP matrix has been applied, so the texture will not be affected by any change in the MVP.

Of course, you will have already noticed that this shader does not work for quads with most other possible coordinates. First, to address any quad with corners (x, y, *) to (x + 1, y + 1, *), where x and y are integer coordinates, and * means any possible z, we can use the fract function to map the x coordinates back to the range 0 to 1:
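A sketch, again with the atlas bound to a <tt>sampler2D</tt> uniform named <tt>texture</tt>:

```glsl
varying vec4 texcoord;
uniform sampler2D texture;

void main(void) {
  // fract() maps any x back into the 0..1 range of a single subimage.
  gl_FragColor = texture2D(texture, vec2((fract(texcoord.x) + texcoord.w) / 16.0, texcoord.y));
}
```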

We don't have to use <tt>fract</tt> on <tt>texcoord.y</tt>, since our texture atlas has only a single row of images, and OpenGL will take care of texture wrapping in the vertical direction. This shader works well for any voxel faces that point in the positive or negative z direction. For these faces, all the vertices of a face have the same ''integer'' value for the z coordinate, so we can safely add it to the x coordinate before applying the <tt>fract</tt> function. For faces pointing in the positive or negative x direction the situation is reversed: x is a constant integer while z varies, so the same sum gives us exactly the varying coordinate we need:
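Only the texture lookup changes (same assumptions as before about the <tt>texture</tt> uniform):

```glsl
varying vec4 texcoord;
uniform sampler2D texture;

void main(void) {
  // fract(x + z): for z-facing faces z is an integer, so this equals fract(x);
  // for x-facing faces x is an integer, so this equals fract(z).
  gl_FragColor = texture2D(texture, vec2((fract(texcoord.x + texcoord.z) + texcoord.w) / 16.0, texcoord.y));
}
```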

The same argument holds for voxel faces pointing in the positive or negative x direction, so you can use the same shader for those faces! However, we cannot put the y coordinate in there as well. The only way to also correctly render the faces pointing in the positive or negative y direction is either to have two fragment shaders and render the x and z facing faces separately from the y facing faces, or to add some extra information to the vertices so that the fragment shader can distinguish between the two cases. Since we only use 16 subimages in our texture, we only need 4 bits of the w coordinate. We can in fact use another bit as a very rudimentary "normal" vector: we will use negative w coordinates to indicate faces pointing in the y direction:
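A sketch of a fragment shader that handles both cases by testing the sign of <tt>texcoord.w</tt> (the <tt>texture</tt> uniform name is an assumption):

```glsl
varying vec4 texcoord;
uniform sampler2D texture;

void main(void) {
  vec2 coord;

  if(texcoord.w < 0.0) {
    // Face points in the y direction: x and z vary across the face.
    coord = vec2((fract(texcoord.x) + texcoord.w) / 16.0, texcoord.z);
  } else {
    // Face points in the x or z direction.
    coord = vec2((fract(texcoord.x + texcoord.z) + texcoord.w) / 16.0, texcoord.y);
  }

  gl_FragColor = texture2D(texture, coord);
}
```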

We don't have to worry about negative values for the w coordinate, as long as, when creating the VBO, we subtract a multiple of 16 from the value of <tt>blk[x][y][z]</tt> to make it negative. The texture coordinate will then lie outside the range 0..1, but OpenGL's texture wrapping will map it back to the right position in the texture atlas.
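Concretely, the code in <tt>chunk::update</tt> that fills the VBO could compute the stored w value with a small helper like the following sketch (the helper name <tt>face_w</tt> is made up for illustration):

```cpp
#include <cstdint>

// Hypothetical helper: compute the w coordinate to store in the VBO for a
// face of block type `type` (0..15). Faces pointing in the positive or
// negative y direction get 16 subtracted, making w negative; OpenGL's
// texture wrapping maps the resulting coordinate back onto the same
// column of the atlas.
int8_t face_w(uint8_t type, bool y_facing) {
  return y_facing ? int8_t(type) - 16 : int8_t(type);
}
```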

Exercises:
 * In the previous tutorial, we have seen that we can merge adjacent faces to reduce the number of triangles that need to be drawn. Will the shader above still work in that case?

= Lighting =

Even with texturing, it can be hard to distinguish voxels. To give the scene a more natural look, and make it easier to distinguish which side of a voxel we are looking at, we will use the "normal" bit introduced in the previous section to make the sides of a voxel slightly darker. This simulates shading in the real world at noon, when the Sun is directly above.
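One way to sketch this in the fragment shader, where <tt>coord</tt> is the atlas coordinate computed for texturing and the 0.85 factor is an arbitrary choice:

```glsl
  vec4 color = texture2D(texture, coord);

  // Negative w marks faces pointing in the y direction (top and bottom):
  // leave those at full brightness, and darken the side faces a little.
  float intensity = (texcoord.w < 0.0) ? 1.0 : 0.85;

  gl_FragColor = vec4(color.rgb * intensity, color.a);
```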

= Fog =

In the real world, far-away objects look fainter and less colourful than objects right in front of you. This is due to the scattering of light by the atmosphere. It is almost the same effect as fog; the major difference is the strength. We can implement this in our fragment shader as follows:
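The following is a sketch; the exact <tt>fogcolor</tt> and <tt>fogdensity</tt> values are just example choices:

```glsl
// At the top of the fragment shader:
const vec4 fogcolor = vec4(0.6, 0.8, 1.0, 1.0); // color of very distant objects
const float fogdensity = 0.00003;               // strength of the fog effect

// Inside main(), after looking up the texel in `color`:
  float z = gl_FragCoord.z / gl_FragCoord.w;  // distance from the camera
  float fog = exp(-fogdensity * z * z);       // fraction of the real color left

  gl_FragColor = mix(fogcolor, color, fog);
```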

At the top of the fragment shader, we define two constants. The fog color is the color an object would have if it were very far away. The fog density controls the strength of the fog effect: a very small value represents atmospheric scattering or a light haze, a large value represents dense fog.

The distance of the fragment to the camera can be calculated by dividing <tt>gl_FragCoord.z</tt> by <tt>gl_FragCoord.w</tt>. The effect of fog grows exponentially with distance. The variable <tt>fog</tt> represents the fraction of the real color that is "left" after the fog has been applied. The final fragment color is a mix between the original color and the fog color.

= Transparency =

Finally, you can make textures that have transparent pixels. Although you can use blending to apply transparent textures, it is also possible to simulate completely transparent pixels in the fragment shader. Right after calculating the <tt>color</tt>, you can discard the fragment based on the alpha value:
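A sketch; the 0.5 threshold is an arbitrary cutoff:

```glsl
  vec4 color = texture2D(texture, coord);

  // Treat mostly-transparent texels as fully transparent: skip the
  // fragment entirely, leaving color and depth buffers untouched.
  if(color.a < 0.5)
    discard;
```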

The <tt>discard</tt> keyword causes the fragment program to stop further processing (it is like a "return" statement in C). The advantage of this over blending is that the value of the z buffer will not be updated for transparent pixels. With blending, if you render a transparent triangle close to the camera, and then an opaque one behind it, the opaque one will not be drawn because it would fail the depth test.

Exercises:
 * Add transparency to a few of the subimages and look at the results.
 * Change the <tt>chunk::update</tt> function from the previous tutorial to properly handle partially transparent faces.
 * Try using blending instead of the discard keyword.