OpenGL Programming/Glescraft 7



= Introduction =

We have already seen that we can greatly reduce the number of vertices to draw in a voxel world by drawing only the visible cube faces, and further by merging adjacent faces. We can reduce the amount of data sent to the GPU even more by using a geometry shader. The idea is to send the most compact representation of a voxel face to the geometry shader, and have it produce the vertices for the two triangles that make up a face, plus any other data that we might need. We could represent a whole voxel with a single vertex, but then the geometry shader would not know which sides to render, so it would render all six faces, whether they are obscured or not. Drawing those obscured faces is likely to cost more GPU time than such a geometry shader saves on vertex processing.

A better way is to send two vertices (A and C in the diagram below) to the vertex shader for each face, taken from two diagonally opposite corners of that face. This way we can also represent merged faces. Knowing that the face is rectangular and lies in an x, y or z-plane, we can reconstruct the other two corners (B and D), and from the four corners we can create two triangles in a strip (BAC and ACD). It is also possible to reconstruct the normal of the face this way (using the cross product of the vectors AC and AB), which we can use for lighting calculations.



Before, we had to pass the same texture coordinate in all six vertices of a face. The geometry shader has access to both input vertices at the same time, so we only need to pass the texture coordinate in one of them; the shader can copy it to all of the output vertices. That also frees up the w coordinate of the second input vertex for something else, for example intensity information.

= Enabling geometry shading =

Before using a geometry shader, we should check whether it is actually supported by the GPU. This can be done with GLEW:
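A minimal check might look like the following sketch. It assumes glewInit() has already been called; geometry shaders became core functionality in OpenGL 3.2, and older hardware exposes them through the GL_EXT_geometry_shader4 extension, which GLEW reports via global flags:

```cpp
// After glewInit(), GLEW exposes extension support as global flags.
if (!GLEW_VERSION_3_2 && !GLEW_EXT_geometry_shader4) {
	fprintf(stderr, "Error: geometry shaders are not supported\n");
	return 0;
}
```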

We compile and link the geometry shader the same way as the vertex and fragment shaders, except that we also need to tell OpenGL what kind of input the geometry shader expects and what kind of output it generates. In our case, it expects lines as input and produces triangle strips as output. This is done as follows:
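A sketch of the setup, using the GL_EXT_geometry_shader4 entry points (the `program`, `source` and `shader` variables are assumed to exist as in the earlier parts of this tutorial; on OpenGL 3.2+ you would instead declare the primitive types with layout qualifiers inside the shader itself):

```cpp
GLuint shader = glCreateShader(GL_GEOMETRY_SHADER_EXT);
glShaderSource(shader, 1, &source, NULL);
glCompileShader(shader);
glAttachShader(program, shader);

// With GL_EXT_geometry_shader4, the input and output primitive types
// must be set on the program object before linking.
glProgramParameteriEXT(program, GL_GEOMETRY_INPUT_TYPE_EXT, GL_LINES);
glProgramParameteriEXT(program, GL_GEOMETRY_OUTPUT_TYPE_EXT, GL_TRIANGLE_STRIP);
// Upper bound on emitted vertices per input primitive:
// one strip of 4 vertices (two triangles) per face.
glProgramParameteriEXT(program, GL_GEOMETRY_VERTICES_OUT_EXT, 4);
glLinkProgram(program);
```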

When we draw, we simply draw GL_LINES; the GPU takes care of the rest.
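For example (a sketch, assuming `faces` holds the number of faces stored in the currently bound VBO):

```cpp
// Each face is now just two GL_LINES vertices; the geometry shader
// expands each line into a triangle strip.
glDrawArrays(GL_LINES, 0, faces * 2);
```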

= Creating vertices for the geometry shader =

Before, in our update function, we had to generate six vertices, like so (for the faces that are viewed from the negative x direction):
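The original listing is not reproduced here, but it looked roughly like this sketch. The helper function is this page's own illustration; the byte4 type (x, y, z plus a texture index in w) comes from the earlier parts of this tutorial:

```cpp
#include <vector>

// The byte4 vertex type, as used in the earlier parts of this tutorial:
// x, y, z coordinates plus a texture index in w.
struct byte4 {
	signed char x, y, z, w;
	byte4(signed char x, signed char y, signed char z, signed char w)
		: x(x), y(y), z(z), w(w) {}
};

// Emit the six vertices (two triangles) of a face seen from -x.
void emit_negx_face(std::vector<byte4> &vertex,
					signed char x, signed char y, signed char z,
					signed char side) {
	vertex.push_back(byte4(x, y,     z,     side));
	vertex.push_back(byte4(x, y,     z + 1, side));
	vertex.push_back(byte4(x, y + 1, z,     side));
	vertex.push_back(byte4(x, y + 1, z,     side));
	vertex.push_back(byte4(x, y,     z + 1, side));
	vertex.push_back(byte4(x, y + 1, z + 1, side));
}
```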

We can simply modify that piece of code to generate the two vertices for our geometry shader:
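A sketch of the two-vertex version (again with a hypothetical helper function; the byte4 type is from the earlier parts of this tutorial):

```cpp
#include <vector>

// byte4 as in the earlier parts of this tutorial.
struct byte4 {
	signed char x, y, z, w;
	byte4(signed char x, signed char y, signed char z, signed char w)
		: x(x), y(y), z(z), w(w) {}
};

// Emit only the two diagonally opposite corners A and C of a -x face.
// The texture index rides in A's w coordinate; C's w coordinate now
// carries per-face intensity instead.
void emit_negx_face(std::vector<byte4> &vertex,
					signed char x, signed char y, signed char z,
					signed char side, signed char intensity) {
	vertex.push_back(byte4(x, y,     z,     side));      // corner A
	vertex.push_back(byte4(x, y + 1, z + 1, intensity)); // corner C
}
```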

Notice how we pass intensity information in the second vertex.

= Shaders =

The geometry shader is as follows:
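One possible implementation, written against GLSL 1.20 with the GL_EXT_geometry_shader4 extension. The uniform and varying names (mvp, texcoord, vintensity) are this sketch's own; the winding order is left naive here (see the exercises):

```glsl
#version 120
#extension GL_EXT_geometry_shader4 : enable

uniform mat4 mvp;
varying out vec4 texcoord;     // xyz: object-space position, w: texture index
varying out float vintensity;  // per-face intensity

void main(void) {
	// The two input vertices are diagonally opposite corners of the face.
	vec4 a = gl_PositionIn[0];  // w carries the texture index
	vec4 c = gl_PositionIn[1];  // w carries the intensity

	// Exactly one coordinate is shared between a and c; which one it is
	// tells us the plane of the face and how to reconstruct b and d.
	vec3 b, d;
	if (a.x == c.x) {           // face in an x-plane
		b = vec3(a.x, c.y, a.z);
		d = vec3(a.x, a.y, c.z);
	} else if (a.y == c.y) {    // face in a y-plane
		b = vec3(c.x, a.y, a.z);
		d = vec3(a.x, a.y, c.z);
	} else {                    // face in a z-plane
		b = vec3(c.x, a.y, a.z);
		d = vec3(a.x, c.y, a.z);
	}

	// The face normal, from the cross product of AC and AB; not used
	// further in this sketch, but available for lighting calculations.
	vec3 normal = normalize(cross(c.xyz - a.xyz, b - a.xyz));

	// Emit the strip b, a, c, d: triangles BAC and ACD.
	texcoord = vec4(b, a.w);
	vintensity = c.w;
	gl_Position = mvp * vec4(b, 1.0);
	EmitVertex();

	texcoord = vec4(a.xyz, a.w);
	vintensity = c.w;
	gl_Position = mvp * vec4(a.xyz, 1.0);
	EmitVertex();

	texcoord = vec4(c.xyz, a.w);
	vintensity = c.w;
	gl_Position = mvp * vec4(c.xyz, 1.0);
	EmitVertex();

	texcoord = vec4(d, a.w);
	vintensity = c.w;
	gl_Position = mvp * vec4(d, 1.0);
	EmitVertex();

	EndPrimitive();
}
```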

The vertex shader has nothing to do but pass the input vertices on to the geometry shader:
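A sketch (the attribute name `coord` is an assumption, matching the byte4 vertices uploaded by our update function):

```glsl
#version 120
attribute vec4 coord;

void main(void) {
	// Pass the untransformed corner straight through; the geometry
	// shader reconstructs the face and applies the mvp matrix itself.
	gl_Position = coord;
}
```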

The fragment shader is as follows:
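A sketch, carrying over the texture atlas layout from the earlier parts of this tutorial (16 tiles in a row, selected by texcoord.w) as an assumption; the varying names match the geometry shader sketch above only by this page's own convention, and the intensity is assumed to be normalized to [0, 1] when the vertices are created:

```glsl
#version 120
varying vec4 texcoord;
varying float vintensity;
uniform sampler2D texture;

void main(void) {
	vec2 coord2d;
	// This is the if-statement mentioned in the exercises: top/bottom
	// faces are textured with x and z, side faces with a horizontal
	// coordinate and the height.
	if (texcoord.w < 0.0) {
		coord2d = vec2((fract(texcoord.x) - texcoord.w) / 16.0, texcoord.z);
	} else {
		coord2d = vec2((fract(texcoord.x + texcoord.z) + texcoord.w) / 16.0,
					   texcoord.y);
	}
	gl_FragColor = texture2D(texture, coord2d) * vintensity;
}
```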

= Exercises =


 * Try different ways of assigning intensities to voxels.
 * The fragment shader still contains an if-statement to remap the texture coordinates. Can we move that to the geometry shader?
 * Is the order of the two input vertices important?