GLSL Programming/Blender/Lighting of Bumpy Surfaces

This tutorial covers normal mapping.

It's the first of two tutorials about texturing techniques that go beyond two-dimensional surfaces (or layers of surfaces). In this tutorial, we start with normal mapping, a well-established technique to fake the lighting of small bumps and dents, even on coarse polygon meshes. The code of this tutorial is based on the tutorial on smooth specular highlights and the tutorial on textured spheres.

Perceiving Shapes Based on Lighting
The painting by Caravaggio that is depicted to the left is about the incredulity of Saint Thomas, who did not believe in Christ's resurrection until he put his finger in Christ's side. The furrowed brows of the apostles not only symbolize this incredulity but clearly convey it by means of a common facial expression. However, why do we know that their foreheads are actually furrowed instead of being painted with some light and dark lines? After all, this is just a flat painting. In fact, viewers intuitively make the assumption that these are furrowed instead of painted brows — even though the painting itself allows for both interpretations. The lesson is: bumps on smooth surfaces can often be convincingly conveyed by the lighting alone without any other cues (shadows, occlusions, parallax effects, stereo, etc.).

Normal Mapping
Normal mapping tries to convey bumps on smooth surfaces (i.e. coarse triangle meshes with interpolated normals) by changing the surface normal vectors according to some virtual bumps. When the lighting is computed with these modified normal vectors, viewers will often perceive the virtual bumps — even though a perfectly flat triangle has been rendered. The illusion can certainly break down (in particular at silhouettes) but in many cases it is very convincing.

More specifically, the normal vectors that represent the virtual bumps are first encoded in a texture image (i.e. a normal map). A fragment shader then looks these vectors up in the texture image and computes the lighting based on them. That's about it. The problem, of course, is the encoding of the normal vectors in a texture image. There are different possibilities, and the fragment shader has to be adapted to the specific encoding that was used to generate the normal map.



Normal Mapping in Blender
Normal maps are supported by Blender; see the description in the Blender 3D: Noob to Pro wikibook. Here, however, we will use the normal map to the left and write a GLSL shader to use it.

For this tutorial, you should use a cube mesh instead of the UV sphere that was used in the tutorial on textured spheres. Apart from that you can follow the same steps to assign a material and the texture image to the object. Note that you should specify a default UV Map in the Properties window > Object Data tab. Furthermore, you should specify Coordinates > UV in the Properties window > Textures tab > Mapping.

When decoding the normal information, it would be best to know how the data was encoded. However, there are not so many choices; thus, even if you don't know how the normal map was encoded, a bit of experimentation can often lead to sufficiently good results. First of all, the RGB components are numbers between 0 and 1; however, they usually represent coordinates between -1 and 1 in a local surface coordinate system (since the vector is normalized, none of the coordinates can be greater than +1 or less than -1). Thus, the mapping from RGB components to coordinates of the normal vector n $$= (n_x,n_y,n_z)$$ could be:

$$n_x = 2 R - 1$$, $$n_y = 2 G - 1$$, and $$n_z = 2 B - 1$$

However, the $$n_z$$ coordinate is usually positive (because surface normals are not allowed to point inwards). This can be exploited by using a different mapping for $$n_z$$:

$$n_x = 2 R - 1$$, $$n_y = 2 G - 1$$, and $$n_z = B$$

If in doubt, the latter decoding should be chosen because it will never generate surface normals that point inwards. Furthermore, it is often necessary to normalize the resulting vector.
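As a quick sanity check of the latter decoding, the following standalone Python sketch (not part of any shader; the function name is ours) shows that the typical bluish normal-map color decodes to the unperturbed normal:

```python
import math

def decode_normal(r, g, b):
    """Decode an RGB texel (components in [0, 1]) into a unit normal
    vector using the mapping n_x = 2R - 1, n_y = 2G - 1, n_z = B."""
    nx, ny, nz = 2.0 * r - 1.0, 2.0 * g - 1.0, b
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

# the typical bluish normal-map color (0.5, 0.5, 1.0) decodes to
# the unperturbed normal (0, 0, 1)
print(decode_normal(0.5, 0.5, 1.0))  # → (0.0, 0.0, 1.0)
```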

An implementation in a fragment shader that computes the normalized vector n $$= (n_x,n_y,n_z)$$ could look like this:
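A sketch of such a fragment shader snippet follows; the sampler name `normalMap`, the varying `texCoords`, and the variable name `localCoords` are assumptions of this sketch, not names fixed by Blender:

```glsl
// fragment shader snippet (sketch; the names "normalMap",
// "texCoords" and "localCoords" are assumptions)
uniform sampler2D normalMap;
varying vec4 texCoords;

// inside main():
vec4 encodedNormal = texture2D(normalMap, texCoords.xy);
vec3 localCoords = normalize(vec3(
   2.0 * encodedNormal.r - 1.0,
   2.0 * encodedNormal.g - 1.0,
   encodedNormal.b)); // decoding: n_x = 2R-1, n_y = 2G-1, n_z = B
```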

Usually, a local surface coordinate system for each point of the surface is used to specify normal vectors in the normal map. The $$z$$ axis of this local coordinate system is given by the smooth, interpolated normal vector N, and the $$x$$-$$y$$ plane is a tangent plane to the surface, as illustrated in the image to the left. Specifically, the $$x$$ axis is specified by the tangent attribute T that Blender provides to vertices (see the discussion of attributes in the tutorial about debugging of shaders). Given the $$x$$ and $$z$$ axes, the $$y$$ axis can be computed by a cross product in the vertex shader, e.g. B = T × N. (The letter B refers to the traditional name “binormal” for this vector.)

Note that the normal vector N is transformed with the transpose of the inverse model-view matrix from object space to view space (because it is orthogonal to a surface; see “Applying Matrix Transformations”) while the tangent vector T specifies a direction between points on a surface and is therefore transformed with the model-view matrix. The binormal vector B represents a third class of vectors which are transformed differently. (If you really want to know: the skew-symmetric matrix B corresponding to “B×” is transformed like a quadratic form.) Thus, the best choice is to first transform N and T to view space, and then to compute B in view space using the cross product of the transformed vectors.

Also note that the configuration of these axes depends on the tangent data that is provided, the encoding of the normal map, and the texture coordinates. However, the axes are practically always orthogonal and a bluish tint of the normal map indicates that the blue component is in the direction of the interpolated normal vector.

With the normalized directions T, B, and N in view space, we can easily form a matrix that maps any normal vector n of the normal map from the local surface coordinate system to view space because the columns of such a matrix are just the vectors of the axes; thus, the 3×3 matrix for the mapping of n to view space is:

$$\mathrm{M}_{\text{surface}\to \text{view}} = \left[ \begin{matrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{matrix} \right]$$

These calculations are performed by the vertex shader, for example this way:
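A sketch of such a vertex shader follows; the attribute name `Tangent` and the varying names are assumptions of this sketch (the tangent attribute has to be requested from Blender in the Python script):

```glsl
// vertex shader (sketch; the attribute name "Tangent" and the
// varying names are assumptions)
attribute vec4 Tangent; // tangent attribute provided by Blender

varying vec4 position;           // position in view space
varying vec4 texCoords;          // texture coordinates
varying mat3 localSurface2View;  // maps surface coords to view space

void main()
{
   position = gl_ModelViewMatrix * gl_Vertex;
   texCoords = gl_MultiTexCoord0;

   // N is transformed with the transpose of the inverse model-view
   // matrix (gl_NormalMatrix), T with the model-view matrix
   vec3 viewNormal = normalize(gl_NormalMatrix * gl_Normal);
   vec3 viewTangent = normalize(
      (gl_ModelViewMatrix * vec4(vec3(Tangent), 0.0)).xyz);
   // B = T × N, computed in view space
   vec3 viewBinormal = cross(viewTangent, viewNormal);

   // the columns of the matrix are the axes T, B, N
   localSurface2View[0] = viewTangent;
   localSurface2View[1] = viewBinormal;
   localSurface2View[2] = viewNormal;

   gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```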

In the fragment shader, we multiply this matrix with the decoded normal vector n, for example with this line:
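Assuming the matrix arrives in a varying `localSurface2View` and the decoded normal is stored in `localCoords` (hypothetical names of this sketch), the line could be:

```glsl
// map the decoded normal from the local surface coordinate system
// to view space and renormalize (names are assumptions)
vec3 normalDirection = normalize(localSurface2View * localCoords);
```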

With the new normal vector in view space, we can compute the lighting as in the tutorial on smooth specular highlights.

Complete Shader Code
The complete fragment shader simply integrates all the snippets and the per-pixel lighting from the tutorial on smooth specular highlights. Also, we have to request tangent attributes and set the texture sampler (make sure that the normal map is in the first position of the list of textures, or adapt the texture index that is passed to the sampler call accordingly). These steps are performed in the Python script.
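A sketch of such a script, assuming Blender's game-engine Python API (`bge`); the sampler name `'normalMap'` is an assumption of this sketch, and the shader source strings are abbreviated placeholders:

```python
# sketch of the controlling Python script, assuming the Blender Game
# Engine API (bge); shader sources are abbreviated placeholders
import bge

VertexShader = """
   // vertex shader code as discussed above
"""

FragmentShader = """
   // fragment shader code with per-pixel lighting
"""

cont = bge.logic.getCurrentController()
obj = cont.owner
mesh = obj.meshes[0]
for mat in mesh.materials:
    shader = mat.getShader()
    if shader is not None:
        if not shader.isValid():
            shader.setSource(VertexShader, FragmentShader, True)
        # request tangent attributes from Blender
        shader.setAttrib(bge.logic.SHD_TANGENT)
        # bind texture 0 to the sampler "normalMap"; adapt the second
        # argument if the normal map is not the first texture
        shader.setSampler('normalMap', 0)
```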

Summary
Congratulations! You finished this tutorial! We have looked at:
 * How human perception of shapes often relies on lighting.
 * What normal mapping is.
 * How to decode common normal maps.
 * How a fragment shader can decode a normal map and use it for per-pixel lighting.