GLSL Programming/Unity/Projection of Bumpy Surfaces

This tutorial covers (single-step) parallax mapping.

It extends and is based on Section “Lighting of Bumpy Surfaces”.

Improving Normal Mapping
The normal mapping technique presented in Section “Lighting of Bumpy Surfaces” only changes the lighting of a flat surface to create the illusion of bumps and dents. If one looks straight onto a surface (i.e. in the direction of the surface normal vector), this works very well. However, if one looks onto a surface from some other angle, the bumps should also stick out of the surface while the dents should recede into it. Of course, this could be achieved by geometrically modeling bumps and dents; however, this would require processing many more vertices. On the other hand, single-step parallax mapping is a very efficient technique similar to normal mapping, which doesn't require additional triangles but can still move virtual bumps by several pixels to make them stick out of a flat surface. However, the technique is limited to bumps and dents of small heights and requires some fine-tuning for best results.



Parallax Mapping Explained
Parallax mapping was proposed in 2001 by Tomomichi Kaneko et al. in their paper “Detailed shape representation with parallax mapping” (ICAT 2001). The basic idea is to offset the texture coordinates that are used for the texturing of the surface (in particular normal mapping). If this offset of texture coordinates is computed appropriately, it is possible to move parts of the texture (e.g. bumps) as if they were sticking out of the surface.

The illustration shows the view vector V pointing to the viewer and the surface normal vector N at the point of a surface that is rasterized in a fragment shader. Parallax mapping proceeds in three steps:
 * Lookup of the height $$h$$ at the rasterized point in a height map, which is depicted by the wavy line on top of the straight line at the bottom in the illustration.
 * Computation of the intersection of the viewing ray in direction of V with a surface at height $$h$$ parallel to the rendered surface. The distance $$o$$ is the distance between the rasterized surface point moved by $$h$$ in the direction of N and this intersection point. If these two points are projected onto the rendered surface, $$o$$ is also the distance between the rasterized point and a new point on the surface (marked by a cross in the illustration). This new surface point is a better approximation to the point that is actually visible for the view ray in direction V if the surface was displaced by the height map.
 * Transformation of the offset $$o$$ into texture coordinate space in order to compute an offset of texture coordinates for all following texture lookups.

For the computation of $$o$$ we require the height $$h$$ of a height map at the rasterized point, which is implemented in the example by a texture lookup in the A component of the texture property _ParallaxMap, which should be a gray-scale image representing heights as discussed in Section “Lighting of Bumpy Surfaces”. We also require the view direction V in the local surface coordinate system formed by the normal vector ($$z$$ axis), the tangent vector ($$x$$ axis), and the binormal vector ($$y$$ axis), which was also introduced in that section. To this end we compute a transformation from local surface coordinates to object space with:

$$\mathrm{M}_{\text{surface}\to \text{object}} = \left[ \begin{matrix} T_x & B_x & N_x \\ T_y & B_y & N_y \\ T_z & B_z & N_z \end{matrix} \right]$$

where T, B and N are given in object coordinates. (In Section “Lighting of Bumpy Surfaces” we had a similar matrix but with vectors in world coordinates.)

We compute the view direction V in object space (as the difference between the rasterized position and the camera position transformed from world space to object space) and then we transform it to the local surface space with the matrix $$\mathrm{M}_{\text{object}\to\text{surface}}$$ which can be computed as:

$$\mathrm{M}_{\text{object}\to \text{surface}} = \mathrm{M}_{\text{surface}\to \text{object}}^{-1} = \mathrm{M}_{\text{surface}\to \text{object}}^{T}\,$$

This is possible because T, B and N are orthogonal and normalized. (Actually, the situation is a bit more complicated because we won't normalize these vectors but use their length for another transformation; see below.) Thus, in order to transform V from object space to the local surface space, we have to multiply it with the transposed matrix $$(\mathrm{M}_{\text{surface}\to\text{object}})^{T}$$. In GLSL, this is achieved by multiplying the vector from the left to the matrix $$\mathrm{M}_{\text{surface}\to\text{object}}$$.
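In GLSL, this could be sketched as follows (a minimal sketch, assuming t, b, and n hold the tangent, binormal, and normal vectors in object coordinates):

```glsl
// t, b, n: tangent, binormal, and normal in object coordinates
mat3 localSurface2Object = mat3(t, b, n);
   // the mat3 constructor takes columns, so this builds
   // exactly the matrix M_surface->object from above
vec3 vInSurfaceCoords = vInObjectCoords * localSurface2Object;
   // multiplying the vector from the left is equivalent to
   // multiplying with the transposed (here: inverse) matrix
```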

Once we have V in the local surface coordinate system with the $$z$$ axis in the direction of the normal vector N, we can compute the offsets $$o_x$$ (in $$x$$ direction) and $$o_y$$ (in $$y$$ direction) by using similar triangles (compare with the illustration):

$$\frac{o_x}{h} = \frac{V_x}{V_z}$$ and $$\frac{o_y}{h} = \frac{V_y}{V_z}$$.

Thus:

$$o_x = h \frac{V_x}{V_z}$$ and $$o_y = h \frac{V_y}{V_z}$$.

Note that it is not necessary to normalize V because we use only ratios of its components, which are not affected by the normalization.
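In GLSL, this computation amounts to a single line; a minimal sketch, assuming v is the view direction in local surface coordinates and height is the value $$h$$ from the height map:

```glsl
// (o_x, o_y) by similar triangles; v need not be normalized
vec2 offset = height * v.xy / v.z;
```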

Finally, we have to transform $$o_x$$ and $$o_y$$ into texture space. This would be quite difficult if Unity didn't help us: the tangent attribute Tangent is actually appropriately scaled and has a fourth component Tangent.w for scaling the binormal vector, such that the transformation of the view direction V scales $$V_x$$ and $$V_y$$ appropriately to have $$o_x$$ and $$o_y$$ in texture coordinate space without further computations.
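In the shader code, this scaled binormal vector can be computed along these lines (a sketch following the normal-mapping code of this series):

```glsl
attribute vec4 Tangent; // Unity provides the tangent as a vec4 in GLSL
// the fourth component scales the binormal such that offsets
// end up in texture coordinate space
vec3 binormal = cross(gl_Normal, vec3(Tangent)) * Tangent.w;
```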

Implementation
The implementation shares most of the code with Section “Lighting of Bumpy Surfaces”. In particular, the same scaling of the binormal vector with the fourth component of the Tangent attribute is used in order to take the mapping of the offsets from local surface space to texture space into account.

In the vertex shader, we have to add a varying for the view vector V in the local surface coordinate system (with the scaling of axes to take the mapping to texture space into account). This varying is called viewDirInScaledSurfaceCoords. It is computed by multiplying the view vector in object coordinates from the left to the matrix $$\mathrm{M}_{\text{surface}\to\text{object}}$$ as explained above; a sketch follows below. The rest of the vertex shader is the same as for normal mapping.
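A minimal sketch of this computation (the variable names are assumptions in the style of the tutorial series; modelMatrixInverse is assumed to hold the world-to-object transformation):

```glsl
attribute vec4 Tangent; // Unity provides the tangent as a vec4
uniform vec3 _WorldSpaceCameraPos; // camera position in world space
varying vec3 viewDirInScaledSurfaceCoords;

// inside the vertex shader's main():
vec3 binormal = cross(gl_Normal, vec3(Tangent)) * Tangent.w; // scaled binormal
mat3 localSurface2ScaledObject = mat3(vec3(Tangent), binormal, gl_Normal);
   // columns are the (deliberately unnormalized) T, B, N
vec3 viewDirInObjectCoords = vec3(
   modelMatrixInverse * vec4(_WorldSpaceCameraPos, 1.0) - gl_Vertex);
viewDirInScaledSurfaceCoords = viewDirInObjectCoords * localSurface2ScaledObject;
   // multiplication from the left, i.e. with the transposed matrix
```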

In the fragment shader, we first query the height map for the height of the rasterized point. This height is specified by the A component of the texture _ParallaxMap. The values between 0 and 1 are transformed to the range -_Parallax/2 to +_Parallax/2 with a shader property _Parallax in order to offer some user control over the strength of the effect (and to be compatible with the fallback shader).

The offsets $$o_x$$ and $$o_y$$ are then computed as described above. However, we also clamp each offset to a user-specified interval between -_MaxTexCoordOffset and +_MaxTexCoordOffset in order to make sure that the offset stays within reasonable bounds. (If the height map consists of more or less flat plateaus of constant height with smooth transitions between these plateaus, _MaxTexCoordOffset should be smaller than the thickness of these transition regions; otherwise the sample point might be on a different plateau with a different height, which would mean that the approximation of the intersection point is arbitrarily bad.)

In the remaining fragment shader code, we have to apply the offsets to the texture coordinates in all texture lookups; i.e., we have to replace textureCoordinates.xy (or equivalently textureCoordinates.st) by (textureCoordinates.xy + texCoordOffsets). The rest of the fragment shader code is just as it was for normal mapping; the parallax-specific part is sketched below.
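A minimal sketch of that part (the property and varying names are assumptions consistent with the rest of this rewrite; _ParallaxMap_ST and _BumpMap_ST are the tiling/offset vectors that Unity provides for texture properties):

```glsl
// height of the rasterized point, mapped from [0,1]
// to the range -_Parallax/2 .. +_Parallax/2
float height = _Parallax * (-0.5 + texture2D(_ParallaxMap,
   _ParallaxMap_ST.xy * textureCoordinates.xy + _ParallaxMap_ST.zw).a);

// offsets o_x and o_y, clamped to the user-specified bounds
vec2 texCoordOffsets = clamp(
   height * viewDirInScaledSurfaceCoords.xy / viewDirInScaledSurfaceCoords.z,
   -_MaxTexCoordOffset, _MaxTexCoordOffset);

// all following texture lookups then use the offset coordinates, e.g.:
vec4 encodedNormal = texture2D(_BumpMap,
   _BumpMap_ST.xy * (textureCoordinates.xy + texCoordOffsets) + _BumpMap_ST.zw);
```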

Complete Shader Code
As discussed in the previous section, most of this code is taken from Section “Lighting of Bumpy Surfaces”. Note that if you want to use the code on a mobile device with OpenGL ES, make sure to change the decoding of the normal map as described in that tutorial.

The part about parallax mapping is actually only a few lines. Most of the names of the shader properties were chosen according to the fallback shader; the user interface labels are much more descriptive.
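For orientation, the parallax-related properties might be declared along these lines in the shader's Properties block (a sketch; the names follow Unity's built-in parallax shaders, and the default values are assumptions):

```glsl
Properties {
   _BumpMap ("Normal Map", 2D) = "bump" {}
   _ParallaxMap ("Heightmap (in A)", 2D) = "black" {}
   _Parallax ("Max Height", Float) = 0.01
   _MaxTexCoordOffset ("Max Texture Coordinate Offset", Float) = 0.01
}
```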

Summary
Congratulations! If you actually understand the whole shader, you have come a long way. In fact, the shader includes lots of concepts (transformations between coordinate systems, application of the inverse of an orthogonal matrix by multiplying a vector from the left to it, the Phong reflection model, normal mapping, parallax mapping, ...). More specifically, we have seen:
 * How parallax mapping improves upon normal mapping.
 * How parallax mapping is described mathematically.
 * How parallax mapping is implemented.