Cg Programming/Unity/Silhouette Enhancement

This tutorial covers the transformation of surface normal vectors. It assumes that you are familiar with alpha blending and with shader properties, which are discussed in previous tutorials.

The objective of this tutorial is to achieve an effect that can be observed in photos of semitransparent objects: their silhouettes tend to be more opaque than the rest of the object. This adds to the impression of a three-dimensional shape even without lighting. It turns out that transformed normals are crucial to obtaining this effect.



Silhouettes of Smooth Surfaces
In the case of smooth surfaces, points on the surface at silhouettes are characterized by normal vectors that are parallel to the viewing plane and therefore orthogonal to the direction to the viewer. Away from the silhouette, normal vectors point more in the direction of the viewer (or camera). By computing the direction to the viewer and the normal vector and testing whether they are (almost) orthogonal to each other, we can therefore test whether a point is (almost) on the silhouette.

More specifically, if V is the normalized (i.e. of length 1) direction to the viewer and N is the normalized surface normal vector, then the two vectors are orthogonal if the dot product is 0: V·N = 0. In practice, this will rarely be the case. However, if the dot product V·N is close to 0, we can assume that the point is close to a silhouette.
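In Cg, this test might be written as the following sketch (the function name and the threshold parameter are illustrative, not part of the original tutorial):

```
// A minimal sketch of the silhouette test, assuming normalized
// world-space directions viewDir (to the viewer) and normal.
bool isNearSilhouette(float3 viewDir, float3 normal, float threshold)
{
   // the dot product of two normalized vectors is (close to) 0
   // if and only if they are (almost) orthogonal
   return abs(dot(viewDir, normal)) < threshold;
}
```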

Increasing the Opacity at Silhouettes
For our effect, we should therefore increase the opacity $$\alpha$$ if the dot product V·N is close to 0. There are various ways to increase the opacity for small dot products between the direction to the viewer and the normal vector. Here is one of them (which actually has a physical model behind it, described in Section 5.1 of the publication from which it is taken) to compute the increased opacity $$\alpha'$$ from the regular opacity $$\alpha$$ of the material:

$$\alpha'=\min\left(1, \frac{\alpha}{\left\vert\mathbf{V}\cdot\mathbf{N}\right\vert}\right)$$

It always makes sense to check the extreme cases of an equation like this. Consider a point close to the silhouette: V·N ≈ 0. In this case, the regular opacity $$\alpha$$ will be divided by a small, positive number. (Note that GPUs usually handle division by zero gracefully; thus, we don't have to worry about that case.) Therefore, whatever $$\alpha$$ is, dividing it by a small positive number yields a larger value. The $$\min$$ function ensures that the resulting opacity $$\alpha'$$ is never larger than 1.

On the other hand, for points far away from the silhouette we have V·N ≈ 1. In this case, α' ≈ min(1, α) ≈ α; i.e., the opacity of those points will not change much. This is exactly what we want. Thus, we have just checked that the equation is at least plausible.
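For a concrete check, suppose $$\alpha = 0.4$$: at V·N = 0.2 the enhanced opacity is $$\alpha' = \min(1, 0.4 / 0.2) = 1$$, i.e. fully opaque, while at V·N = 0.8 it is $$\alpha' = \min(1, 0.4 / 0.8) = 0.5$$, only slightly more opaque than the base value.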

Implementing an Equation in a Shader
In order to implement an equation like the one for $$\alpha'$$ in a shader, the first question should be: should it be implemented in the vertex shader or in the fragment shader? In some cases, the answer is clear because the implementation requires texture mapping, which is often only available in the fragment shader. In many cases, however, there is no general answer. Implementations in vertex shaders tend to be faster (because there are usually fewer vertices than fragments) but of lower image quality (because normal vectors and other vertex attributes can change abruptly between vertices). Thus, if you are most concerned about performance, an implementation in a vertex shader is probably the better choice. If you are most concerned about image quality, an implementation in a fragment shader might be the better choice. The same trade-off exists between per-vertex lighting (i.e. Gouraud shading) and per-fragment lighting (i.e. Phong shading), which are discussed in other tutorials.

The next question is: in which coordinate system should the equation be implemented? (The standard coordinate systems are described elsewhere in this book.) Again, there is no general answer. However, an implementation in world coordinates is often a good choice in Unity because many uniform variables are specified in world coordinates. (In other environments, implementations in view coordinates are very common.)

The final question before implementing an equation is: where do we get its parameters from? The regular opacity $$\alpha$$ is specified (within an RGBA color) by a shader property. The normal vector is a standard vertex input parameter. The direction to the viewer can be computed in the vertex shader as the vector from the vertex position in world space to the camera position in world space, which Unity provides in the uniform `_WorldSpaceCameraPos`.
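As a sketch in Cg (assuming a vertex input parameter `input.vertex` holding the object-space position):

```
// Direction to the viewer in world space: from the vertex position
// (transformed to world space) to the camera position, which Unity
// provides in _WorldSpaceCameraPos.
float3 viewDir = normalize(_WorldSpaceCameraPos
   - mul(unity_ObjectToWorld, input.vertex).xyz);
```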

Thus, we only have to transform the vertex position and the normal vector into world space before implementing the equation. The transformation matrix from object space to world space and its inverse are provided by Unity as `unity_ObjectToWorld` and `unity_WorldToObject`, respectively. Points and directions are transformed just by multiplying them with the transformation matrix, e.g. `mul(unity_ObjectToWorld, input.vertex)`. Normal vectors, on the other hand, are transformed by multiplying them with the transposed inverse transformation matrix. Since Unity provides us with the inverse matrix `unity_WorldToObject`, a better alternative is to multiply the normal vector from the left with the inverse matrix, which is equivalent to multiplying it from the right with the transposed inverse matrix.
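In Cg, this might look like the following sketch (with `input.normal` as the object-space normal):

```
// Transform the normal by multiplying it from the left with the
// inverse model matrix; mul() then treats it as a row vector, which
// is equivalent to multiplying a column vector from the right with
// the transposed inverse matrix. The 0.0 in the fourth component
// makes sure the translation part has no effect on the direction.
float3 worldNormal = mul(float4(input.normal, 0.0),
   unity_WorldToObject).xyz;
```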

Now we have all the pieces that we need to write the shader.

Shader Code
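A complete shader implementing the ideas above might look like this sketch (structure and identifier names such as `vertexInput`, `vertexOutput`, and `newOpacity` are one possible choice):

```
Shader "Cg silhouette enhancement" {
   Properties {
      _Color ("Color", Color) = (1, 1, 1, 0.5)
         // user-specified RGBA color including opacity
   }
   SubShader {
      Tags { "Queue" = "Transparent" }
         // draw after all opaque geometry has been drawn
      Pass {
         ZWrite Off // don't occlude other objects
         Blend SrcAlpha OneMinusSrcAlpha // standard alpha blending

         CGPROGRAM

         #pragma vertex vert
         #pragma fragment frag

         #include "UnityCG.cginc"

         uniform float4 _Color; // shader property specified by the user

         struct vertexInput {
            float4 vertex : POSITION;
            float3 normal : NORMAL;
         };
         struct vertexOutput {
            float4 pos : SV_POSITION;
            float3 normal : TEXCOORD0;
            float3 viewDir : TEXCOORD1;
         };

         vertexOutput vert(vertexInput input)
         {
            vertexOutput output;

            // transform the normal with the transposed inverse model
            // matrix (by multiplying it from the left with the
            // inverse matrix unity_WorldToObject)
            output.normal = normalize(
               mul(float4(input.normal, 0.0), unity_WorldToObject).xyz);

            // direction from the vertex position in world space
            // to the camera position in world space
            output.viewDir = normalize(_WorldSpaceCameraPos
               - mul(unity_ObjectToWorld, input.vertex).xyz);

            output.pos = UnityObjectToClipPos(input.vertex);
            return output;
         }

         float4 frag(vertexOutput input) : COLOR
         {
            // renormalize after interpolation
            float3 normalDirection = normalize(input.normal);
            float3 viewDirection = normalize(input.viewDir);

            // increased opacity at silhouettes
            float newOpacity = min(1.0,
               _Color.a / abs(dot(viewDirection, normalDirection)));
            return float4(_Color.rgb, newOpacity);
         }

         ENDCG
      }
   }
}
```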
The assignment to `newOpacity` is an almost literal translation of the equation

$$\alpha'=\min\left(1, \frac{\alpha}{\left\vert\mathbf{V}\cdot\mathbf{N}\right\vert}\right)$$

Note that we normalize the vertex output parameters `output.normal` and `output.viewDir` in the vertex shader (because we want to interpolate between directions without putting more or less weight on any of them) and at the beginning of the fragment shader (because the interpolation can distort the normalization to a certain degree). However, in many cases the normalization of `output.normal` in the vertex shader is not necessary. Similarly, the normalization of `input.viewDir` in the fragment shader is unnecessary in most cases.

More Artistic Control
While the described silhouette enhancement is based on a physical model, it lacks artistic control; i.e., a CG artist cannot easily create a thinner or thicker silhouette than the physical model suggests. To allow for more artistic control, you could introduce another (positive) floating-point number property and take the dot product |V·N| to the power of this number (using the built-in Cg function `pow`) before using it in the equation above. This will allow CG artists to create thinner or thicker silhouettes independently of the opacity of the base color.
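As a sketch, such a modification could look like this (the property name `_SilhouetteStrength` is purely illustrative):

```
// In the Properties block:
//    _SilhouetteStrength ("Silhouette strength", Float) = 1.0
uniform float _SilhouetteStrength;
   // positive; values > 1 thicken the silhouette, values < 1 thin it

// In the fragment shader, replacing the previous assignment:
float newOpacity = min(1.0, _Color.a
   / pow(abs(dot(viewDirection, normalDirection)), _SilhouetteStrength));
```

Raising |V·N| (which lies between 0 and 1) to an exponent greater than 1 makes it smaller, so the enhanced opacity reaches 1 over a wider band around the silhouette; an exponent smaller than 1 has the opposite effect.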

Summary
Congratulations, you have finished this tutorial. We have discussed:
 * How to find silhouettes of smooth surfaces (using the dot product of the normal vector and the view direction).
 * How to enhance the opacity at those silhouettes.
 * How to implement equations in shaders.
 * How to transform points and normal vectors from object space to world space (using the transposed inverse model matrix for normal vectors).
 * How to compute the viewing direction (as the difference between the camera position and the vertex position in world space).
 * How to interpolate normalized directions (i.e. normalize twice: in the vertex shader and the fragment shader).
 * How to provide more artistic control over the thickness of silhouettes.