Cg Programming/Unity/Screen Overlays

This tutorial covers screen overlays.

It is the first tutorial of a series of tutorials on non-standard vertex transformations, which deviate from the standard vertex transformations described in earlier tutorials of this series. This particular tutorial also builds on the earlier tutorials on texturing and on blending.

Screen Overlays
There are many applications for screen overlays, e.g. titles as in the image, but also other GUI (graphical user interface) elements such as buttons or status information. The common feature of these elements is that they should always appear on top of the scene and never be occluded by any other objects. Neither should these elements be affected by any of the camera movements. Thus, the vertex transformation should go directly from object space to screen space. Unity has various ways to render a texture image at a specified position on the screen. This tutorial tries to achieve this purpose with a simple shader.

Rendering a Texture to the Screen with a Cg Shader
Let's specify the screen position of the texture by an X and a Y coordinate of the lower, left corner of the rendered rectangle in pixels, with $$(0,0)$$ at the center of the screen, and by a Width and a Height of the rendered rectangle in pixels. (Specifying the coordinates relative to the center often allows us to support various screen sizes and aspect ratios without further adjustments.) We use shader properties for these four values and corresponding uniforms in the Cg code.

For the actual object, we could use a mesh that consists of just two triangles to form a rectangle. However, we can also just use the default cube object, since back-face culling (and culling of triangles that are degenerated to edges) allows us to make sure that only two triangles of the cube are rasterized. The corners of the default cube object have coordinates $$-0.5$$ and $$+0.5$$ in object space, i.e., the lower, left corner of the rectangle is at $$(-0.5, -0.5)$$ and the upper, right corner is at $$(+0.5, +0.5)$$.

To transform these coordinates to the user-specified coordinates in screen space, we first transform them to raster positions in pixels, where $$(0,0)$$ is at the lower, left corner of the screen. This transformation takes the lower, left corner of the front face of our cube from $$(-0.5, -0.5)$$ in object space to the raster position $$(X + w/2, Y + h/2)$$, where $$w$$ is the screen width in pixels and $$h$$ is the screen height in pixels; the upper, right corner goes from $$(+0.5, +0.5)$$ to $$(X + w/2 + Width, Y + h/2 + Height)$$. Raster positions are convenient and, in fact, often used in OpenGL; however, they are not quite what we need here.
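These shader properties and uniforms, together with the transformation to raster positions, could be sketched as follows (the property names _X, _Y, _Width, and _Height are illustrative choices, not mandated by Unity):

```glsl
// ShaderLab properties (names are illustrative choices):
//    _X ("X", Float) = 0.0
//    _Y ("Y", Float) = 0.0
//    _Width ("Width", Float) = 128.0
//    _Height ("Height", Float) = 128.0

// corresponding uniforms in the Cg code
uniform float _X, _Y, _Width, _Height;

// transformation of an object-space vertex to a raster position
// in pixels; _ScreenParams.x and _ScreenParams.y are Unity's
// built-in screen width and height in pixels
float2 rasterPosition = float2(
   _X + _ScreenParams.x / 2.0 + _Width * (input.vertex.x + 0.5),
   _Y + _ScreenParams.y / 2.0 + _Height * (input.vertex.y + 0.5));
```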

The output parameter of the vertex shader is in so-called “clip space”, as discussed in the earlier tutorial on vertex transformations. The GPU transforms these coordinates to normalized device coordinates between $$-1$$ and $$1$$ by dividing them by the fourth coordinate in the perspective division. If we set this fourth coordinate to $$1$$, this division doesn't change anything; thus, we can think of the first three coordinates as coordinates in normalized device coordinates, where $$(-1,-1,-1)$$ specifies the lower, left corner of the screen on the near plane and $$(1,1,-1)$$ specifies the upper, right corner on the near plane. In order to specify any screen position as vertex output parameter, we have to specify it in this coordinate system. Fortunately, transforming the $$x$$ and $$y$$ coordinates of the raster position to normalized device coordinates is not too difficult: we divide by the screen width $$w$$ and height $$h$$ in pixels, multiply by $$2$$, and subtract $$1$$. For the $$z$$ coordinate we want to use the coordinate of the near clipping plane; in Unity, this value depends on the platform, so we use Unity's built-in macro UNITY_NEAR_CLIP_VALUE, which specifies the $$z$$ coordinate of the near clipping plane in normalized device coordinates. As you can easily check, this transforms the raster position $$(0,0)$$ to normalized device coordinates $$(-1.0, -1.0)$$ and the raster position $$(w, h)$$ to $$(1.0, 1.0)$$, which is exactly what we need.
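In formulas, with screen width $$w$$ and height $$h$$ in pixels (available in Unity as _ScreenParams.x and _ScreenParams.y), the mapping from raster positions to normalized device coordinates is:

```latex
x_{\text{ndc}} = 2\,\frac{x_{\text{raster}}}{w} - 1, \qquad
y_{\text{ndc}} = 2\,\frac{y_{\text{raster}}}{h} - 1
```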

There is one more complication: sometimes Unity uses a flipped projection matrix where the $$y$$ axis points in the opposite direction. In this case, we have to multiply the $$y$$ coordinate by $$-1$$. We can achieve this by multiplying it with Unity's built-in uniform _ProjectionParams.x, which is $$+1.0$$ for a regular projection matrix and $$-1.0$$ for a flipped one.
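Putting the position computation together (a sketch; rasterPosition is the pixel position computed from the object coordinates, while _ScreenParams, _ProjectionParams, and UNITY_NEAR_CLIP_VALUE are Unity built-ins):

```glsl
// clip coordinates with fourth coordinate 1.0, i.e. effectively
// normalized device coordinates
output.pos = float4(
   2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
   _ProjectionParams.x // +1.0, or -1.0 for a flipped projection
      * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0),
   UNITY_NEAR_CLIP_VALUE, // z coordinate of the near clipping plane
   1.0);
```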

This is all we need for the vertex transformation from object space to screen space. However, we still need to compute appropriate texture coordinates in order to look up the texture image at the correct position. Texture coordinates should be between $$0.0$$ and $$1.0$$, which is easy to compute from the vertex coordinates in object space between $$-0.5$$ and $$+0.5$$: we simply add $$0.5$$ to the $$x$$ and $$y$$ coordinates. With this vertex output parameter, we can then use a simple fragment program to look up the color in the texture image and modulate it with a user-specified color. That's it.
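The texture coordinates and the fragment program could then look like this (a sketch; the names tex, _MainTex, and _Color are conventional Unity choices but assumed here):

```glsl
// in the vertex shader: texture coordinates between 0.0 and 1.0
// from object coordinates between -0.5 and +0.5
output.tex = float2(input.vertex.x + 0.5, input.vertex.y + 0.5);

// for the fragment shader: look up the texture image and
// modulate it with the user-specified color
uniform sampler2D _MainTex;
uniform float4 _Color;

float4 frag(vertexOutput input) : COLOR
{
   return _Color * tex2D(_MainTex, input.tex);
}
```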

Complete Shader Code
If we put all the pieces together, we get the following shader, which uses the Overlay queue to render the object after everything else, and uses alpha blending (see the earlier tutorial on transparency) to allow for transparent textures. It also deactivates the depth test to make sure that the texture is never occluded.

When you use this shader for a cube object, the texture image can appear and disappear depending on the orientation of the camera. This is due to clipping by Unity, which doesn't render objects that are completely outside of the region of the scene that is visible in the camera (the view frustum). This clipping is based on the conventional transformation of game objects, which doesn't make sense for our shader. In order to deactivate this clipping, we can simply make the cube object a child of the camera (by dragging it onto the camera in the Hierarchy Window). If the cube object is then placed in front of the camera, it will always stay in the same relative position, and thus it won't be clipped by Unity (at least not in the game view).
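A possible reconstruction of the complete shader along the lines described above (property names and minor details are plausible choices rather than a verbatim listing):

```glsl
Shader "Cg shader for screen overlays" {
   Properties {
      _MainTex ("Texture", 2D) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128.0
      _Height ("Height", Float) = 128.0
   }
   SubShader {
      Tags { "Queue" = "Overlay" } // render after everything else

      Pass {
         ZTest Always // deactivate the depth test
         Blend SrcAlpha OneMinusSrcAlpha // alpha blending

         CGPROGRAM
         #pragma vertex vert
         #pragma fragment frag
         #include "UnityCG.cginc"

         uniform sampler2D _MainTex;
         uniform float4 _Color;
         uniform float _X, _Y, _Width, _Height;

         struct vertexInput {
            float4 vertex : POSITION;
         };
         struct vertexOutput {
            float4 pos : SV_POSITION;
            float2 tex : TEXCOORD0;
         };

         vertexOutput vert(vertexInput input)
         {
            vertexOutput output;

            // object coordinates to raster position in pixels
            float2 rasterPosition = float2(
               _X + _ScreenParams.x / 2.0
                  + _Width * (input.vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0
                  + _Height * (input.vertex.y + 0.5));

            // raster position to normalized device coordinates
            output.pos = float4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               _ProjectionParams.x // -1.0 for flipped projections
                  * (2.0 * rasterPosition.y / _ScreenParams.y - 1.0),
               UNITY_NEAR_CLIP_VALUE, // near clipping plane
               1.0);

            // texture coordinates from object coordinates
            output.tex = float2(
               input.vertex.x + 0.5, input.vertex.y + 0.5);
            return output;
         }

         float4 frag(vertexOutput input) : COLOR
         {
            return _Color * tex2D(_MainTex, input.tex);
         }
         ENDCG
      }
   }
}
```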

Changes for Opaque Screen Overlays
Many changes to the shader are conceivable, e.g. a different blend mode, or a different depth so that a few objects of the 3D scene appear in front of the overlay. Here, we will only look at opaque overlays.

An opaque screen overlay will occlude triangles of the scene. If the GPU were aware of this occlusion, it wouldn't have to rasterize these occluded triangles (e.g. by using deferred rendering or early depth tests). To make sure that the GPU has a chance to apply these optimizations, we have to render the screen overlay first, by assigning the shader to a render queue that is processed before the rest of the geometry.

Also, we should avoid blending by removing the Blend instruction. With these changes, opaque screen overlays are likely to improve performance instead of costing rasterization performance.
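The two changes could be sketched as follows (the Background queue is one plausible choice for a queue that is rendered before the geometry):

```glsl
// in the SubShader: render the overlay before everything else
Tags { "Queue" = "Background" }

// in the Pass: keep "ZTest Always" but remove the line
//    Blend SrcAlpha OneMinusSrcAlpha
// so that the overlay is rendered as an opaque rectangle
```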

Summary
Congratulations, you have reached the end of another tutorial. We have seen:
 * How to render screen overlays with a Cg shader.
 * How to modify the shader for opaque screen overlays.