GLSL Programming/Unity/Screen Overlays

This tutorial covers screen overlays, which are also known as “GUI Textures” in Unity.

It is the first in a series of tutorials on non-standard vertex transformations, which deviate from the standard vertex transformations described in earlier tutorials. This particular tutorial builds on texturing and blending, which were also covered in earlier tutorials.

Unity's GUI Textures
There are many applications for screen overlays (i.e. GUI textures in Unity's terminology), e.g. titles as in the image, but also other GUI (graphical user interface) elements such as buttons or status information. The common feature of these elements is that they should always appear on top of the scene and never be occluded by any other objects. Neither should these elements be affected by any of the camera movements. Thus, the vertex transformation should go directly from object space to screen space. Unity's GUI textures allow us to render this kind of element by rendering a texture image at a specified position on the screen. This tutorial tries to reproduce the functionality of GUI textures with the help of shaders. Usually, you would still use GUI textures instead of such a shader; however, the shader allows for a lot more flexibility since you can adapt it in any way you want while GUI textures only offer a limited set of possibilities. (For example, you could change the shader such that the GPU spends less time on rasterizing the triangles that are occluded by an opaque GUI texture.)

Simulating GUI Textures with a GLSL Shader
The position of Unity's GUI textures is specified by an x and a y coordinate of the lower, left corner of the rendered rectangle in pixels, with $$(0,0)$$ at the center of the screen, and by a width and a height of the rendered rectangle in pixels. To simulate GUI textures, we use similar shader properties and corresponding uniforms in our shader.

For the actual object, we could use a mesh that consists of just two triangles to form a rectangle. However, we can also just use the default cube object, since back-face culling (and culling of triangles that are degenerated to edges) ensures that only two triangles of the cube are rasterized. The corners of the default cube object have coordinates $$-0.5$$ and $$+0.5$$ in object space; i.e., the lower, left corner of the rectangle is at $$(-0.5, -0.5)$$ and the upper, right corner is at $$(+0.5, +0.5)$$.

To transform these coordinates to the user-specified coordinates in screen space, we first transform them to raster positions in pixels, where $$(0,0)$$ is at the lower, left corner of the screen. This transformation maps the lower, left corner of the front face of our cube from $$(-0.5, -0.5)$$ in object space to the raster position of the rectangle's lower, left corner (the user-specified x and y coordinates plus half the screen width and height in pixels, since the user-specified coordinates are relative to the screen center). The upper, right corner is mapped from $$(+0.5, +0.5)$$ to the raster position that is offset from the lower, left corner by the rectangle's width and height. Raster positions are convenient and, in fact, they are often used in OpenGL; however, they are not quite what we need here.
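In GLSL, this first step could be sketched as follows. The property and uniform names (`_X`, `_Y`, `_Width`, `_Height`) are our own choices for this sketch, while `_ScreenParams` is Unity's built-in uniform whose `x` and `y` components hold the screen width and height in pixels:

```glsl
// Shader properties mirroring Unity's GUI texture parameters
// (names are assumed for this sketch):
//    _X ("X", Float) = 0.0
//    _Y ("Y", Float) = 0.0
//    _Width ("Width", Float) = 128.0
//    _Height ("Height", Float) = 128.0
uniform float _X;
uniform float _Y;
uniform float _Width;
uniform float _Height;

uniform vec4 _ScreenParams;
   // built-in: x = screen width, y = screen height (in pixels)

// In the vertex shader: map object coordinates in [-0.5, +0.5]
// to raster positions in pixels, with (0,0) at the lower, left
// corner of the screen.
vec2 rasterPosition = vec2(
   _X + _ScreenParams.x / 2.0 + _Width * (gl_Vertex.x + 0.5),
   _Y + _ScreenParams.y / 2.0 + _Height * (gl_Vertex.y + 0.5));
```

With this mapping, the object-space corner $$(-0.5, -0.5)$$ lands exactly at the user-specified lower, left corner of the rectangle, and $$(+0.5, +0.5)$$ lands at the upper, right corner.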

The output of the vertex shader in gl_Position is in the so-called “clip space,” as discussed in an earlier tutorial. The GPU transforms these coordinates to normalized device coordinates between $$-1$$ and $$+1$$ by dividing them by the fourth coordinate gl_Position.w in the perspective division. If we set this fourth coordinate to $$1$$, this division doesn't change anything; thus, we can think of the first three coordinates of gl_Position as normalized device coordinates, where $$(-1,-1,-1)$$ specifies the lower, left corner of the screen on the near plane and $$(1,1,-1)$$ specifies the upper, right corner on the near plane. (We should use the near plane to make sure that the rectangle is in front of everything else.) In order to specify any screen position in gl_Position, we have to specify it in this coordinate system. Fortunately, transforming a raster position to normalized device coordinates is not too difficult: we divide the x and y coordinates by the screen width and height, respectively, multiply by $$2$$, and subtract $$1$$. As you can easily check, this transforms the raster position $$(0,0)$$ to normalized device coordinates $$(-1.0, -1.0)$$ and the raster position (screen width, screen height) to $$(1.0, 1.0)$$, which is exactly what we need.
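Assuming a raster position in a variable named rasterPosition and Unity's built-in _ScreenParams uniform, the vertex shader could set gl_Position like this:

```glsl
// Transform the raster position to normalized device coordinates
// and place the rectangle on the near plane (z = -1.0).
// Setting w = 1.0 makes the perspective division a no-op.
gl_Position = vec4(
   2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
   2.0 * rasterPosition.y / _ScreenParams.y - 1.0,
   -1.0, // near clipping plane
   1.0);
```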

This is all we need for the vertex transformation from object space to screen space. However, we still need to compute appropriate texture coordinates in order to look up the texture image at the correct position. Texture coordinates should be between $$0.0$$ and $$1.0$$, which is easy to compute from the vertex coordinates in object space between $$-0.5$$ and $$+0.5$$: we just add $$0.5$$ to the x and y coordinates. With a varying variable for these texture coordinates, we can then use a simple fragment shader to look up the color in the texture image and modulate it with a user-specified color. That's it.
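This last step could be sketched as follows; the names textureCoords, _MainTex, and _Color are our own choices for the varying variable, the texture property, and the color property:

```glsl
varying vec2 textureCoords;
   // texture coordinates passed from vertex to fragment shader

// In the vertex shader: map object coordinates in [-0.5, +0.5]
// to texture coordinates in [0.0, 1.0]:
//    textureCoords = gl_Vertex.xy + vec2(0.5);

uniform sampler2D _MainTex; // the overlay texture
uniform vec4 _Color;        // user-specified tint color

// Fragment shader: look up the texture image at the interpolated
// texture coordinates and modulate it with the color.
void main()
{
   gl_FragColor = _Color * texture2D(_MainTex, textureCoords);
}
```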

Complete Shader Code
If we put all the pieces together, we get the following shader, which uses the "Overlay" queue to render the object after everything else, and uses alpha blending (as discussed in an earlier tutorial) to allow for transparent textures. It also deactivates the depth test to make sure that the texture is never occluded.

When you use this shader for a cube object, the texture image can appear and disappear depending on the orientation of the camera. This is due to clipping by Unity, which doesn't render objects that are completely outside of the region of the scene that is visible in the camera (the view frustum). This clipping is based on the conventional transformation of game objects, which doesn't make sense for our shader. In order to deactivate this clipping, we can simply make the cube object a child of the camera (by dragging it onto the camera in the Hierarchy View). If the cube object is then placed in front of the camera, it will always stay in the same relative position, and thus it won't be clipped by Unity. (At least not in the game view.)
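Putting the pieces together, the complete shader could look like this reconstruction; the shader name and the property names (_MainTex, _Color, _X, _Y, _Width, _Height) are our own choices, while _ScreenParams, the "Overlay" queue, Blend, and ZTest are standard Unity features:

```glsl
Shader "GLSL shader for screen overlays" {
   Properties {
      _MainTex ("Texture", 2D) = "white" {}
      _Color ("Color", Color) = (1.0, 1.0, 1.0, 1.0)
      _X ("X", Float) = 0.0
      _Y ("Y", Float) = 0.0
      _Width ("Width", Float) = 128.0
      _Height ("Height", Float) = 128.0
   }
   SubShader {
      Tags { "Queue" = "Overlay" } // render after everything else

      Pass {
         Blend SrcAlpha OneMinusSrcAlpha // use alpha blending
         ZTest Always // deactivate the depth test

         GLSLPROGRAM

         // User-specified uniforms
         uniform sampler2D _MainTex;
         uniform vec4 _Color;
         uniform float _X;
         uniform float _Y;
         uniform float _Width;
         uniform float _Height;

         // Built-in uniform:
         // x = screen width, y = screen height (in pixels)
         uniform vec4 _ScreenParams;

         varying vec2 textureCoords;

         #ifdef VERTEX

         void main()
         {
            // object space -> raster position in pixels,
            // with (0,0) at the lower, left screen corner
            vec2 rasterPosition = vec2(
               _X + _ScreenParams.x / 2.0
                  + _Width * (gl_Vertex.x + 0.5),
               _Y + _ScreenParams.y / 2.0
                  + _Height * (gl_Vertex.y + 0.5));

            // raster position -> normalized device coordinates
            // on the near plane (z = -1.0), with w = 1.0 so that
            // the perspective division changes nothing
            gl_Position = vec4(
               2.0 * rasterPosition.x / _ScreenParams.x - 1.0,
               2.0 * rasterPosition.y / _ScreenParams.y - 1.0,
               -1.0,
               1.0);

            // object coordinates in [-0.5, +0.5]
            // -> texture coordinates in [0.0, 1.0]
            textureCoords = gl_Vertex.xy + vec2(0.5);
         }

         #endif

         #ifdef FRAGMENT

         void main()
         {
            gl_FragColor = _Color
               * texture2D(_MainTex, textureCoords);
         }

         #endif

         ENDGLSL
      }
   }
}
```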

Changes for Opaque Screen Overlays
Many changes to the shader are conceivable, e.g. a different blend mode or a different depth to have a few objects of the 3D scene in front of the overlay. Here we will only look at opaque overlays.

An opaque screen overlay will occlude triangles of the scene. If the GPU were aware of this occlusion, it wouldn't have to rasterize these occluded triangles (e.g. by using deferred rendering or early depth tests). In order to make sure that the GPU has any chance to apply these optimizations, we have to render the screen overlay first, by setting the render queue to "Background" instead of "Overlay".

Also, we should avoid blending by removing the Blend instruction. With these changes, opaque screen overlays are likely to improve performance instead of costing rasterization performance.
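Relative to the overlay shader above, the two changes could look like this sketch (the GLSL program itself stays the same):

```glsl
SubShader {
   Tags { "Queue" = "Background" } // render before everything else

   Pass {
      // the Blend instruction is removed: the overlay is opaque
      ZTest Always // still deactivate the depth test

      // GLSLPROGRAM ... ENDGLSL unchanged from the overlay shader
   }
}
```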

Summary
Congratulations, you have reached the end of another tutorial. We have seen:
 * How to simulate GUI textures with a GLSL shader.
 * How to modify the shader for opaque screen overlays.