OpenGL Programming/Intermediate/Textures

Types
There are several different types of textures that can be used with OpenGL:
 * GL_TEXTURE_1D: This is a one dimensional texture. (Requires OpenGL 1.0)
 * GL_TEXTURE_2D: This is a two dimensional texture (it has both a width and a height). (Requires OpenGL 1.0)
 * GL_TEXTURE_3D: This is a three dimensional texture (has a width, height and a depth). (Requires OpenGL 1.2)
 * GL_TEXTURE_CUBE_MAP: Cube maps are similar to 2D textures but generally store six images inside the texture. Special texture mapping is used to map these images onto a virtual sphere. (Requires OpenGL 1.3)
 * GL_TEXTURE_RECTANGLE_ARB: This texture type is much like the two dimensional texture but supports non-power-of-two sized textures. (Requires the ARB_texture_rectangle extension)

Basic usage
 GLuint theTexture;
 glGenTextures(1, &theTexture);
 glBindTexture(GL_TEXTURE_2D, theTexture);
 glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);
 // ...
 glTexImage2D(...);
 // draw stuff here
 glDeleteTextures(1, &theTexture);

When you set up a new texture, you will usually have a pixel array in memory, as well as the dimensions of the image. (Image libraries or OpenGL wrappers might provide you with routines to read various graphic formats and directly make textures from them, but in the end it is all the same.)

For that, you first tell OpenGL to give you a new texture "template", which you then select in order to work with it. You set various parameters, such as how the texture is drawn. For textures with an alpha channel, it is possible to have the objects behind them show through, but this does not play well with the depth buffer: the depth buffer does not know the texture is translucent, marks the object in front as solid, and will not draw the objects behind it. To get proper results you then need to sort the objects by their distance to the camera yourself and draw them back to front. That is not covered here, by the way.

Finally you give OpenGL the pixel array and it will load the texture into memory.
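The steps just described can be sketched as follows; the function name is illustrative, and `pixels`, `width` and `height` are assumed to come from your image loader:

```c
#include <GL/gl.h>

/* Sketch of the setup steps described above (not the book's exact code). */
GLuint make_texture(const unsigned char *pixels, int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);            /* ask OpenGL for a new texture "template" */
    glBindTexture(GL_TEXTURE_2D, tex); /* select it for the following calls */

    /* parameters: how the texture is sampled when drawn smaller or larger */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* hand the pixel array to OpenGL; it copies the data into texture memory */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```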

Let's write a function that allows us to read the pixel array from a file, specifying the dimensions as parameters:

The function expects image files that contain all pixel values in the order R, G, B, A. Such files can be created with GIMP, for example: make the image, merge all layers (necessary, because the RAW export module is broken), give it an alpha channel, save it, and select RAW in the list of file types (at the bottom). (A recent version, 2.3 or later, may be needed; I had problems with 2.2.)

It returns an OpenGL texture ID, which you can select with glBindTexture(GL_TEXTURE_2D, texture) (passing the ID as texture).
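A minimal sketch of such a loader, assuming a raw RGBA file as exported by GIMP; the function name and the (deliberately thin) error handling are illustrative, not the book's exact code:

```c
#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>

/* Reads width*height RGBA pixels from a RAW file and creates a 2D
 * texture from them.  Returns the texture ID, or 0 on failure. */
GLuint load_raw_texture(const char *filename, int width, int height)
{
    size_t size = (size_t)width * height * 4;  /* 4 bytes per pixel: R,G,B,A */
    unsigned char *pixels = malloc(size);
    FILE *f = fopen(filename, "rb");
    GLuint tex = 0;

    if (f && pixels && fread(pixels, 1, size, f) == size) {
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    }
    if (f) fclose(f);
    free(pixels);
    return tex;
}
```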

Now that we have loaded our texture, let's see how we can use it:

Example
Wow, that was a lot of code. Let's try to understand it, starting with main:

The first lines set up GLUT and are not worth being explained further.

init sets up the canvas and the light and loads our texture into a global variable. Since we only have one texture here, the global is not strictly needed, as we never switch textures, but it's there anyway. (You switch textures using glBindTexture, see above.)

Then some callbacks are set up, which also need little comment, except for the display function. First the camera is set up (if you were to change the perspective during the simulation, this is where you would do it; in this case it's redundant). Then the canvas is cleared (you should already know that).

Now we switch texture mode on, which means everything we draw from now on will be textured. The currently selected texture will be used, which in this case is the one we created before (it is still selected, since it was selected during setup). Then a square is created in the XY plane. Note the glTexCoord2d calls: they define which part of the texture is assigned to the next vertex. We will make more use of this in another example.

Eh, and then it's drawn. It didn't really hurt, did it?
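The core of the display function described above might look like this; the vertex positions and function name are illustrative, not the book's exact listing:

```c
#include <GL/gl.h>

extern GLuint texture;  /* the ID returned by the texture loader */

/* Draw one textured square in the XY plane. */
void draw_textured_square(void)
{
    glEnable(GL_TEXTURE_2D);  /* everything drawn now is textured */
    glBindTexture(GL_TEXTURE_2D, texture);

    glBegin(GL_QUADS);
    /* each glTexCoord2d assigns a texture corner to the next vertex */
    glTexCoord2d(0.0, 0.0); glVertex2d(-1.0, -1.0);
    glTexCoord2d(1.0, 0.0); glVertex2d( 1.0, -1.0);
    glTexCoord2d(1.0, 1.0); glVertex2d( 1.0,  1.0);
    glTexCoord2d(0.0, 1.0); glVertex2d(-1.0,  1.0);
    glEnd();

    glDisable(GL_TEXTURE_2D);
}
```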



Place a RAW image called texture.raw, 256x256 RGBA, in the working directory. Such files can be created with some graphics editors, including GIMP.

A simple libpng example
This C++ code snippet is an example of loading a PNG image file into an OpenGL texture object. It requires libpng and OpenGL to work. To compile with gcc, link against png, glu32 and opengl32. Most of this is taken straight from the libpng manual. There is no checking or conversion of the PNG format to the OpenGL texture format; this just gives the basic idea.
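A sketch of such a loader, following the libpng manual's read sequence; it assumes the file is already 8-bit RGBA, since (as noted above) there is no format checking or conversion here:

```c
#include <stdio.h>
#include <stdlib.h>
#include <setjmp.h>
#include <png.h>
#include <GL/gl.h>

/* Load a PNG file (assumed 8-bit RGBA) into an OpenGL texture.
 * Returns the texture ID, or 0 on failure. */
GLuint load_png_texture(const char *filename)
{
    FILE *fp = fopen(filename, "rb");
    if (!fp) return 0;

    png_structp png = png_create_read_struct(PNG_LIBPNG_VER_STRING,
                                             NULL, NULL, NULL);
    png_infop info = png_create_info_struct(png);
    if (setjmp(png_jmpbuf(png))) {  /* libpng jumps here on error */
        png_destroy_read_struct(&png, &info, NULL);
        fclose(fp);
        return 0;
    }

    png_init_io(png, fp);
    png_read_info(png, info);

    png_uint_32 width  = png_get_image_width(png, info);
    png_uint_32 height = png_get_image_height(png, info);
    size_t rowbytes    = png_get_rowbytes(png, info);

    /* read all rows into one contiguous buffer */
    unsigned char *pixels = malloc(rowbytes * height);
    png_bytep *rows = malloc(height * sizeof(png_bytep));
    for (png_uint_32 i = 0; i < height; i++)
        rows[i] = pixels + i * rowbytes;
    png_read_image(png, rows);

    png_destroy_read_struct(&png, &info, NULL);
    fclose(fp);
    free(rows);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);
    return tex;
}
```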

Specifying texture coordinates
Texture coordinates assign particular points in the texture to vertices. This lets you save texture memory and can make your work a bit easier. For example, a die has six different sides. You can put the sides next to each other in one texture and say "this side should get the leftmost sixth of my texture".

This makes even more sense when modeling complex objects like humans, where you have the body, legs, arms, head, and so on. If you used a separate texture file for every part, you would quickly end up with a lot of texture files that are terribly inefficient to manage. It is better to put all parts into one file and select the appropriate part when drawing; this also performs much better.

You saw texture coordinates already in the previous example, although the selected part was the whole texture. The two parameters given to glTexCoord2d are numbers ranging from 0 to 1. (This has the advantage of being size-independent - you can later decide to use a higher-resolution texture and do not need to change the rendering code!)

Tips

 * Switching textures is inefficient. Try to draw all objects with texture A first, then switch textures and draw everything with texture B, and so on. (With alpha textures this cannot always be done, as you then have to order all objects yourself. An article about this might follow soon.)
 * Try to combine small textures into one large one and select the part you want with texture coordinates. This reduces the memory overhead and also reduces the number of texture switches.