Advanced Texture-Mapping
- textures are 2D arrays of 3- or 4-component float vectors (RGB or RGBA)
- that is a LOT of data
- we can put any data in each colour channel, and extract it out
- getting it out can be a bit tricky, because it needs to be relative to each vertex or fragment
5 Dec 2011
Dr Anton Gerdelan apg@bth.se
Single vs. Multi-Pass Rendering
- normally we compute all lighting at once and apply the texture in a single pass
- but we can evaluate some parts of the lighting and texturing separately
- by rendering each result to a texture, we can layer separate 'passes' together and combine the result
- because: graphics hardware has an upper limit on the number of textures it can bind at one time
- and: some techniques may lend themselves to operate on the entire screen (post processing filters - compositors)
- useful for:
- Motion blur, depth-of-field (camera focus), anti-aliasing, soft shadows, planar reflections
- the Effects framework (D3D) allows you to write passes into your shaders/materials, as do higher-level rendering engines (OGRE)
- otherwise we can write our own render to texture (RTT) code
Example of Multi-Texturing
- We would like to modulate (multiply) the diffuse color by a texture
- We want specular highlight to be unmodified
- Pass 1: Compute and interpolate diffuse illumination contribution and
modulate it by the texture
- Pass 2: Compute and interpolate the specular part and render scene
again. Add this result to existing diffusely lit, textured image.
In a Single Pass
In a single pass, the texture modifies the surface with one of these operations:
- Replace
Replace the original surface color with the texture color
Removes any lighting computed for the surface
- Decal
Like replace
If alpha is available, blend the texture color with the underlying color
- Modulate
Multiply the surface color by the texture color
The shaded color is modified by the color texture, giving a shaded, textured surface
Multi-Texturing: Decals
- layer(s) of extra detail (that can be switched)
- pre-baked lighting and shadows
Method
- load 2 images to hardware
- map each to a texture variable in a shader
- create a single 2D float UV for each vertex
- map positions, normals, and uv coords to shaders
- fragment shader mixes using alpha components
Decals, continued
GLSL Fragment Shader
#version 420
in vec2 texCoord;
uniform sampler2D brickTex;
uniform sampler2D splatterTex;
layout (location = 0) out vec4 fragColour;
void main() {
// (work out ADS lighting here...)
vec4 brickTexColour = texture(brickTex, texCoord);
vec4 splatterTexColour = texture(splatterTex, texCoord);
vec4 texColour = mix(brickTexColour, splatterTexColour, splatterTexColour.a);
fragColour = (ambient + diffuse) * texColour + specular;
}
Blend Maps ("Terrain Splatting")
- for every 3 input images
- create a splatting map (1px = 1 terrain vertex)
- 1 colour channel for each input image
- tile textures over terrain using weighted average of each texture for each fragment (i,j)

(hint: the mix() function in GLSL does a weighted average)
- BUT we have two sets of texture coordinates (UV values): one for the big splatting map, and one for the small tiling textures.
divide by width and depth of terrain to get tiled texture UVs:
Pixel Shader Example (HLSL)
Blend Maps, result
- easy to make a terrain "painting" editor that blends textures together nicely
- (use ray casting to 'pick' a triangle, then click mouse to 'paint' nearby vertices with a selected channel)
- when finished, create the splatting coverage image file
- store large, interesting texture without huge memory use
Texturing Animation
- Useful for "cheap" animations: flowing water, conveyor belts
- every frame: change U or V values by some fixed speed * time (maybe use a texture matrix)
- very easy to add to your shader
- combine several animated layers (see "decals") at different speeds for nice effects (clouds, water)
Environment Mapping
- another approximate model of reflection. Blinn, 1976.
- many different reflection mapping techniques exist (see book). Sphere-maps were first, but concept, computation, and art are more complicated, so cube maps (Ned Greene, "Environment Mapping", 1986) dominated.
- not accurate for reflecting small spaces
- do not normally reflect moving objects (though you can, by re-rendering the cube map each frame)
- usually use images to match a "sky box" in the same scene
- use the weighted average with another colour, e.g. mix() function to 'tint' the rendered result
- can look nice when combined with other effects, e.g. to add further shininess to a specular car paint, or a semi-transparent window
- can also be extended to simulate refraction...
Cube Map Design

Requires 6 square images of the same size.
Can be made from 6 photographs taken at 90 degree angles.
Usually made by an artist. Some modelling software will let you export a cube map.
Usually used with outdoor scenes, but can also be used for "detailed specular highlights" (4 squares of light from a window).
check this out:
http://brainwagon.org/images/escher.env.jpg
image source: anim8or manual, www.anim8or.com
Cube Map UV Calculation
- Get incident ray from camera to surface (vertex) point
- Reflect ray on other side of normal (angle of reflection = angle of incidence)
- Because our cube has 6 faces (+x, -x, +y, -y, +z, -z),
the component of our ray with the largest absolute value tells us which axis the ray hits, and its sign tells us whether it is the + or - face.
- we now have only a 2D problem: divide the other 2 components by the absolute value of the largest component to get U and V values.
e.g. our (normalised) ray has xyz of (0.80, 0.27, 0.53).
The biggest component is 0.80 (+x), so we know that we will hit the right face of the cube.
Our U,V values are 0.27/0.80 and 0.53/0.80
∴ UV = (0.34, 0.66)
but actually these are in the range -1 to 1, so we convert to the range 0 to 1:
UV = ((1 + 0.34) / 2, (1 + 0.66) / 2) = (0.67, 0.83)
Cube Map: Example, CPU set-up
Cube-maps are a common effect, so there are presets in OpenGL and Direct3D for creating and sampling them. Here we create 1 texture variable in OpenGL, but set the type as "GL_TEXTURE_CUBE_MAP". OpenGL then expects us to give it 6 different loaded images, which we load in the normal way. The sampler function will treat our reflected vector as a texture coordinate with 3 components (S, T, and R), so it expects us to give it 3 wrapping options. We will use 'clamp' to avoid any accidental tiling.
// enable texture unit 0 (everything after here is part of texture unit 0)
glActiveTexture(GL_TEXTURE0);
// create 1 new texture variable of type GL_TEXTURE_CUBE_MAP
GLuint texID;
glGenTextures(1, &texID); // create new texture variable
glBindTexture(GL_TEXTURE_CUBE_MAP, texID); // enable new texture variable
// keywords for each texture. in an array so we can use them in a loop
GLenum targets[] = {
GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X,
GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y,
GL_TEXTURE_CUBE_MAP_POSITIVE_Z, GL_TEXTURE_CUBE_MAP_NEGATIVE_Z
};
// load in all 6 images
for( int i = 0; i < 6; i++ ) {
// --insert here-- load each image as normal
// copy each image to hardware
glTexImage2D(targets[i], 0, GL_RGBA, imageData->w, imageData->h,
0, GL_RGBA, GL_UNSIGNED_BYTE, imageData->data);
}
// set wrap modes and filters. note that cube maps need S, T, AND R wrap modes
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
Cube Map: Example Vertex Shader
Here we work out the reflection vector. The camera position is in world coordinates, so we transform everything else from local coordinates to world coordinates in this shader before doing the calculation. This means sending in the model matrix as a separate uniform.
// A GLSL vertex shader for rendering a cube-map
#version 400
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;
layout (location = 2) in vec2 VertexTexCoord; // UV
out vec3 ReflectDir; // reflection vector
uniform vec3 WorldCameraPosition; // we need the camera position
uniform mat4 ModelViewMatrix; // view matrix * model matrix
uniform mat4 ModelMatrix; // send me in separately!
uniform mat3 NormalMatrix; // glm::inverseTranspose(view * model)
uniform mat4 ProjectionMatrix; // found in the resize callback
uniform mat4 MVP; // projection * view * model
void main() {
// convert vertex position to world coordinates
vec3 worldPos = vec3(ModelMatrix * vec4(VertexPosition,1.0));
// convert surface normal to world coordinates (w = 0: no translation)
vec3 worldNorm = normalize(vec3(ModelMatrix * vec4(VertexNormal, 0.0)));
// get vector from camera to vertex
vec3 incident = normalize(worldPos - WorldCameraPosition);
// get reflection out the other side of the surface normal
ReflectDir = reflect(incident, worldNorm);
// also output the position in clip space
gl_Position = MVP * vec4(VertexPosition,1.0);
}
Cube Map: Example Fragment Shader
Here the sampler function "texture()" takes our reflection xyz vector and does the conversion to uv coordinates (as in the slide) for us. We then compute a weighted average of our original surface colour, and the colour from the cubemap texture.
// GLSL fragment shader for rendering using a cube map
#version 400
in vec3 ReflectDir; // input from vertex shader
uniform samplerCube CubeMapTex; // the cube map texture created on CPU
uniform float ReflectFactor; // between 0 (no reflection) and 1 (fully reflective)
uniform vec4 MaterialColor; // original texture (just a colour here, but we could use a texture)
layout( location = 0 ) out vec4 FragColor; // final output colour
void main() {
// Access the cube map texture. the sampler does the calculation from xyz vector to uv coordinate
vec4 cubeMapColor = texture(CubeMapTex, ReflectDir);
// output is a weighted average of original and cube colour
FragColor = mix(MaterialColor, cubeMapColor, ReflectFactor);
}