Title: Texture Mapping
1. Part X: Texture Mapping
2. What is Texture Mapping?
- Texturing modifies the values used in the lighting equation to diminish the shiny plastic effect produced by the simple lighting equation.
- The key step in texture mapping is surface parameterization, which maps between the texture image and the object's surface to be textured.
[Figure: a texture image mapped onto an object surface given in local coordinates]
3. Surface Parameterization
- Surface parameterization is like wall-paper wrapping, and so inevitably ad hoc. Imagine wall-paper wrapping for a sphere.
- Once the surface parameterization is done, the well-developed LC-to-WdC mapping is invoked. At WdC, the texture colors are combined with the colors computed through lighting and shading.
4. Projector Function
- Surface parameterization is often described by two successive functions: the projector function and the corresponder function.
- For real-time rendering applications, the projector function typically assigns a normalized (s,t) value pair to each vertex of the mesh.
[Figure: local-coordinate vertices such as (-2.3, 7.1, 88.2), (-9.3, 0.2, 15.9), and (1.5, -8.9, 34.0) are mapped by the projector function to parameter-space values such as (0.99, 0.99), (0, 0.99), and (0, 0); the corresponder function then maps these to texture-space values such as (336.6, 247.5), (0, 247.5), and (0, 0), where the texture indices run over 0..339 and 0..249]
5. Projector Function (cont'd)
[Figure: a planar projection maps the local-coordinate vertices (-2.3, 7.1, 88.2), (-9.3, 0.2, 15.9), and (1.5, -8.9, 34.0) to the parameter-space values (0.99, 0.99), (0, 0.99), and (0, 0)]
- The parameter-space values s and t are in the range [0,1).
- Projector functions include spherical, cylindrical, box, and planar functions. The above example is a planar projector function, which works like a slide projector shining a transparency onto the box face.
- In real-time renderers, projector functions are usually applied at the modeling stage, and the results are stored at the vertices; a sketch follows.
- Non-interactive renderers often call the projector functions on the fly.
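Below is a minimal sketch of a planar projector, assuming an orthographic projection onto the local xy-plane; the names and the bounding-rectangle normalization are illustrative, not taken from the slides.

    struct Vec3 { float x, y, z; };   // a vertex in local coordinates
    struct ST   { float s, t; };      // normalized parameter-space values

    // Project one vertex onto the xy-plane and normalize it by the mesh's
    // bounding rectangle [minX,maxX] x [minY,maxY] (hypothetical helper).
    ST planarProject(const Vec3& p, float minX, float maxX,
                     float minY, float maxY) {
        ST st;
        st.s = (p.x - minX) / (maxX - minX);   // x normalized toward [0,1)
        st.t = (p.y - minY) / (maxY - minY);   // y normalized toward [0,1)
        return st;
    }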
6. Corresponder Function
[Figure: the corresponder function maps the parameter-space values (0.99, 0.99), (0, 0.99), and (0, 0) to the texture-space values (336.6, 247.5), (0, 247.5), and (0, 0)]
- Given a texture image of resolution 340x250, s in [0,1) corresponds to [0,339], and t in [0,1) to [0,249]. So, the corresponder function may simply multiply s by 340 and t by 250. For example, (0, 0.99) maps to (0, 247.5).
- Such texture-space values are interpolated for shading, in the same way the lighting colors are interpolated by Gouraud shading.
- In a simple scenario, for each pixel to be colored, we can drop the fractions of the interpolated texture-space values (e.g. to get (0, 247)), and use the results as indices into the image texture to retrieve a texel, as sketched below.
- Also in a simple scenario, the texel color can replace the lighting color.
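A minimal sketch of this simple corresponder-and-lookup path, with assumed names and an assumed row-major image layout:

    struct RGB { unsigned char r, g, b; };

    // Scale (s,t) in [0,1) by the texture resolution, drop the fractions,
    // and fetch the texel at the resulting integer indices.
    RGB fetchTexel(const RGB* image, int width, int height, float s, float t) {
        float u = s * width;          // e.g. 0.99 * 340 = 336.6
        float v = t * height;         // e.g. 0.99 * 250 = 247.5
        int i = (int)u;               // drop the fraction: 336
        int j = (int)v;               // drop the fraction: 247
        return image[j * width + i];  // retrieve the texel
    }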
7. Corresponder Functions (cont'd)
- The parameter-space values (s,t) are not necessarily in the range [0,1). Corresponder functions determine the behavior of (s,t) outside the range [0,1); the four common modes are listed below and sketched in code afterwards.
- wrap, repeat, or tile: repeat the image across the surface by dropping the integer part of the parameter values, i.e. (s,t) becomes (s - ⌊s⌋, t - ⌊t⌋). So, the left/right edges and the top/bottom edges of the image should match.
- mirror: the image is mirrored on every other repetition, which is good for providing some continuity along the edges of the texture.
- clamp: values outside the range [0,1) are clamped to those at the edges of the image texture.
- border: parameter values outside [0,1) are rendered with a separately defined border color.
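A minimal sketch of the four behaviors for an out-of-range s (assumed names; the same logic applies to t):

    #include <cmath>
    #include <algorithm>

    float wrapMode(float s)   { return s - std::floor(s); }  // repeat/tile

    float mirrorMode(float s) {
        float f = s - 2.0f * std::floor(s / 2.0f);  // position in the period [0,2)
        return (f < 1.0f) ? f : 2.0f - f;           // reflect every other repetition
    }

    float clampMode(float s)  { return std::min(std::max(s, 0.0f), 1.0f); }

    bool  useBorder(float s)  { return s < 0.0f || s >= 1.0f; }  // border color if true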
8. Texture Blending Operations: Replace
- Recall that, in Gouraud shading, the lighting equation is evaluated per vertex and the RGB colors at the vertices are interpolated at the rasterizer stage.
- The texel RGB values obtained in the texture mapping process should interact with the colors computed by the lighting equation: replace, modulate, or decal.
- In the replace mode, any lighting computed for the surface is replaced by the texture. So, the texture's color always appears the same regardless of changing light conditions. This is good when, for example, drawing a can with an opaque label.
9. Texture Blending Operations: Modulate
- In the modulate mode, the lighting color is multiplied by the texture color.
- The modeler sets the (s,t) values at the vertices.
- A white material is typically used in computing the lighting at each vertex.
- The computed lighting and texture-space values are interpolated across the polygon.
- At each pixel, the texel color is obtained and modulated/multiplied by the lighting color.
10. Texture Blending Operations: Decaling
- Suppose you have a tree texture and do not want its background to affect the scene.
- Extend the RGB texture map into RGBA, and assign an α value of 0 to each texel that should be transparent.
- It's called decaling (or alpha mapping, in general), which is often used for, e.g., an insignia on an airplane wing.
- In general, decaling refers to drawing one image atop another.
- Decaling is different from replace and modulate.
[Figure: a texture map and its (1-bit) α-map]
11. Texture Blending Operations: Summary
- The replace, modulate, and decal modes can be described as follows (see the sketch below):
  - replace: C_f = C_t and A_f = A_t
  - modulate: C_f = C_t C_l and A_f = A_t A_l
  - decal: C_f = (1 - A_t) C_l + A_t C_t and A_f = A_l
- where the subscript f denotes the final color, t the texture color, and l the lighting color.
- In some systems, A_f is often set to A_t A_l for implementing the decal mode.
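A minimal sketch of the three modes on normalized RGBA values in [0,1] (assumed representation):

    struct RGBA { float r, g, b, a; };

    RGBA replaceMode(RGBA tex, RGBA /*lit*/) {
        return tex;                                      // Cf = Ct, Af = At
    }
    RGBA modulateMode(RGBA tex, RGBA lit) {
        return { tex.r * lit.r, tex.g * lit.g,           // Cf = Ct * Cl
                 tex.b * lit.b, tex.a * lit.a };         // Af = At * Al
    }
    RGBA decalMode(RGBA tex, RGBA lit) {
        float k = tex.a;                                 // At
        return { (1 - k) * lit.r + k * tex.r,            // Cf = (1-At)Cl + At*Ct
                 (1 - k) * lit.g + k * tex.g,
                 (1 - k) * lit.b + k * tex.b,
                 lit.a };                                // Af = Al
    }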
12. Scan-line Algorithm and Interpolation
- The scan-line algorithm plays a key role at the rasterizing stage.
- Lighting colors, texture colors, and z values are assigned at each vertex, and interpolated by the scan-line algorithm, as sketched below.
[Figure: a triangle on a pixel grid; per-vertex values are interpolated along each scan line]
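A minimal sketch of the per-span interpolation (assumed names); any per-vertex attribute (a lighting color channel, a texture coordinate, or z) is treated the same way:

    // Linearly interpolate one attribute across a scan-line span
    // whose endpoints carry the values aLeft and aRight.
    void interpolateSpan(int xLeft, int xRight,
                         float aLeft, float aRight, float* out) {
        for (int x = xLeft; x <= xRight; ++x) {
            float w = (xRight == xLeft) ? 0.0f
                    : (float)(x - xLeft) / (float)(xRight - xLeft);
            out[x - xLeft] = aLeft + w * (aRight - aLeft);  // value at pixel x
        }
    }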
13. Magnification
- Consider two polygons to be textured: one is smaller than the image and the other is bigger.
- Magnification can be depicted as follows.
[Figure: minification vs. magnification of a texel grid by the pixel grid; under magnification, there are more pixels than texels!]
14. Magnification
- Two common techniques for magnification are nearest neighbor and bilinear interpolation. In general, bilinear interpolation is better; a sketch follows the figure.
[Figure: nearest neighbor picks the texel at (⌊u+0.5⌋, ⌊v+0.5⌋), while bilinear interpolation blends the four surrounding texels]
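A minimal sketch of bilinear magnification for a single-channel texture (assumed names); (u,v) are continuous texel-space coordinates:

    #include <cmath>

    float texel(const float* img, int w, int x, int y) { return img[y * w + x]; }

    float bilinear(const float* img, int w, float u, float v) {
        int   x  = (int)std::floor(u), y  = (int)std::floor(v);
        float fu = u - x,              fv = v - y;   // fractional offsets
        float c00 = texel(img, w, x,     y    );
        float c10 = texel(img, w, x + 1, y    );
        float c01 = texel(img, w, x,     y + 1);
        float c11 = texel(img, w, x + 1, y + 1);
        return (1 - fu) * (1 - fv) * c00 + fu * (1 - fv) * c10   // weighted average
             + (1 - fu) * fv * c01       + fu * fv * c11;        // of the 4 texels
    }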
15. Minification
- Minification can be depicted as follows.
- We can also use nearest neighbor or bilinear interpolation, but these two may cause severe aliasing problems.
[Figure: under minification, there are fewer pixels than texels, and a pixel may be influenced by more than 4 texels. Imagine a texture where one marked texel is black and all the others are white: both nearest neighbor and bilinear interpolation can miss the black texel entirely (no influence)]
16. Mipmapping
- It's the most popular method of antialiasing for textures, where "mip" stands for multum in parvo (many things in a small place).
- The texture image size is restricted to 2^m x 2^n texels, or sometimes even to a square of 2^m x 2^m.
- The texture is downsampled to a quarter of the original area. Each new texel value is typically computed as the average of the four neighbor texels; it's a box filter. We could also use cone or Gaussian filters. The reduction is performed recursively until one or both of the dimensions of the texture equal one texel. A sketch of one reduction step follows the figure.
[Figure: mipmap levels 0, 1, and 2; each texel of a level is the box-filtered average of four texels at the level below]
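A minimal sketch of one box-filter reduction step for a single-channel texture (assumed names); applying it recursively builds the mipmap pyramid:

    // Halve a w x h image (w and h even): each destination texel is the
    // average of the 2x2 source block it replaces.
    void downsample(const float* src, int w, int h, float* dst) {
        int dw = w / 2, dh = h / 2;
        for (int y = 0; y < dh; ++y)
            for (int x = 0; x < dw; ++x)
                dst[y * dw + x] = 0.25f *
                    (src[(2 * y)     * w + 2 * x] + src[(2 * y)     * w + 2 * x + 1] +
                     src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1]);
    }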
17. Mipmapping (cont'd)
- Consider the two perfect cases:
  - If a pixel covers 2^2 x 2^2 texels, go to level 2 and get a texel.
  - If a pixel covers 2^1 x 2^1 texels, go to level 1 and get a texel.
- In general, which level should we go to?
[Figure: a pixel's center located in levels 0, 1, and 2 of the mipmap; d denotes the level of detail (LOD)]
18. Mipmapping (cont'd)
- We could use the longer edge of the quadrilateral formed by the pixel's cell to compute d. (In fact, using differentials is more popular.)
- A pixel's center is assigned a texture-space value.
- Let's approximate the pixel's quadrilateral by connecting the texture-space values of the 4 adjacent pixels.
- In the example, the longest edge has a length of about 4, so go to level d = log₂4 = 2; a sketch follows the figure.
- As the pixel center normally does not coincide with a texel's center, we need bilinear interpolation.
[Figure: a pixel's quadrilateral in the texel grid, with a longest edge of length about 4, shown at levels 0, 1, and 2]
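A minimal sketch of the longest-edge LOD estimate (assumed names); e1 and e2 are the texture-space edge vectors of the pixel's approximated quadrilateral:

    #include <cmath>
    #include <algorithm>

    float lodFromEdges(float e1x, float e1y, float e2x, float e2y) {
        float len1 = std::sqrt(e1x * e1x + e1y * e1y);
        float len2 = std::sqrt(e2x * e2x + e2y * e2y);
        float longest = std::max(len1, len2);
        return std::log2(longest);   // e.g. longest = 4 gives d = log2(4) = 2
    }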
19. Mipmapping (cont'd)
- Note that d is not necessarily an integer. For example, assume that d is 1.7.
- Go to level 1, and do bilinear interpolation to get v1.
- Go to level 2, and do bilinear interpolation to get v2.
- Do linear interpolation between v1 and v2: 0.3v1 + 0.7v2.
- It's a tri-linear interpolation; a sketch follows the figure.
[Figure: a pixel's center in the texel grid; bilinear samples at levels 1 and 2 are blended to stand for level 1.7]
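A minimal sketch of trilinear filtering for a fractional LOD; sample(level, u, v) stands for an assumed bilinear lookup at one mipmap level:

    #include <cmath>
    #include <functional>

    float trilinear(const std::function<float(int, float, float)>& sample,
                    float d, float u, float v) {
        int   lo = (int)std::floor(d);        // e.g. level 1 for d = 1.7
        float f  = d - lo;                    // e.g. 0.7
        float v1 = sample(lo,     u, v);      // bilinear at the lower level
        float v2 = sample(lo + 1, u, v);      // bilinear at the upper level
        return (1.0f - f) * v1 + f * v2;      // 0.3*v1 + 0.7*v2 for d = 1.7
    }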
20. Problems of Mipmapping
- Suppose that a pixel cell's quadrilateral covers a large number of texels along one dimension but only a few along the other dimension.
- If a texture image of 64x64 texels is covered by 32x32 pixels, d is 1. But if 64x32 texels are covered by 8x16 pixels, the longer edge gives d = 3! Such a case, like the example below, leads to over-blurring. This is the approach OpenGL takes.
- There are many techniques to tackle this problem: ripmap, summed-area table, etc. See some advanced books.
[Figure: an elongated pixel-cell quadrilateral in the texel grid at levels 0, 1, and 2; the pixel covers about 18 texels at level 0, but mipmapping actually takes all 64 texels!]
21. Clipmap
- Consider flight simulation, where the image datasets are huge.
- When the viewer is flying above terrain, level 0 may be needed only for the small portion of the image which is closest to the viewer, level 1 for a little farther portion, level 2 beyond that, etc.
[Figure: nested regions of the terrain image use levels 0, 1, 2, and 3, from nearest to farthest]
22. Post-Texture Application of Specular Color
- Recall the lighting equation i_tot = i_amb + i_diff + i_spec.
- By default, texturing operations are applied after lighting, but blending specular highlights with a texture's colors usually lessens the effect of lighting.
- This can be avoided by having the diffuse color modulated by a texture, but the specular highlight left untouched.
- Lighting computes two colors per vertex:
  - a primary color, consisting of all non-specular contributions, and
  - a secondary color, summing all specular contributions.
- Only the primary color is combined with the texture colors, and then the secondary color is added.
- This can be done by multipass rendering, where the various parts of the lighting equation are evaluated in separate passes.
23. Multitexturing
- Unlike multipass rendering, multitexturing allows two or more textures to be accessed during a single pass.
- Multitexturing consists of a series of texture units, where each texture unit performs a single texturing operation and successively passes its result on to the next texture unit.
- Each texture unit includes a texture image, filtering parameters, etc.
In actuality, all the textures are often combined before blending with the vertex colors!
24. Why Multitexturing?
- In multitexturing, N primary textures and M secondary textures can be combined in N x M ways, but only N + M, rather than N x M, textures are required in memory.
- A combination such as A⊗B + C⊗D cannot be achieved by multipass rendering alone, since only one color can be stored in the frame buffer. However, it can be achieved by integrating multipass rendering and multitexturing.
- Multitexturing enables advanced rendering techniques such as lighting effects, decals, compositing, and detail textures.
- Multitexturing can also help avoid the allocation of an alpha channel in the frame buffer.
25. Light Mapping
- For static lighting in an environment, the diffuse component on any surface remains the same from any viewing angle:
  i_tot = i_amb + d(i_diff + i_spec)
        = m_amb ⊗ s_amb + d((n·l) m_diff ⊗ s_diff + (r·v)^m m_spec ⊗ s_spec)
- Why don't we pre-compute a separate texture that captures the diffuse component, and combine it with the primary texture?
- It's often called a static multitexture (if multitexturing is used).
26. Light Mapping (cont'd)
- Can Gouraud shading produce the following without light mapping?
- Light mapping is often called dark mapping. Think about why.
- It's typically used on diffuse surfaces, so such maps are called diffuse light maps.
- Advantages of light mapping in a separate stage (either in multipass rendering or in multitexturing):
  - The light texture can generally be low-resolution, as lighting changes slowly across a surface.
  - etc.
27. Light Mapping: Quake II Example
28. Gloss Mapping
- Diffuse light mapping for the brick wall example is cool. However, how can we make only the bricks shiny (and the mortar non-shiny)?
- We can use a monochrome (gray-scale) gloss texture, sketched below, where
  - 1.0 means that the full specular component is to be used, and
  - 0.0 means that no specular component is to be used.
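A minimal sketch of applying a gloss value at one pixel (assumed names); the gloss scales only the specular term, leaving the diffuse term untouched:

    struct RGB { float r, g, b; };

    // gloss comes from the gray-scale gloss texture, in [0,1].
    RGB shadeWithGloss(RGB diffuse, RGB specular, float gloss) {
        return { diffuse.r + gloss * specular.r,
                 diffuse.g + gloss * specular.g,
                 diffuse.b + gloss * specular.b };
    }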
29. Texture Animation
- The texture image need not be static. We can use a video source.
- Similarly, the texture coordinates need not be static, either. We can change the texture coordinates from frame to frame. For example, waterfall modeling can be achieved by increasing the t coordinates on each successive frame.
30. Environment Mapping (Overview)
- Environment or reflection mapping is the process of reflecting the surrounding environment in a shiny object. (Recall the morphing cyborg in T2!)
- A ray is fired from the viewer to a point, and then reflected with respect to the normal at that point: r = 2(n·v)n - v.
- The direction of the reflection vector is used as an index into an environment image, called an environment map.
- Assumptions:
  - The objects and lights being reflected with EM are far away.
  - The reflector will not reflect itself.
31. Spherical Coordinates
- Blinn and Newell's EM uses spherical coordinates (θ,φ), where θ ∈ [0,π] is the latitude and φ ∈ [0,2π] is the longitude.
- Consider the unit reflection vector (r_x, r_y, r_z). Then, cos θ = -r_z.
[Figure: on the unit circle in the zx cross section, cos θ = -r_z and so θ = arccos(-r_z); e.g. r_z = √3/2 gives θ = 150°]
32. Spherical Coordinates (cont'd)
- For now, suppose r_y = 0, and consider the zx cross section; there, sin θ = r_x.
- Now suppose a fixed θ (e.g. 90°), and consider the xy cross section. Then, r_x = sin θ cos φ. If r_y < 0, r_x = sin θ cos(2π - φ). So, we can get φ.
[Figure: the zx cross section showing r_x = sin θ, and xy cross sections for φ = 0°, 45°, 90°, and 135°]
33. Spherical Coordinates (cont'd)
- The computed spherical coordinates (θ,φ) are transformed to the range [0,1) and used as (s,t) coordinates to access the environment texture; a sketch follows.
- A problem:
  - We need per-pixel computation, which is not compatible with Gouraud shading and might not be feasible for real-time graphics.
  - The solution would be to compute the spherical coordinates at the vertices, and then interpolate these coordinates across the triangles.
- Special care is needed around the poles and the vertical seam. For example, when interpolating from u = 0.97 and u = 0.99 to u = 0.02 across the seam, make the last value 1.02, and then use the repeat (wrap) mode.
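A minimal sketch of the lookup (assumed names); atan2 implements the slide's case analysis on the sign of r_y in one call:

    #include <cmath>

    // Map a unit reflection vector to (s,t) in [0,1) for the environment texture.
    void reflectionToST(float rx, float ry, float rz, float* s, float* t) {
        const float PI = 3.14159265f;
        float theta = std::acos(-rz);        // latitude in [0, pi]
        float phi   = std::atan2(ry, rx);    // longitude in (-pi, pi]
        if (phi < 0.0f) phi += 2.0f * PI;    // shift into [0, 2*pi)
        *s = phi / (2.0f * PI);              // normalized longitude
        *t = theta / PI;                     // normalized latitude
    }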
34. Cubic Environment Mapping
- Imagine a cube at the center of which the camera resides. Project the environment onto the six faces of the cube.
- Unlike Blinn and Newell's EM, which needs a spherical projection, a cubic EM is easy to generate.
- What if two vertices are found to be on different cube faces?
  - A solution is to make the environment map faces larger than 90° in view angle.
35. Bump Mapping
- Let's make a surface appear bumpy.
- We could achieve this through complex modeling, but let's simply modify the surface normal used in the lighting equation.
- We won't modify the mesh itself, but temporarily change the normals when computing the lighting equation. For that purpose, we need a bump texture map, which directs how to perturb the normals.
i_tot = i_amb + d(i_diff + i_spec)
      = m_amb ⊗ s_amb + d((n·l) m_diff ⊗ s_diff + (r·v)^m m_spec ⊗ s_spec)
[Figure: the vectors n, l, r, and v at a surface point]
36. Theoretical Foundations for Bump Mapping
- Consider a surface represented by a parameterized function P(u,v).
- We also have a function B(u,v), which is a height/bump map.
- Let's displace the point P at (u,v) in the direction of N by the amount specified by B at (u,v): P'(u,v) = P(u,v) + B(u,v)N.
- The normal at the new point P' is defined as N' = P'_u x P'_v, where P'_u = P_u + B_u N + B N_u and P'_v = P_v + B_v N + B N_v.
- Usually B is small enough to ignore, so P'_u ≈ P_u + B_u N and P'_v ≈ P_v + B_v N.
- Then, N' = P'_u x P'_v ≈ P_u x P_v + B_v (P_u x N) + B_u (N x P_v) + B_u B_v (N x N), where the last term vanishes since N x N = 0.
[Figure: the parameterized surface P(u,v) = (x,y,z) = (f(u,v), g(u,v), h(u,v)) over the unit square in (u,v), with its corners P(0,0), P(1,0), P(0,1), and P(1,1), the tangents P_u and P_v, the normal N = P_u x P_v, and the perturbed normal N']
37. 2D Bump Mapping Illustration
38. Bump Mapping through a Height Field
- The bump map B(u,v) can be replaced by a discrete height field, which is in fact a monochrome image where, for example, 255 denotes the highest point and 0 the lowest one.
- The height field can be used to compute B_u and B_v; a sketch follows the figure.
  - Take the differences between neighboring columns to get B_u.
  - Take the differences between neighboring rows to get B_v.
- HW7:
  - Precisely, how?
  - What are ? and ? ?
[Figure: a pixel whose normal should be perturbed, with neighboring height values 25, 204, 178, and 78]
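A minimal sketch of one way to estimate B_u and B_v (assumed names), using central differences between the neighboring columns and rows; interior texels only, boundaries omitted:

    float heightAt(const unsigned char* hf, int w, int x, int y) {
        return hf[y * w + x] / 255.0f;   // 255 = highest point, 0 = lowest
    }

    void bumpDerivatives(const unsigned char* hf, int w, int x, int y,
                         float* Bu, float* Bv) {
        *Bu = 0.5f * (heightAt(hf, w, x + 1, y) - heightAt(hf, w, x - 1, y));
        *Bv = 0.5f * (heightAt(hf, w, x, y + 1) - heightAt(hf, w, x, y - 1));
    }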
39. Bump Mapping through a Height Field: Examples
40. Real-time Implementation of Bump Mapping
- Note that classical bump mapping requires normal variation per pixel, which might be hard to achieve in real time.
- In order to achieve real-time bump mapping, store the actual new normals as (x,y,z) vectors in a normal map.
- The l vector from each vertex to the light source is interpolated across the surface, and then dot-producted with the normals of the normal map, as in the sketch below: i_diff = d(n·l) m_diff ⊗ s_diff.
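A minimal sketch of the per-pixel diffuse term (assumed names); n is fetched from the normal map and l is the interpolated to-light vector:

    struct Vec3 { float x, y, z; };

    float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Per-pixel n·l, clamped so back-facing texels receive no diffuse light.
    float diffuseDot(Vec3 n, Vec3 l) {
        float d = dot(n, l);
        return d > 0.0f ? d : 0.0f;
    }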
41. 3D Texture
- Imagine carving an object out of wood. Then, the object's surface should have a wood grain texture.
- In a 3D texture, texture values exist everywhere in the object domain. The color of the object is determined by the intersection of its surface with the 3D texture field.
- Note that wood grain can be simulated by a set of concentric cylinders. Also note that we can depict the cross section of the wood grain by alternately drawing each region within a radius range in one color and its neighboring region in a different color.
[Figure: cross section of the concentric cylinders; the radius axis (x10) runs from 0 to 10]
42. 3D Texture (cont'd)
- The 3D wood grain field can be procedurally defined as follows, where (u,v,w) describes a 3D texture space. The slides' pseudocode is written below as runnable C-style code; the concrete light/dark colors are assumed.

    #include <math.h>

    typedef struct { float r, g, b; } RGB;
    static const RGB light_rgb = { 0.85f, 0.65f, 0.45f };  /* assumed color */
    static const RGB dark_rgb  = { 0.55f, 0.35f, 0.20f };  /* assumed color */

    /* w is unused here: the grain is constant along the cylinder axis. */
    RGB wood_grain(float u, float v, float w) {
        float radius = sqrtf(u * u + v * v);
        int grain = (int)roundf(radius) % 20;     /* ring index with period 20 */
        return (grain < 10) ? light_rgb : dark_rgb;
    }
- Let's perturb the radius with a sinusoidal function: radius = sqrt(u² + v²) + sin(aθ), where a is a constant (e.g. a = 24) and θ = tan⁻¹(u/w).
[Figure: the (u,v,w) texture space; the circular rings of the cross section, and the rings perturbed by sin(24θ); the radius axis (x10) runs from 0 to 10]
43. 3D Texture (cont'd)
- The complete code for the wood grain, again as runnable C-style code with the same assumed colors, is as follows:

    RGB wood_grain(float u, float v, float w) {
        float radius = sqrtf(u * u + v * v);
        float theta  = (w == 0.0f) ? 1.5707963f   /* pi/2 when arctan is undefined */
                                   : atanf(u / w);
        radius = radius + sinf(24.0f * theta);    /* perturb the rings */
        int grain = (int)roundf(radius) % 20;
        return (grain < 10) ? light_rgb : dark_rgb;
    }
[Figure: another, more complex example]