Title: KIPA Game Engine Seminars
1 KIPA Game Engine Seminars
Day 6
- Jonathan Blow
- Ajou University
- December 2, 2002
2 Level-of-Detail Method: Overview
- Traditional purpose: speed boost
- Ideal: render a fixed number of triangles, always
- Doesn't matter how far your view stretches into the distance
- Diagram of pixel tessellation
- Object detail / triangle count as a function of distance
3 Future Purpose: Geometric Antialiasing
- Discussion of scenes with many small objects far away
- In a rendering paradigm like MCRT we get a certain amount of antialiasing for free
- When projecting geometry onto the screen, we do not; we need to implement something that provides antialiasing for us
4 Level-of-Detail Methods
- Static mesh switching
- Progressive mesh
- Continuous-LOD mesh
- Issues involving big objects (static and progressive mesh not good enough?)
5 Static mesh switching
- Pre-generate a series of meshes decreasing in detail
- Switch between them based on z distance of the mesh from the camera (see the sketch after this list)
- Perhaps be more analytical and switch based on max. projected pixel error?
- Nobody actually does this because it is far too conservative
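A minimal sketch of the distance-based switch described above; the structure and names (MeshLevel, pick_static_lod) are hypothetical, not from the seminar.

```cpp
#include <vector>

// Hypothetical pre-generated LOD level: coarser meshes get larger switch distances.
struct MeshLevel {
    float switch_distance;   // use this mesh once the object is at least this far away
    // ... vertex/index buffers would live here
};

// 'levels' is ordered fine -> coarse, with increasing switch_distance.
// Returns the index of the mesh to draw for an object at 'distance' from the camera.
int pick_static_lod(const std::vector<MeshLevel> &levels, float distance) {
    int chosen = 0;   // fall back to the finest mesh
    for (int i = 1; i < (int)levels.size(); i++)
        if (distance >= levels[i].switch_distance)
            chosen = i;
    return chosen;
}
```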
6 Progressive Mesh
- Generate one sequence of collapses that takes you from high-res to 1 triangle
- Dynamically select the number of triangles at runtime (a minimal sketch follows this list)
- Works well with modern 3D hardware since you only modify a little bit of the index buffer at a time.
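A minimal sketch of the runtime side: a precomputed collapse sequence is replayed until the triangle count meets a budget. The names (EdgeCollapse, ProgressiveMesh) are hypothetical, and a real implementation would precompute exactly which indices each collapse touches instead of scanning the whole buffer.

```cpp
#include <vector>
#include <cstdint>

struct EdgeCollapse {
    uint32_t removed_vertex;    // vertex that disappears in this collapse
    uint32_t kept_vertex;       // vertex it merges into
    int      triangles_removed; // usually 2 for an interior manifold edge
};

struct ProgressiveMesh {
    std::vector<uint32_t>     indices;    // index buffer of the full-resolution mesh
    std::vector<EdgeCollapse> collapses;  // ordered fine -> coarse
    int applied = 0;                      // collapses currently applied
    int active_triangles = 0;             // set to indices.size() / 3 when the mesh is loaded

    // Apply collapses until the triangle count is at or below the budget.
    // (Undoing collapses, i.e. vertex splits, would walk 'applied' back the other way.)
    void set_triangle_budget(int target_triangles) {
        while (active_triangles > target_triangles && applied < (int)collapses.size()) {
            const EdgeCollapse &c = collapses[applied++];
            // Remap every reference to the removed vertex; triangles that used the
            // collapsed edge become degenerate and rasterize to nothing.
            for (uint32_t &index : indices)
                if (index == c.removed_vertex) index = c.kept_vertex;
            active_triangles -= c.triangles_removed;
        }
    }
};
```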
7 Progressive Mesh: Disadvantages
- Relies on frame coherence (bad!)
- Interferes with triangle stripping and vertex cache sorting (they become mutually impossible).
- High code complexity; it makes everything else more complicated and adds restrictions to everything else
- Example of normal map generation restricted to object space
8 Continuous Level-of-Detail: Algorithms
- Lindstrom-Koller, ROAM, Rottger quadtree algorithm
- Dynamically update tessellation based on an estimate of screen-space error (a sketch of the error test follows this list)
- Crack fixing between adjacent blocks, etc.
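A minimal sketch of the kind of screen-space error test these schemes use, assuming a perspective projection; parameter names are hypothetical and the details vary between Lindstrom-Koller, ROAM, and the quadtree approach.

```cpp
#include <cmath>

// Pixels of screen-space error caused by a world-space error 'delta' on
// geometry at 'distance' (> 0) from the camera, for a perspective projection.
float projected_error_pixels(float delta, float distance,
                             float fov_y_radians, int screen_height) {
    float pixels_per_world_unit =
        screen_height / (2.0f * distance * std::tan(fov_y_radians * 0.5f));
    return delta * pixels_per_world_unit;
}

// A terrain block / tree node keeps being subdivided while its precomputed
// geometric error would still be visible on screen.
bool should_subdivide(float node_error, float distance, float fov_y_radians,
                      int screen_height, float pixel_tolerance) {
    return projected_error_pixels(node_error, distance,
                                  fov_y_radians, screen_height) > pixel_tolerance;
}
```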
9 Continuous LOD
- Example of binary triangle trees
- There are other formats (quadtree, diamond, etc.) but the ideas are similar
10 Continuous LOD: Disadvantages
- Extremely complicated implementations
- Slow on modern hardware
- Extreme reliance on frame coherence (bad!)
- Not conducive to unified rendering (hard to make work on curved surfaces, arbitrary topologies)
11 Continuous LOD
- Has a lot of hype in the amateur and academic communities
- Is currently not competitive with other LOD approaches
- This is not likely to change any time soon
12 LOD Metrics
13 Introduction
- We need an effective way to benchmark / judge LOD schemes
- The academic world is not really doing this right now!
- We need a standard set of data with comparable results
- University of Waterloo Brag Zone for image compression
14 LOD Metric?
- We often create metrics for taking each small step in a geometric reduction
- We don't have a metric for comparing a fully reduced mesh with the source model or another reduced mesh
- Because our mesh representations are so ad hoc
15 Image Compression guys have a metric
- (even though they know it's not that good)
- PSNR measures the difference between a compressed image and the original (a minimal sketch follows this list)
- They know it has problems (not perceptually driven) and are working on a better metric
- But at least they have a way of comparing results, which means they are sort of doing science!
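A minimal sketch of PSNR for 8-bit grayscale data, as a reference for what the image-compression comparison looks like; it assumes both buffers have the same non-zero size.

```cpp
#include <vector>
#include <cmath>
#include <cstdint>

// Peak signal-to-noise ratio between an original and a compressed image,
// both stored as 8-bit samples of equal, non-zero length.
double psnr(const std::vector<uint8_t> &original,
            const std::vector<uint8_t> &compressed) {
    double mse = 0.0;
    for (size_t i = 0; i < original.size(); i++) {
        double d = (double)original[i] - (double)compressed[i];
        mse += d * d;
    }
    mse /= (double)original.size();
    if (mse == 0.0) return INFINITY;                // identical images
    return 10.0 * std::log10(255.0 * 255.0 / mse);  // 255 is the peak value for 8-bit data
}
```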
16 Metric ideas
- Sum of closest-point distances (a brute-force sketch follows this list)
  - Continuous, which is good
  - Very expensive to compute
  - Non-monotonic (!), which is bad
  - Monotonic for small changes, usually, which might be good enough
  - Ignores texture warping, which is bad
    - Unless we try it in 5-dimensional space
  - Ignores vertex placement
    - Important for rasterization (iterated vertex properties!)
    - Example of big flat area
  - Ignores cracks in destination model
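A brute-force sketch of the closest-point idea, comparing sample points on one mesh against the vertices of another; a real metric would measure distance to the surface and sample in both directions. All names are hypothetical, and the nested loop is why this is expensive.

```cpp
#include <vector>
#include <cmath>
#include <cfloat>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float distance(const Vec3 &a, const Vec3 &b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Sum, over points sampled on mesh A, of the distance to the nearest vertex of mesh B.
float sum_of_closest_point_distances(const std::vector<Vec3> &samples_on_a,
                                     const std::vector<Vec3> &vertices_of_b) {
    float total = 0.0f;
    for (const Vec3 &p : samples_on_a) {
        float best = FLT_MAX;
        for (const Vec3 &q : vertices_of_b)     // O(n*m): "very expensive to compute"
            best = std::min(best, distance(p, q));
        total += best;
    }
    return total;
}
```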
17 Lindstrom/Turk screen-space LOD comparison
- Guide compression of a mesh by taking snapshots of it from many different viewpoints and PSNRing the images
- This can work, but PSNR is not necessarily stable with respect to small image-space motions
18 Lindstrom/Turk screen-space LOD comparison
- (Talking about the paper, showing figures from it)
19 The Fundamental Problem
- Our rendering methods are totally ad hoc; we have 3 different things:
  - Vertices
  - Topology
  - Texture
- A metric that uniformly integrates these things is very difficult.
20 Complexity of metric
- The more complicated a metric is, the more difficult it is to program correctly and to ensure we are using it correctly
- That our simplest possible metric should be something so complicated is a bad sign.
21 Compare with Voxels
- Voxel geometry representations can basically use something like PSNR directly; no need for complicated metrics
- Lightfields can also (though it's a little harder)
22 Digital Geometry Processing
- Work by Peter Schroeder at Caltech, and many others
- Attempts to develop DSP-like ideas for geometry manipulation
- Heavy use of subdivision surfaces
23 (Overview of subdivision surfaces)
24 How DGP works
- Apply a scaled filter kernel to the neighborhood of a vertex (a smoothing-style sketch follows this list)
- Like wavelet image analysis in its multiscale aspects
- But unlike wavelets/DSP in that the inputs/outputs are not homogeneous
- What exactly is the high-pass residual after a low-pass filter?
- This is because of that whole topology-different-from-vertices thing
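A minimal smoothing-style sketch of applying a scaled kernel over a vertex's one-ring neighborhood; this only illustrates the "filter on a neighborhood" idea, not Schroeder's actual operators, and the names are hypothetical.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One low-pass step: move each vertex toward the average of its one-ring neighbors.
// adjacency[v] lists the neighbor indices of vertex v; 'weight' scales the kernel.
// The residual (original position minus smoothed position) plays the role of a
// high-pass band, which is where the non-homogeneous-data questions show up.
std::vector<Vec3> low_pass_step(const std::vector<Vec3> &positions,
                                const std::vector<std::vector<int>> &adjacency,
                                float weight) {
    std::vector<Vec3> out = positions;
    for (size_t v = 0; v < positions.size(); v++) {
        const std::vector<int> &ring = adjacency[v];
        if (ring.empty()) continue;
        Vec3 avg = {0.0f, 0.0f, 0.0f};
        for (int n : ring) {
            avg.x += positions[n].x;
            avg.y += positions[n].y;
            avg.z += positions[n].z;
        }
        float inv = 1.0f / (float)ring.size();
        out[v].x = positions[v].x + weight * (avg.x * inv - positions[v].x);
        out[v].y = positions[v].y + weight * (avg.y * inv - positions[v].y);
        out[v].z = positions[v].z + weight * (avg.z * inv - positions[v].z);
    }
    return out;
}
```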
25 Actual effective DGP would be...?
- I don't know. (It's a hard problem!)
- Spherical harmonics would work, for shapes representable as functions over the sphere
26 Solutions/Details
27 What I Use
- Garland/Heckbert Error Quadric Simplification
- Static Mesh Switching
- I want to do a unified renderer this way (characters, terrain, big airplanes, whatever)
- People seem to think crack fixing is hard but it is actually easy
- Maybe that's why people haven't tried this yet?
28 Discussion of Garland/Heckbert Algorithm
29 Garland/Heckbert References
- "Surface Simplification Using Quadric Error Metrics"
- "Simplifying Surfaces with Color and Texture using Quadric Error Metrics"
30 G/H is also useful if you are making progressive meshes
- It just tells you how to collapse the mesh; it doesn't dictate how you will use that information.
31 Review of GH Algorithm: In code
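A minimal sketch of the error-quadric core from the referenced papers (not the code walked through in the seminar): each plane contributes the outer product of its coefficients, and the cost of a vertex position v is the quadratic form v^T Q v. Optimal vertex placement and the collapse priority queue are omitted; names are hypothetical.

```cpp
// Accumulates the symmetric 4x4 quadric Q = sum of p * p^T over planes p = (a, b, c, d),
// where a*x + b*y + c*z + d = 0 and (a, b, c) is unit length.
struct Quadric {
    double m[4][4] = {};

    void add_plane(double a, double b, double c, double d) {
        double p[4] = {a, b, c, d};
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += p[i] * p[j];
    }

    void add(const Quadric &other) {
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                m[i][j] += other.m[i][j];
    }

    // Sum of squared distances from (x, y, z) to all accumulated planes:
    // v^T Q v with v = (x, y, z, 1).
    double error(double x, double y, double z) const {
        double v[4] = {x, y, z, 1.0};
        double e = 0.0;
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                e += v[i] * m[i][j] * v[j];
        return e;
    }
};

// Cost of collapsing vertex b into vertex a: combine their quadrics and evaluate
// at the target position (here a's position; the paper also derives an optimal point).
double collapse_cost(const Quadric &qa, const Quadric &qb,
                     double ax, double ay, double az) {
    Quadric q = qa;
    q.add(qb);
    return q.error(ax, ay, az);
}
```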