Title: Perceptually Guided Interactive Rendering
- David Luebke
- University of Virginia
Always start with a demo
Motivation: Preaching To The Choir
- Interactive rendering of large-scale geometric datasets is important:
  - Scientific and medical visualization
  - Architectural and industrial CAD
  - Training (military and otherwise)
  - Entertainment
Motivation: Model Size
- Incredibly, models are getting bigger as fast as hardware is getting faster
Big Models: Submarine Torpedo Room
Courtesy General Dynamics, Electric Boat Div.
Big Models: Coal-fired Power Plant
(Anonymous)
Big Models: Plant Ecosystem Simulation
- 16.7 million polygons (sort of)
Deussen et al., Realistic Modeling and Rendering of Plant Ecosystems
Big Models: Double Eagle Container Ship
Courtesy Newport News Shipbuilding
Big Models: The Digital Michelangelo Project
- David: 56,230,343 polygons
- St. Matthew: 372,422,615 polygons
Courtesy Digital Michelangelo Project
Motivation: Level of Detail
- Clearly, much of this geometry is redundant for a given view
- The basic idea: simplify the model, reducing the level of detail used for:
  - Distant portions
  - Small portions
  - Otherwise unimportant portions
Traditional Level of Detail: In A Nutshell
- Create levels of detail (LODs) of objects
[Figure: the same model at 249,924 / 62,480 / 7,809 / 975 polys. Courtesy Jon Cohen]
Traditional Level of Detail: In A Nutshell
- Distant objects use coarser LODs
The Big Question
- How should we evaluate and regulate the visual
fidelity of our simplifications?
Regulating LOD
- LOD is often controlled by distance (see the sketch below)
Courtesy Martin Reddy
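As a concrete illustration of distance-controlled LOD, here is a minimal sketch that picks a discrete LOD from the eye-to-object distance. The types and thresholds are hypothetical, not VDSlib's actual API:

```cpp
#include <vector>

struct Mesh;  // placeholder for the application's mesh type

// One LOD entry: a mesh plus the distance at which it becomes usable.
struct LOD {
    const Mesh* mesh;
    float       minDistance;  // use this LOD at or beyond this distance
};

// Pick the coarsest LOD whose distance threshold the object has passed.
// 'lods' is assumed non-empty, sorted finest (minDistance = 0) to coarsest.
const Mesh* selectLOD(const std::vector<LOD>& lods, float distanceToEye) {
    const Mesh* chosen = lods.front().mesh;
    for (const LOD& lod : lods)
        if (distanceToEye >= lod.minDistance)
            chosen = lod.mesh;
    return chosen;
}
```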
Measuring Fidelity
- Fidelity of a simplification to the original
model is often measured geometrically
METRO by Visual Computing Group, CNR-Pisa
Measuring Visual Fidelity
- However...
  - The most important measure of fidelity is usually not geometric but perceptual: does the simplification look like the original?
- Therefore...
  - We are developing a principled framework for LOD in interactive rendering, based on perceptual measures of visual fidelity
Perceptually Guided LOD
- Several interesting offshoots:
  - Imperceptible simplification
    - How to guarantee simplification is undetectable?
  - Best-effort simplification
    - How best to spend a limited time/polygon budget?
  - Silhouette preservation
    - Silhouettes are important. How important?
  - Gaze-directed rendering
    - When can we exploit reduced visual acuity?
Related Work
- Lots of excellent research on perceptually guided rendering:
  - Bolin & Meyer (SIGGRAPH 98)
  - Ramasubramanian et al. (SIGGRAPH 99)
- But all this work has focused on realistic rendering algorithms (e.g., path tracing)
  - Different time frame! Seconds or minutes versus milliseconds
Related Work
- As a result, prior work has incorporated quite sophisticated perceptual metrics
- Our goal: a simple, conservative perceptual metric fast enough to run thousands of times per frame
The Approach
- The contrast sensitivity function, or CSF, measures the perceptibility of visual stimuli
- We test local simplification operations against a model of the CSF to determine whether they would be perceptible
Perception 101: The Contrast Sensitivity Function
- Perceptual scientists have long used contrast gratings to measure the limits of vision
  - Bars of sinusoidally varying intensity
- Can vary:
  - Contrast
  - Spatial frequency
  - Eccentricity
  - Velocity
  - Etc.
Perception 101: The Contrast Sensitivity Function
- Contrast grating tests produce a contrast sensitivity function
  - Threshold contrast vs. spatial frequency
- The CSF predicts the minimum detectable static stimuli
Your Personal CSF
Campbell-Robson Chart by Izumi Ohzawa
Contrast Sensitivity Function: An Empirical Model
- The CSF is affected by many factors
  - Background illumination, adaptation, age, etc.
  - Attentive focus
- We chose to sidestep these issues by building an empirical model (a lookup table; see the sketch below)
  - User foveates on a target, grating fades in
  - Measures threshold contrast across different spatial frequencies and eccentricities
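A minimal sketch of how such an empirical CSF table might be stored and queried. The grid layout, resolution, and bilinear interpolation are assumptions for illustration, not the paper's actual implementation:

```cpp
#include <algorithm>

// Empirical CSF: threshold contrast measured on a grid of
// (spatial frequency, eccentricity) samples, queried by
// bilinear interpolation. All grid parameters are illustrative.
class ContrastSensitivity {
public:
    static const int NF = 16, NE = 8;  // grid resolution
    float maxFreq = 32.0f;             // cycles per degree
    float maxEcc  = 60.0f;             // degrees of eccentricity
    float threshold[NE][NF];           // measured threshold contrasts

    // Threshold contrast at (frequency, eccentricity); a stimulus
    // with lower contrast is predicted to be imperceptible.
    float thresholdContrast(float freq, float ecc) const {
        float f = std::min(freq / maxFreq, 1.0f) * (NF - 1);
        float e = std::min(ecc  / maxEcc,  1.0f) * (NE - 1);
        int fi = (int)f, ei = (int)e;
        int fj = std::min(fi + 1, NF - 1), ej = std::min(ei + 1, NE - 1);
        float ft = f - fi, et = e - ei;
        float lo = threshold[ei][fi] * (1 - ft) + threshold[ei][fj] * ft;
        float hi = threshold[ej][fi] * (1 - ft) + threshold[ej][fj] * ft;
        return lo * (1 - et) + hi * et;
    }
};
```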
Contrast Sensitivity Function: Complex Waveforms
- The perceptibility of a complex signal is determined by its harmonic components
- If no frequency component of an image feature is visible, the feature is imperceptible and may be removed without visible effect
  - This is the key idea that will allow us to simplify the model
- Next we need a framework for simplification
Framework: View-Dependent Simplification
- We use view-dependent simplification for LOD management
  - Traditional LOD: create several discrete LODs in a preprocess, pick one at run time
  - Continuous LOD: create a data structure in a preprocess, extract the desired LOD at run time
  - View-dependent LOD: extract the most appropriate LOD for the given view
View-Dependent LOD: Examples
- Show nearby portions of the object at higher resolution than distant portions
[Figure: view from eyepoint; bird's-eye view]
View-Dependent LOD: Examples
- Show silhouette regions of object at higher
resolution than interior regions
View-Dependent LOD: Examples
- Show more detail where the user is looking than
in their peripheral vision
34,321 triangles
View-Dependent LOD: Examples
- Show more detail where the user is looking than
in their peripheral vision
11,726 triangles
View-Dependent LOD: Implementation
- We use VDSlib, our public-domain library for view-dependent simplification
- Briefly, VDSlib uses a big data structure called the vertex tree
  - Hierarchical clustering of model vertices
  - Updated each frame for the current simplification
The Vertex Tree
- Each vertex tree node represents:
  - A subset of model vertices
  - A representative vertex, or proxy
- Folding a node collapses its vertices to the proxy
- Unfolding a node splits the proxy back into its vertices
(A structural sketch follows below.)
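A sketch of the vertex tree structure these slides describe. Field names and the fold/unfold bookkeeping are illustrative rather than VDSlib's actual interface:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One node of the vertex tree: a cluster of model vertices with a
// representative proxy vertex.
struct VTNode {
    Vec3                 proxy;          // representative vertex
    std::vector<int>     vertices;       // ids of the clustered model vertices
    VTNode*              parent = nullptr;
    std::vector<VTNode*> children;
    bool                 folded = true;  // folded nodes render as the proxy
};

// Folding collapses the node's vertices to its proxy; unfolding
// re-expands them. The frontier of unfolded nodes defines the current
// simplification and is re-evaluated every frame.
inline void fold(VTNode& n)   { n.folded = true;  }
inline void unfold(VTNode& n) { n.folded = false; }
```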
Vertex Tree Example
[Figure sequence (13 slides): each frame shows the triangles in the active list on the left and the vertex tree on the right (root R; interior nodes I and II; clusters A, B, and C over vertices 1-10). Successive frames fold A, B, and C, collapsing their vertices to proxies and removing triangles from the active list, then fold II, I, and finally the root R.]
The Vertex Tree: Tris and SubTris
- Node folding is the fundamental simplification operation
- Some triangles change shape upon folding
- Some triangles disappear completely (see the sketch below)
[Figure: folding node A collapses its vertices to the proxy, deforming neighboring triangles and eliminating the triangles spanned by the merged vertices; unfolding node A restores them.]
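A sketch of the fold operation's effect on triangles, separating the node's tris (which merely deform) from its subtris (which vanish). The data layout is an assumption for illustration:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Tri  { int corners[3]; bool active = true; };

struct Node {
    Vec3              proxy;
    std::vector<int>  vertices;  // vertex ids collapsed by this node
    std::vector<Tri*> tris;      // one corner in the node: change shape
    std::vector<Tri*> subtris;   // two or three corners in the node: vanish
};

// Folding moves every clustered vertex to the proxy position, which
// reshapes the node's tris and degenerates its subtris, so the subtris
// are dropped from the active triangle list. Unfolding reverses this.
void foldNode(Node& n, std::vector<Vec3>& vertexPositions) {
    for (int v : n.vertices)
        vertexPositions[v] = n.proxy;
    for (Tri* t : n.subtris)
        t->active = false;
}
```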
Perceptually Guided LOD: Key Contribution
- Our key contribution: a way to evaluate the perceptibility of a fold operation
- Equate the effect of the fold to a worst-case contrast grating:
  - Find the worst-case contrast induced in the image
    - Bounded by the maximum change in luminance!
  - Find the worst-case spatial frequency
    - Bounded by the minimum spatial frequency (in our case), which is bounded by the greatest possible spatial extent!
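Putting the two bounds together, a minimal sketch of the per-fold test. The helper functions stand in for the bounds above and for the empirical CSF lookup from the earlier sketch; they, and the Node/View types, are assumptions rather than the paper's actual interface:

```cpp
struct Node;  // vertex tree node, as in the earlier sketches
struct View;  // eye position, gaze direction, field of view

float worstCaseContrast(const Node&, const View&);   // from max luminance change
float minSpatialFrequency(const Node&, const View&); // from max screen-space extent
float eccentricity(const Node&, const View&);        // degrees from gaze direction
float thresholdContrast(float freq, float ecc);      // empirical CSF lookup

// Conservative test: a fold is deemed perceptible only if its
// worst-case contrast exceeds the CSF threshold at its worst-case
// (lowest) spatial frequency and its eccentricity.
bool foldIsPerceptible(const Node& n, const View& view) {
    float contrast = worstCaseContrast(n, view);
    float freq     = minSpatialFrequency(n, view);
    float ecc      = eccentricity(n, view);
    return contrast > thresholdContrast(freq, ecc);
}
```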
Worst-Case Contrast
- Find the maximum possible change in color
  - Note: depends on silhouette status!
- Map to luminance, then to contrast (see the sketch below)
- This is the largest contrast that the fold could possibly induce in the final image
[Figure: original and simplified renderings, with the color change between them highlighted.]
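A sketch of mapping the maximum color change to luminance and then to contrast. The Rec. 709 luma weights and the Michelson contrast form are my assumptions; the paper's exact mapping may differ:

```cpp
#include <algorithm>

struct Color { float r, g, b; };

// Approximate luminance from linear RGB (Rec. 709 weights).
float luminance(const Color& c) {
    return 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
}

// Michelson contrast of the worst-case change: treat the brighter and
// darker of the two luminances as the grating's peak and trough.
float worstCaseContrast(const Color& before, const Color& after) {
    float lo = std::min(luminance(before), luminance(after));
    float hi = std::max(luminance(before), luminance(after));
    return (hi + lo > 0.0f) ? (hi - lo) / (hi + lo) : 0.0f;
}
```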
Worst-Case Spatial Frequency
- Lower frequencies are more perceptible
  - At least where we are concerned, and we can enforce this assumption
- The minimum spatial frequency is determined by the projected screen-space extent of the node (see the sketch below)
[Figure: a signal representing the maximum change produced by the node's simplification; its size sets the lowest spatial frequency.]
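A sketch of the frequency bound: a feature of angular extent θ degrees can contain no component below roughly 1/(2θ) cycles per degree (one half-cycle spanning the feature), so the node's largest possible projected extent gives the minimum frequency. Names and the bounding-sphere extent computation are illustrative:

```cpp
#include <cmath>

// Angular extent (degrees) subtended by a node's bounding sphere.
float angularExtentDeg(float boundingRadius, float distanceToEye) {
    return 2.0f * std::atan2(boundingRadius, distanceToEye)
                * 180.0f / 3.14159265f;
}

// Worst-case (minimum) spatial frequency in cycles per degree:
// one half-cycle of the lowest-frequency component spans the node.
float minSpatialFrequency(float boundingRadius, float distanceToEye) {
    return 1.0f / (2.0f * angularExtentDeg(boundingRadius, distanceToEye));
}
```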
Bringing It All Together
- If simplifying a region is imperceptible, go ahead and simplify!
[Figure: original vs. simplified renderings.]
Imperceptible Simplification
- Imperceptible simplification: only fold nodes whose effect is predicted to be imperceptible
- It works! Verified with a simple user study
- Problem 1: overly conservative
- Problem 2: nobody cares
  - Important result, important issues, but...
  - If you need imperceptible simplification that badly, you probably won't simplify at all
Imperceptible Simplification: Results
[Figure: bunny at 69,451 polygons (original) and 29,866 polygons (simplified), with wireframe.]
- Here, the user's gaze is 29 degrees from the bunny
- Silhouettes and strong details preserved
  - Line of the haunch
  - Shape of the ears
- But subtle (low-contrast) details removed
  - E.g., top of the leg
Best-Effort Simplification
- More pertinent: best-effort simplification to a budget
- Idea: order nodes to be folded based on the distance at which you could perceive the fold (sketched below)
- A nice, physical error metric
  - After simplifying to (say) 50K tris, the system can report: "this would be imperceptible from 8 feet"
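A sketch of best-effort simplification to a triangle budget, folding the least-perceptible nodes first. perceptibleDistance() stands in for inverting the CSF test over viewing distance; all names here are assumptions:

```cpp
#include <queue>
#include <vector>

struct Node;  // vertex tree node, as in the earlier sketches
float perceptibleDistance(const Node&);  // nearest distance at which the fold is invisible
void  foldNode(Node&);
int   activeTriangleCount();

// Fold nodes invisible from the nearest distances first, until the
// budget is met; the distance of the last fold is a physical error
// bound the system can report to the user.
float simplifyToBudget(std::vector<Node*>& candidates, int triBudget) {
    auto nearerFirst = [](Node* a, Node* b) {
        return perceptibleDistance(*a) > perceptibleDistance(*b);
    };
    std::priority_queue<Node*, std::vector<Node*>, decltype(nearerFirst)>
        queue(nearerFirst, candidates);
    float lastDistance = 0.0f;
    while (activeTriangleCount() > triBudget && !queue.empty()) {
        Node* n = queue.top(); queue.pop();
        lastDistance = perceptibleDistance(*n);
        foldNode(*n);
    }
    return lastDistance;  // "imperceptible from this distance"
}
```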
Best-Effort Simplification: Results
[Figure: 96,966 → 18,000 faces with the standard VDSlib error metric (projected screen-space size), versus 96,966 → 18,000 faces with the perceptual error metric (contrast and spatial frequency).]
Silhouette Preservation
- Researchers...
  - Have long known silhouettes are important
  - Have long used heuristics to preserve them
- Our model gives a principled basis for silhouette preservation by accounting for the increased contrast at silhouettes
  - Detect silhouette nodes using a quantized normal cube (see the sketch below)
  - Set contrast to maximum for silhouette nodes
Gaze-Directed Rendering: Eccentricity
- Visual acuity falls off rapidly in the periphery
  - Fovea: the central few degrees of vision
  - 35-fold reduction from fovea to periphery
- Eccentricity: angular distance from the center of gaze
Gaze-Directed Rendering: Eccentricity
- Can model the falloff of acuity with eccentricity in the CSF (see the sketch below)
[Figure: threshold size vs. eccentricity.]
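A sketch of feeding eccentricity into the test: compute the angle between the tracked gaze direction and the direction to the node, then query the CSF at that eccentricity. The names are illustrative:

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    float dot(const Vec3& o) const { return x*o.x + y*o.y + z*o.z; }
};

// Eccentricity of a node: angular distance (degrees) between the gaze
// direction and the eye-to-node direction, both assumed normalized.
float eccentricityDeg(const Vec3& gazeDir, const Vec3& toNodeDir) {
    float c = gazeDir.dot(toNodeDir);
    if (c >  1.0f) c =  1.0f;  // guard acos against rounding
    if (c < -1.0f) c = -1.0f;
    return std::acos(c) * 180.0f / 3.14159265f;
}
// Larger eccentricity raises the CSF threshold contrast, so
// peripheral nodes can be folded far more aggressively.
```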
Gaze-Directed Rendering: Velocity (Future Work!)
- Visual acuity also falls off for fast-moving objects
  - Eye tracking the object: render the background at lower resolution
  - Eye tracking the background: render the object at lower resolution
- Very powerful in conjunction with eccentricity!
[Figure: gratings at retinal velocities of 1 deg/s and 20 deg/s.]
Gaze-Directed Rendering: Velocity (Future Work!)
- Can model the effect of retinal velocity on the
CSF
Extending the Framework: Other Rendering Paradigms
- This framework applies to almost any hierarchical rendering technique
- We have extended it to QSplat, the point-based renderer of Rusinkiewicz and Levoy (see the sketch below)
  - Hierarchy of bounding spheres
  - Used for simplification, culling, backface rejection, and rendering
  - Heavily optimized for extremely large models
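A sketch of applying the same perceptual test to a QSplat-style bounding-sphere hierarchy: stop refining a sphere once further refinement would be imperceptible. The traversal and names are illustrative, not QSplat's actual code:

```cpp
#include <vector>

struct Sphere {
    float radius;
    std::vector<Sphere*> children;
};

bool refinementIsPerceptible(const Sphere&);  // same CSF test as for folds
void drawSplat(const Sphere&);

// Recursive traversal: render a node as a single splat when refining
// it would make no perceptible difference, otherwise descend.
void traverse(const Sphere& s) {
    if (s.children.empty() || !refinementIsPerceptible(s)) {
        drawSplat(s);
        return;
    }
    for (const Sphere* child : s.children)
        traverse(*child);
}
```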
Extending the Framework: QSplat
- Promising results from the QSplat prototype
[Figure: QSplat's highest quality, 2.9 million splats, versus gaze-directed QSplat, 0.8 million splats (29°).]
Extending the Framework: QSplat
[Figure: QSplat's highest quality with simplified points shown in blue, versus gaze-directed QSplat with the user's eye on the torch.]
Summary
- A novel framework for interactive rendering
  - Based directly on a perceptual metric (the CSF)
  - Applied to polygonal simplification and QSplat
- Addresses several interesting issues:
  - Imperceptible and best-effort simplification
  - Silhouette preservation
  - Gaze-directed rendering
- Still in nascent form, but an important start
Future Work
- Lots of opportunities for future research!
- Improve the current system:
  - Dynamic lighting using normal masks
  - Address overly conservative contrast and frequency estimates using a texture deviation metric (APS)
- Extend the perceptual model, incorporating:
  - Retinal velocity
  - Visual masking using texture content frequencies
  - Temporal contrast (flicker) sensitivity
Gaze-Directed Rendering: Applicability
- Gaze-directed rendering clearly has limits
  - Eye tracking is not yet commodity technology
- But head tracking may turn out quite useful
  - Gaze direction stays within 15° of head direction
  - Video head tracking is increasingly mature
  - Wide-area FOV displays are increasingly common
- Even with multiple viewers, we may still get lots of simplification in the right environments
Acknowledgements
- Students
- Ben Hallen
- Keith Shepherd, Dale Newfield, Tom Banton
- Colleagues
- Martin Reddy
- Ben Watson
- Funding
- National Science Foundation
The End
Appendix: References
- Perceptually guided offline rendering
  - Bolin, Mark, and G. Meyer. "A Perceptually Based Adaptive Sampling Algorithm," Computer Graphics, Vol. 32 (SIGGRAPH 98).
  - Ferwerda, James, S. Pattanaik, P. Shirley, and D. Greenberg. "A Model of Visual Masking for Realistic Image Synthesis," Computer Graphics, Vol. 30 (SIGGRAPH 96).
  - Ramasubramanian, Mahesh, S. Pattanaik, and D. Greenberg. "A Perceptually Based Physical Error Metric for Realistic Image Synthesis," Computer Graphics, Vol. 33 (SIGGRAPH 99).
Appendix: References (continued)
- Perceptually guided interactive rendering
  - Reddy, Martin. Perceptually-Modulated Level of Detail for Virtual Environments, Ph.D. thesis, University of Edinburgh, 1997.
  - Scoggins, Randy, R. Machiraju, and R. Moorhead. "Enabling Level-of-Detail Matching for Exterior Scene Synthesis," Proceedings of IEEE Visualization 2000 (2000).
- Gaze-directed rendering
  - Funkhouser, Tom, and C. Séquin. "Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments," Computer Graphics, Vol. 27 (SIGGRAPH 93).
  - Ohshima, Toshikazu, H. Yamamoto, and H. Tamura. "Gaze-Directed Adaptive Rendering for Interacting with Virtual Space," Proceedings of VRAIS 96 (1996).
Appendix: References (continued)
- View-dependent simplification
  - Hoppe, Hugues. "View-Dependent Refinement of Progressive Meshes," Computer Graphics, Vol. 31 (SIGGRAPH 97).
  - Luebke, David, and C. Erikson. "View-Dependent Simplification of Arbitrary Polygonal Environments," Computer Graphics, Vol. 31 (SIGGRAPH 97).
  - Xia, Julie, and Amitabh Varshney. "Dynamic View-Dependent Simplification for Polygonal Models," Visualization 96.
- This research
  - Hallen, Benjamin, and David Luebke. "Perceptually Guided Interactive Rendering," UVA tech report CS-2001-01. See http://www.cs.virginia.edu/luebke/temp/tech.report.pdf
  - VDSlib (software library): http://vdslib.virginia.edu