1
Rendering pipeline
Viewing / Geometry Processing: world coordinates (floating point)
  • Conservative VSD: selective traversal of the object database (or traverse the scene graph to get the CTM)
  • Transform vertices to the canonical view volume
  • Light at vertices: calculate light intensity at the vertices (lighting model of choice)
  • Conservative VSD: back-face culling
  • Conservative VSD: view-volume clipping
Rendering / Pixel Processing: screen coordinates (integer)
  • Image-precision VSD: compare pixel depth (Z-buffer)
  • Shading: interpolate color values (Gouraud, per polygon) or normals (Phong, per pixel of polygon)
2
Two Corrections!
  • The projection equations in the assignment problem correspond to the COP on the z-axis and the view plane (VP) in the XY plane
  • Interpolated unit normals must be re-normalized to make them unit length again; this is why Phong shading is expensive! (see the sketch below)
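
A minimal sketch of that second point, with a hypothetical vector type and helper name: linearly interpolating two unit normals generally yields a vector shorter than unit length, so every pixel pays for a square root and a divide.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;   /* hypothetical vector type */

    /* Interpolate two unit normals, then re-normalize the result.
       The per-pixel sqrtf and divides are what make Phong expensive. */
    Vec3 lerp_unit_normal(Vec3 n0, Vec3 n1, float t)
    {
        Vec3 n = { n0.x + t * (n1.x - n0.x),
                   n0.y + t * (n1.y - n0.y),
                   n0.z + t * (n1.z - n0.z) };
        float len = sqrtf(n.x * n.x + n.y * n.y + n.z * n.z);
        n.x /= len;  n.y /= len;  n.z /= len;   /* unit length again */
        return n;
    }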

3
Rasterization will convert all objects to pixels in the image, but we need to make sure we don't draw occluded objects. For each pixel in the viewport: what is the nearest object in the scene (provided the object isn't transparent)? Thus, we need to determine the visible surfaces.
4
Definition: Given a set of 3-D objects and a view specification (camera), determine which lines or surfaces of the objects are visible. Also called Hidden Surface Removal (HSR).

[Figure: the canonical house]
5
VSD algorithms
  • We can broadly classify VSD algorithms according to whether they deal with object definitions or with their projected images. The former are called object-space methods, the latter image-space methods.
  • Object-space: compares objects and parts of objects to each other within the scene definition to determine which surfaces, as a whole, we should label as visible.
  • Image-space: visibility is decided point by point at each pixel position on the projection plane.
  • (Most algorithms studied here are image-space.)

6
Back-Face Culling
  • Line-of-sight interpretation
  • The approach assumes objects are defined as closed polyhedra, with the eye point always outside of them
  • Use the outward normal (ON) of a polygon to test for rejection
  • LOS = Line of Sight, the projector from the center of projection (COP) to any point P on the polygon. (For parallel projections, LOS = DOP, the direction of projection)
  • If the normal faces in the same direction as the LOS, it's a back face:
  • if LOS · ON > 0, then the polygon is invisible; discard it
  • if LOS · ON < 0, then the polygon may be visible
  • To render one lone polyhedron, back-face culling is the only VSD you need (see the sketch below)
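
A minimal sketch of the culling test, with an assumed vector type and dot-product helper (not from the slides):

    #include <stdbool.h>

    typedef struct { double x, y, z; } Vec3;

    static double dot(Vec3 a, Vec3 b)
    {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    /* los: projector from the COP to a point P on the polygon
       on:  outward normal of the polygon */
    bool is_back_face(Vec3 los, Vec3 on)
    {
        return dot(los, on) > 0.0;   /* LOS · ON > 0: faces away, discard */
    }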

7
Painter's Algorithm

    sort objects back to front
    for each object:
        rasterize the current object
        write its pixels

  • Draw each object in depth order, from back to front: near objects overwrite far objects
  • Create a drawing order, each polygon overwriting the previous ones, that guarantees correct visibility at any pixel resolution
  • Strategy: work back to front; find a way to sort the polygons by depth (z), then draw them in that order
  • do a rough sort of the polygons by the largest (farthest) z-coordinate in each polygon
  • scan-convert the most distant polygon first, then work forward towards the viewpoint (painter's algorithm)
  • We can either do a complete sort and then scan-convert, or we can paint as we go (see the sketch below)
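
A minimal sketch of the "complete sort, then scan-convert" variant; the Polygon layout and scan_convert rasterizer are assumed placeholders, and larger z is taken to be farther from the eye, as elsewhere in these slides:

    #include <stdlib.h>

    typedef struct {
        double z_max;   /* largest (farthest) z among the polygon's vertices */
        /* vertices, color, ... */
    } Polygon;

    void scan_convert(const Polygon *p);   /* assumed rasterizer */

    /* Larger (farther) z sorts first, so we paint back to front. */
    static int back_to_front(const void *a, const void *b)
    {
        double za = ((const Polygon *)a)->z_max;
        double zb = ((const Polygon *)b)->z_max;
        return (za < zb) - (za > zb);
    }

    void painters_algorithm(Polygon *polys, size_t n)
    {
        qsort(polys, n, sizeof polys[0], back_to_front);
        for (size_t i = 0; i < n; i++)
            scan_convert(&polys[i]);   /* near polygons overwrite far ones */
    }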
8
Depth Buffer Method
  • A commonly used image-space approach for VSD.
  • Compares surface depth values throughout the
    scene for each pixel position on the projection
    plane.
  • Each surface of a scene is processed separately,
    one pixel position at a time across the surface.
  • Usually applied to scenes containing polygon surfaces.
  • Implementation of the depth-buffer algorithm is typically carried out in normalized coordinates, so that depth values range from 0 at the near clipping plane to 1.0 at the far clipping plane. (Window-to-viewport mapping is then done, and lighting is calculated for each pixel.)
  • Also called z-buffer method.

9
  • The Z-buffer algorithm
  • The Z-buffer is initialized to the background value (the farthest plane of the view volume, 1.0)
  • As each object is traversed, the z-values of all its sample points are compared to the z-value at the same (x, y) location in the Z-buffer
  • z could be determined by plugging x and y into the polygon's plane equation (Ax + By + Cz + D = 0)
  • in reality, calculate z at the vertices and interpolate the rest
  • If a new point has a z-value less than the previous one (i.e., it is closer to the eye), its z-value is placed in the Z-buffer and its color is placed in the frame buffer at the same (x, y); otherwise the previous z-value and frame-buffer color are unchanged
  • Depth can be stored as integers, floats, or fixed point
  • e.g., for an 8-bit (1-byte) integer z-buffer, map 0.0 -> 0 and 1.0 -> 255 (see the sketch below)
  • each representation has its advantages in terms of precision
  • Doesn't handle transparencies well.
  • Z-buffers typically use integer depth values
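
For instance, a minimal sketch of that 8-bit mapping (a hypothetical helper):

    /* Map normalized depth in [0.0, 1.0] to an 8-bit Z-buffer entry:
       0.0 -> 0 (near plane), 1.0 -> 255 (far plane). */
    unsigned char quantize_depth(double z)
    {
        if (z < 0.0) z = 0.0;
        if (z > 1.0) z = 1.0;
        return (unsigned char)(z * 255.0 + 0.5);   /* round to nearest level */
    }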

10
  • Requires two buffers:
  • Intensity buffer: our familiar RGB pixel buffer, initialized to the background color
  • Depth (Z) buffer: the depth of the scene at each pixel, initialized to the far depth (255)
  • Polygons are scan-converted in arbitrary order. When pixels overlap, use the Z-buffer to decide which polygon gets that pixel.

[Figure: example using an integer Z-buffer with near = 0, far = 255]
11
  • Draw every polygon that we can't reject trivially
  • If we find a piece (one or more pixels) of a polygon that is closer to the front, we paint over whatever was behind it

    void zBuffer()
    {
        int x, y;
        /* Initialize the frame buffer and the Z-buffer */
        for (y = 0; y < YMAX; y++)
            for (x = 0; x < XMAX; x++) {
                WritePixel(x, y, BACKGROUND_VALUE);
                WriteZ(x, y, 1);
            }
        for (each polygon)
            for (each pixel in polygon's projection) {
                double pz = polygon's Z-value at pixel (x, y);
                if (pz < ReadZ(x, y)) {
                    /* New point is closer to front of view */
                    WritePixel(x, y, polygon's color at pixel (x, y));
                    WriteZ(x, y, pz);
                }
            }
    }
12
  • Once we have za and zb for each edge, we can incrementally calculate zp as we scan across the span (see the sketch below)
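
A minimal sketch of that incremental step, assuming a test_and_set_z helper that does the Z-buffer compare-and-write:

    void test_and_set_z(int x, double z);   /* assumed Z-buffer compare/write */

    /* za, zb: depths where the scan line crosses the left and right edges;
       xa, xb: the corresponding pixel columns. Depth varies linearly across
       the span, so each pixel costs one addition instead of a full
       plane-equation solve. */
    void interpolate_span(int xa, int xb, double za, double zb)
    {
        double dz = (xb > xa) ? (zb - za) / (double)(xb - xa) : 0.0;
        double z  = za;
        for (int x = xa; x <= xb; x++) {
            test_and_set_z(x, z);   /* zp at this pixel */
            z += dz;
        }
    }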

13
  • Simplicity lends itself well to hardware implementations: FAST
  • used by all graphics cards
  • Polygons do not have to be compared in any particular order: no presorting in z is necessary
  • Only considers one polygon at a time
  • brute force, but it is fast!
  • The Z-buffer can be stored with an image; this allows you to correctly composite multiple images (easy!) without having to merge the models (hard!)
  • great for incremental addition to a complex scene
  • Can be used for non-polygonal surfaces, CSGs, and any z = f(x, y)
  • In some systems, the user can provide a region to z-buffer, thus saving computation time
  • Also, z-buffering can be performed for a small region and moved around to finish the entire viewport

14
A-Buffer
  • A drawback of the depth buffer is that it identifies only one visible surface at each pixel position, i.e., it deals only with opaque surfaces.
  • For transparent surfaces, it is necessary to accumulate color values for more than one surface.
  • In the A-buffer, each depth-buffer position references a linked list of surfaces.
  • This allows a pixel color to be computed as a combination of different surface colors, for transparency and anti-aliasing effects.
  • Surface information in the A-buffer (accumulation buffer) includes RGB components, opacity, depth, percent of area coverage (used for anti-aliasing), other rendering parameters, etc. (see the sketch below)
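
A minimal sketch of what each A-buffer pixel might store; the field names are illustrative, not from a specific system:

    /* One surface fragment in a pixel's linked list. */
    typedef struct Fragment {
        float rgb[3];            /* surface color */
        float opacity;           /* for transparency */
        float depth;             /* for depth-ordered compositing */
        float coverage;          /* fraction of pixel area covered (anti-aliasing) */
        struct Fragment *next;   /* next surface at this pixel position */
    } Fragment;

    /* The A-buffer: one fragment list per pixel instead of one depth value. */
    typedef struct {
        int width, height;
        Fragment **pixels;       /* pixels[y * width + x] heads the list */
    } ABuffer;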

15
Scan-Line Algorithm (an alternative to the Z-buffer)
  • (Wylie, Romney, Evans and Erdahl)

    for each horizontal scan line:
        find all intersections with the edges of all polygons
            (ignore horizontal boundaries)
        sort the intersections by increasing x and store them in the Edge Table
        for each intersection on the scan line:
            if the intersected edge is a left edge:    /* entering a polygon */
                mark that polygon as "in"
                determine whether the polygon is visible, and if so use its
                    color (from the Polygon Table) up to the next intersection
            else:                                      /* right edge: leaving a polygon */
                determine which polygon is visible to the right of the edge,
                    and use its color up to the next intersection

[Table: Active Edge Table contents; see the illustrative sketch below]
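
Since the original table did not survive, here is an illustrative sketch of what one Active Edge Table entry typically holds; the layout follows the slide's Edge Table / Polygon Table organization but is an assumption, not the original contents:

    /* One Active Edge Table entry for the scan-line algorithm. */
    typedef struct Edge {
        double x;            /* x where the edge crosses the current scan line */
        double dx_per_scan;  /* change in x from one scan line to the next */
        int    y_max;        /* scan line beyond which the edge is dropped */
        int    poly_id;      /* index into the Polygon Table (color, plane) */
        struct Edge *next;   /* entries kept sorted by increasing x */
    } Edge;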
16
  • Ray Casting
  • Ray casting is based on geometric optics, which traces the paths of light rays.
  • It is a special case of ray tracing, which traces multiple ray paths to pick up global reflection and refraction contributions from multiple objects in the scene.
  • Consider the line of sight from a pixel position on the view plane through the scene: we can determine which objects in the scene (if any) it intersects.
  • After calculating all ray-surface intersections, we identify the visible surface as the one whose intersection point is closest to the pixel.
  • It works with any primitive we can write intersection tests for.
  • But it is slow.
  • Can be used for shadows, refractive objects, reflections, etc.

    for each pixel (x, y):
        shoot a ray from the eye through (x, y)
        intersect it with all surfaces
        find the first (closest) intersection point
        write the pixel
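
A minimal sketch of that loop, reusing WritePixel from the zBuffer pseudocode above; ray_through_pixel and nearest_hit are assumed helpers, not from the slides:

    typedef struct { double x, y, z; } Vec3;

    void WritePixel(int x, int y, int color);          /* as in zBuffer */
    Vec3 ray_through_pixel(int x, int y);              /* assumed: eye ray for (x, y) */
    int  nearest_hit(Vec3 dir, double *t, int *color); /* assumed: closest surface hit */

    void ray_cast(int width, int height, int background)
    {
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++) {
                Vec3 dir = ray_through_pixel(x, y);   /* shoot ray through (x, y) */
                double t; int color;
                if (nearest_hit(dir, &t, &color))     /* first intersection wins */
                    WritePixel(x, y, color);
                else
                    WritePixel(x, y, background);
            }
    }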