OpenGL Frame Buffers
1
OpenGL Frame Buffers
  • Soon Tee Teoh
  • CS 116A

2
OpenGL Frame Buffers
  • A frame buffer is part of the graphics memory.
  • There is a one-to-one mapping from each segment
    of a frame buffer to each pixel in the display.
  • OpenGL has several frame buffers.
  • Some frame buffers we have encountered before
    are color buffer and depth buffer.
  • In a color buffer, each segment contains the
    color information of the pixel it represents.
  • Actually, OpenGL has several different color
    buffers: GL_FRONT_LEFT, GL_FRONT, GL_BACK,
    GL_RIGHT, etc. They are used for double-buffering
    and stereoscopic displays (see the sketch below).
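
A minimal sketch of how double buffering uses two of these color buffers (the drawing itself is elided):

glDrawBuffer(GL_BACK);   // direct drawing to the back buffer (the default when double-buffered)
/* ... draw the frame ... */
glutSwapBuffers();       // exchange GL_BACK and GL_FRONT, making the new frame visible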

3
How to Read from a Frame Buffer
  • You can read from a frame buffer. For example,
    you may want to read the color buffer, and save
    it to an image file.
  • First, select a buffer to read from with
    glReadBuffer(). For example, glReadBuffer(GL_BACK).
  • For double-buffered systems, the default is
    GL_BACK.
  • Next, set the pixel store mode with glPixelStorei().
  • Write glPixelStorei(GL_PACK_ALIGNMENT,1) so that
    when the pixels are written to client memory, the
    start of each pixel row is byte-aligned. (It is
    the pack alignment that matters when reading; the
    unpack alignment applies when supplying pixel data
    to OpenGL.)
  • Finally, call glReadPixels(), as in the sketch
    below.
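
Putting the three steps together (a minimal sketch; the 640x480 window size is assumed for illustration):

GLubyte image[480][640][3];              // room for a 640x480 RGB read-back
glReadBuffer(GL_BACK);                   // step 1: select the buffer to read
glPixelStorei(GL_PACK_ALIGNMENT, 1);     // step 2: rows packed into memory are byte-aligned
glReadPixels(0, 0, 640, 480, GL_RGB, GL_UNSIGNED_BYTE, image);   // step 3
// image[row][col] now holds the pixel colors, bottom row first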

4
glReadPixels
  • read a block of pixels from the frame buffer
  • void glReadPixels(GLint x, GLint y,
    GLsizei width, GLsizei height, GLenum format,
    GLenum type, GLvoid *pixels)
  • x, y
  • Specify the window coordinates of the first pixel
    that is read from the frame buffer. This location
    is the lower left corner of a rectangular block
    of pixels.
  • width, height
  • Specify the dimensions of the pixel rectangle.
    width and height of one correspond to a single
    pixel.
  • format
  • Specifies the format of the pixel data. The
    following symbolic values are accepted:
    GL_COLOR_INDEX, GL_STENCIL_INDEX,
    GL_DEPTH_COMPONENT, GL_RED, GL_GREEN, GL_BLUE,
    GL_ALPHA, GL_RGB, GL_RGBA, GL_LUMINANCE, and
    GL_LUMINANCE_ALPHA.
  • type
  • Specifies the data type of the pixel data. Must
    be one of GL_UNSIGNED_BYTE, GL_BYTE, GL_BITMAP,
    GL_UNSIGNED_SHORT, GL_SHORT, GL_UNSIGNED_INT,
    GL_INT, or GL_FLOAT.
  • pixels
  • Pointer to memory in which to store the pixel
    data read from the frame buffer.
  • glReadPixels returns values from each pixel with
    lower left-hand corner at (x + i, y + j) for
    0 <= i < width and 0 <= j < height. This pixel is
    said to be the ith pixel in the jth row. Pixels
    are returned in row order from the lowest to the
    highest row, left to right in each row.

5
Use of glReadPixels
Example: How to determine which object is shown
on a selected pixel

GLubyte parray[600][600][3];

// (fx,fy) are the coordinates of the point clicked by the user, starting
// from the bottom left of the display window.
// (0,0) is the bottom left pixel of the display.
void selectgeometry(int fx, int fy)
{
    int ind;

    sideGlutDisplayID();   // display the scene without lighting, and without
                           // glutSwapBuffers; color each object with a unique
                           // R color equal to its unique ID
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, sidewidth, sideheight, GL_RGB, GL_UNSIGNED_BYTE, parray);
    ind = (int)(parray[fy][fx][0]);   // ind now contains the ID of the selected object
}
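
A lighter-weight variant (not from the slides) reads back only the clicked pixel instead of the whole color buffer:

GLubyte pixel[3];
glReadPixels(fx, fy, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);   // 1x1 block at (fx,fy)
ind = (int)pixel[0];   // same object ID as before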
6
Depth Buffer Method
  • The Depth Buffer Method (also called the z-buffer
    method) is used to handle occlusion, so that only
    the surface that is closest to the camera
    position is shown.
  • Using this approach, two frame buffers need to
    exist. One is the color buffer, which keeps the
    color for each pixel. The other is the depth
    buffer, which keeps the depth of each pixel.

7
Depth Buffer Method (continued)
  • When scan-converting a triangle, use a
    surface-rendering method to calculate the color
    at each pixel.
  • Also, calculate the normalized depth of the
    pixel. The normalized depth of each vertex of
    each triangle is automatically generated by the
    viewport normalization matrix. Use the
    incremental method (explained in the following
    slide) to calculate each pixel depth from the
    vertex depths.
  • Next, compare the new pixel depth with the depth
    stored for this pixel in the depth buffer.
  • If the new pixel depth is smaller than the stored
    depth, it means that the new pixel is nearer to
    the viewer. Therefore, the new color replaces the
    stored color for this pixel in the color buffer,
    and the new depth replaces the old depth for this
    pixel in the depth buffer.
  • Otherwise (if the new pixel depth is greater than
    the stored depth), the new pixel is ignored; the
    color and depth buffers for this pixel are not
    changed. (A sketch of this test follows.)
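
A minimal sketch of this per-pixel test, written the way a software rasterizer would implement it (the buffer layout and Color type are illustrative, not from the slides):

#define WIDTH  1024
#define HEIGHT 1024

typedef struct { unsigned char r, g, b; } Color;   // illustrative pixel type

float depthbuf[HEIGHT][WIDTH];   // cleared to 1.0 (farthest normalized depth) each frame
Color colorbuf[HEIGHT][WIDTH];

void write_fragment(int x, int y, float z, Color c)
{
    if (z < depthbuf[y][x]) {    // new fragment is nearer than the stored one
        depthbuf[y][x] = z;      // replace stored depth
        colorbuf[y][x] = c;      // replace stored color
    }
    // otherwise the fragment is discarded and both buffers are left unchanged
}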

8
Disadvantages of the Depth Buffer Method
  • Need a lot of memory for the depth buffer.
  • Suppose we need 16 bits depth for each pixel, and
    there are 1024 x 1024 pixels, then we would need
    2 MB of space for the depth buffer.
  • Can possibly waste a lot of time calculating the
    color of each pixel of a triangle, and then later
    get completely obscured by another triangle.
  • Precision is less for the depths of triangles
    further away from the camera position. (The depth
    calculated by the viewport transformation is only
    the pseudo-depth.)

9
Using Depth Buffer in OpenGL
// tell glut to give you a depth buffer
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);

// need to reset the depth buffer before drawing each frame
glClear(GL_DEPTH_BUFFER_BIT);

// need to enable depth testing
glEnable(GL_DEPTH_TEST);

// set the depth function. GL_LESS means that the incoming pixel passes the
// depth test if its z-value is less than the currently stored value.
// GL_LESS is the default anyway (so, the following line need not be called).
glDepthFunc(GL_LESS);
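
In a full program these calls split between one-time setup and the per-frame display callback (a minimal sketch; drawScene is a placeholder for the application's drawing code):

#include <GL/glut.h>

void drawScene(void);   // placeholder: the application's drawing code

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);  // reset color and depth each frame
    drawScene();
    glutSwapBuffers();
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);  // ask GLUT for a depth buffer
    glutCreateWindow("depth buffering");
    glEnable(GL_DEPTH_TEST);   // once, after the window/context exists
    glDepthFunc(GL_LESS);      // optional: GL_LESS is already the default
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}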
10
Transparency
  • If a is the opacity of the triangle
    (0.0 < a < 1.0), then the color of the rendered
    pixel should be:
    Cfinal = a x Creflected + (1.0 - a) x Cbehind
  • To enable transparency, call glEnable(GL_BLEND)
    and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
    and specify colors in RGBA mode with
    glColor4f(r,g,b,a).
  • Using normal OpenGL depth buffering, to do
    transparency correctly, the programmer needs to
    ensure that polygons are rendered back to front
    (see the sorting sketch below).
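
One way to enforce that order is to sort the transparent primitives by eye-space depth each frame and draw the farthest ones first (a sketch; the Tri struct and its zeye field are hypothetical):

#include <stdlib.h>

typedef struct { float zeye; /* plus vertices, color, alpha ... */ } Tri;   // hypothetical

// In eye coordinates the camera looks down -z, so a more negative zeye is
// farther away; sorting in ascending zeye order gives back-to-front order.
static int back_to_front(const void *a, const void *b)
{
    float za = ((const Tri *)a)->zeye;
    float zb = ((const Tri *)b)->zeye;
    return (za < zb) ? -1 : (za > zb);
}

/* usage: qsort(tris, ntris, sizeof(Tri), back_to_front); then draw in that order */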

11
OpenGL Color Blending
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   // source_factor, destination_factor
glColor4f(1.0, 0.0, 0.0, 0.9);   // almost opaque
glLineWidth(5);
glBegin(GL_LINES);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(1.0, 1.0, 0.0);
glEnd();
glDisable(GL_BLEND);

final_color = source_factor x object_color + destination_factor x frame_buffer_color
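
For example, drawing this line (source color (1.0, 0.0, 0.0), alpha 0.9) over a white frame buffer gives final_color = 0.9 x (1.0, 0.0, 0.0) + 0.1 x (1.0, 1.0, 1.0) = (1.0, 0.1, 0.1): red with a slight wash of the background showing through.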
12
OpenGL Transparency Example
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

glColor4f(1.0, 0.0, 0.0, 1.0);   // fully opaque red quad
glBegin(GL_QUADS);
glVertex3f(0.0, 0.0, 0.0);
glVertex3f(200.0, 0.0, 0.0);
glVertex3f(200.0, 200.0, 0.0);
glVertex3f(0.0, 200.0, 0.0);
glEnd();

glColor4f(1.0, 1.0, 0.0, 0.8);   // partially transparent yellow quad
glBegin(GL_QUADS);
glVertex3f(100.0, 100.0, 0.1);
glVertex3f(300.0, 100.0, 0.1);
glVertex3f(300.0, 300.0, 0.1);
glVertex3f(100.0, 300.0, 0.1);
glEnd();
13
Simulate Atmosphere (Fog) Effect
  • Fog factor f is computed as follows:
  • If mode is GL_EXP, then f = e^(-density * z)
  • If mode is GL_EXP2, then f = e^(-(density * z)^2)
  • If mode is GL_LINEAR, then
    f = (end - z)/(end - start)
    (see the worked example after this list)
  • Note: GL_LINEAR gives you the most control, and
    has the best effects.
  • Then, the final fragment color C is
    C = f * Ci + (1 - f) * Cf
  • where Ci is the fragment's original color and Cf
    is the fog color.
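
For example, with GL_LINEAR, start = 1.0, and end = 5.0 (the values used in the code below), a fragment at depth z = 3.0 gets f = (5.0 - 3.0)/(5.0 - 1.0) = 0.5, so its final color is an even mix of its own color and the fog color.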

float col[4] = { 0.5, 0.5, 0.5, 1.0 };

glEnable(GL_FOG);                    // enable fog effect
glFogi(GL_FOG_MODE, GL_EXP);         // set the mode to GL_EXP; could also be GL_EXP2 or GL_LINEAR
glFogfv(GL_FOG_COLOR, col);          // set the color of the fog
glFogf(GL_FOG_DENSITY, 0.35);        // set the fog density
glHint(GL_FOG_HINT, GL_DONT_CARE);   // per pixel (GL_NICEST) or per vertex (GL_FASTEST)
glFogf(GL_FOG_START, 1.0);           // set start
glFogf(GL_FOG_END, 5.0);             // set end
  • Note:
  • When Fog Mode is set to GL_LINEAR, fog does not
    depend on density.
  • When Fog Mode is set to GL_EXP or GL_EXP2, fog
    does not depend on start and end.

14
Fog Examples
(Four example renderings: GL_EXP, density = 0.35;
GL_EXP2, density = 0.35; GL_LINEAR; GL_EXP,
density = 0.65)
15
Real-Life Fog is Somewhat Linear
(Photograph with the fog "start" and "end"
distances marked)
Note: "end" is where fog reaches 100% saturation.
16
Orthographic Projection with Depth
  • Consider the Orthographic Projection Matrix
    below.
  • It projects from 3D to 2D.
  • Does it preserve depth (z) information? (See the
    check after the matrix.)

M = | 2/(xwmax - xwmin)   0                   0                   -(xwmax + xwmin)/(xwmax - xwmin) |
    | 0                   2/(ywmax - ywmin)   0                   -(ywmax + ywmin)/(ywmax - ywmin) |
    | 0                   0                   -2/(znear - zfar)   (znear + zfar)/(znear - zfar)    |
    | 0                   0                   0                   1                                |
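
From the third row, z' = -2z/(znear - zfar) + (znear + zfar)/(znear - zfar), a linear function of z alone: z = znear maps to -1 and z = zfar maps to +1. So the answer is yes: depth order is preserved, and because the mapping is linear, depth precision is uniform across the whole range.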
17
Perspective Projection with Depth
  • Consider the Perspective Projection Matrix
    below.
  • It projects from 3D to 2D.
  • Does it preserve depth (z) information?
    (Answered below.)

Perspective Projection Matrix M =

| 1   0   0     0 |
| 0   1   0     0 |
| 0   0   1     0 |
| 0   0   1/d   0 |

Side question: Is perspective projection an affine
transformation?
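
Answer to both questions: applying M to a point (x, y, z, 1) gives (x, y, z, z/d); dividing by the new w = z/d yields the projected point (x d/z, y d/z, d). The division by z means ratios along a line are not preserved, so perspective projection is not an affine transformation. Note also that every point lands on the plane z = d, so this simple matrix discards depth information, which motivates the normalization matrix on the next slide.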
18
Perspective Projection with Depth
  • In OpenGL, glFrustum allows an asymmetric
    perspective view frustum.
  • Step 1: Perform a shear to make it symmetric.
  • Step 2: Scale the sides of this symmetric
    frustum.
  • Step 3: Perform the perspective-normalization
    transformation. This transformation not only does
    a perspective projection, it also preserves depth
    information.
  • The combined matrix for all the steps is:

M = | 2*near/(right - left)   0                       (right + left)/(right - left)   0                          |
    | 0                       2*near/(top - bottom)   (top + bottom)/(top - bottom)   0                          |
    | 0                       0                       -(far + near)/(far - near)      -2*far*near/(far - near)   |
    | 0                       0                       -1                              0                          |
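
In code, OpenGL builds this combined matrix for you (a sketch; the numeric frustum bounds are illustrative):

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// glFrustum(left, right, bottom, top, near, far); asymmetric bounds are allowed
glFrustum(-1.0, 2.0, -1.0, 1.0, 1.0, 100.0);
glMatrixMode(GL_MODELVIEW);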
19
Perspective Projection with Depth
  • After perspective projection with the new
    perspective-normalization matrix, a vertex that
    had a greater z value in Viewing Coordinates will
    still have a greater z value in Projected
    Coordinates.
  • However, the transformation is not linear.
  • Points that are far away from the camera have
    less precision in their depth (see the example
    below).
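
To see how uneven the precision is, take near = 1 and far = 100 in the matrix above. The normalized depth of a point at distance z from the camera is z' = (far + near)/(far - near) - 2*far*near/((far - near) * z). Then z = 1 maps to z' = -1 and z = 2 maps to z' = 1/99, so the first unit of distance alone consumes about half of the entire [-1, 1] depth range, leaving the other half for everything from z = 2 out to z = 100.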