1
Lighting and Shading
2
Lighting and Shading
  • Illumination
  • The flux of light energy from light sources to
    objects in the scene via direct and indirect
    paths
  • Lighting
  • The process of computing the luminous
    intensity reflected from a specified 3-D point
  • Shading
  • The process of assigning a color to a pixel

3
Energy and Power of Light
  • Light is a form of energy
  • measured in Joules (J)
  • Power = energy per unit time
  • Measured in Joules/sec = Watts (W)
  • Also known as Radiant Flux (Φ)

4
Point Light Source
  • Total radiant flux in Watts
  • Energy emitted in unit time
  • How to define angular dependence?
  • Use solid angle
  • Define power per unit solid angle: Radiant
    Intensity (I)
  • Measured in Watts per steradian (W/sr)

5
Light Emitted from a Surface
  • Radiance (L): power per unit area per unit solid
    angle
  • Measured in W/(m²·sr)
  • dA is the projected area perpendicular to the given
    direction
  • Radiosity (B): radiance integrated over all
    directions
  • Power per unit area, measured in W/m²
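
In standard radiometric notation, these definitions read:

    L = d²Φ / (dω dA⊥),  where dA⊥ = dA cos θ
    B = ∫Ω L cos θ dω

with θ the angle between the surface normal and the direction of interest,
and Ω the hemisphere of directions above the surface.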

6
Light Falling on a Surface
  • Power falling on a unit area: Irradiance (E)
  • Measured in W/m²
  • Depends on
  • Distance from the light source: inverse square
    law, E ∝ 1/r²
  • Incident direction: cosine law, E ∝ n·l

7
Irradiance
  • The irradiance (E) is an integral of all
    incident radiance.
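
In standard form, this integral over the hemisphere Ω of incoming
directions is

    E = ∫Ω Li(ω) cos θ dω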

8
Reflection
  • The amount of reflected radiance is proportional
    to the incident radiance.

9
BRDF
  • The proportionality factor relating incident to
    reflected radiance is called the Bidirectional
    Reflectance Distribution Function (BRDF)
  • It is a surface property
  • Relates energy in to energy out
  • Depends on incoming and outgoing directions
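
In standard notation the BRDF is the ratio of outgoing radiance to
incident irradiance:

    fr(ωi, ωo) = dLo(ωo) / dEi(ωi) = dLo(ωo) / ( Li(ωi) cos θi dωi )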

10
Local Illumination
  • The Phong illumination model approximates the BRDF
    with a combination of diffuse and specular
    components.

11
Energy Balance Equation
  • The total light leaving a point is given by the
    sum of two major terms
  • Emitted from the point
  • Incoming light from other sources reflected at
    the point

12
Rendering Equation
  • L is the radiance from a point on a surface in a
    given direction ω
  • E is the emitted radiance from a point; E is
    non-zero only if x is emissive
  • V is the visibility term: 1 when the surfaces
    are unobstructed along the direction ω, 0
    otherwise
  • G is the geometry term, which depends on the
    geometric relationship between the two surfaces x
    and x′
  • It is very hard to solve this equation directly;
    we have to use approximations.
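
One standard way to write the equation these terms belong to (the
surface-to-surface form of the rendering equation) is

    L(x′, ω) = E(x′, ω) + ∫S ρ(x′, ω′, ω) L(x, ω′) V(x, x′) G(x, x′) dA

where ρ is the BRDF at x′, ω′ is the direction from x to x′, and the
integral runs over all surfaces S in the scene.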

13
Fast and Dirty Approximations (OpenGL)
  • Use red, green, and blue instead of full spectrum
  • Roughly follows the eye's sensitivity
  • Forego such complex surface behavior as metals
  • Use finite number of point light sources instead
    of full hemisphere
  • Integration changes to summation
  • Forego such effects as soft shadows and color
    bleeding
  • BRDF is treated as acting independently on each color
  • Treat red, green, and blue as three separate
    computations
  • Forego such effects as iridescence and refraction
  • BRDF split into three approximate effects
  • Ambient constant, non-directional, background
    light
  • Diffuse light reflected uniformly in all
    directions
  • Specular light of higher intensity in
    mirror-reflection direction
  • Radiance L replaced by a simple "intensity" I
  • No pretense of being physically true

14
Approximate Intensity Equation (single light
source)
  • Per color channel λ (a standard form consistent with the
    terms below):
    Iλ = k^e + k^a I^a + Σl Il [ k^d (n·l) + k^s (cos φ)^nshiny ]
  • λ stands for each of red, green, blue
  • Il is the intensity of the light source
    (modified for distance)
  • (n·l) accounts for the angle of the incoming
    light
  • the k are between 0 and 1 and represent
    absorption factors
  • k^s accounts for any highlight effects that
    depend on the incoming direction
  • it can be taken as a constant if there is nothing
    special
  • φ is the mirror reflection angle for the light:
    the angle between the view direction and the
    mirror reflection direction
  • (cos φ)^nshiny accounts for highlights in the mirror
    reflection direction
  • the superscripts e, a, d, s stand for emitted,
    ambient, diffuse, specular respectively
  • sum over each light l if there is more than one
15
OpenGL Lighting Model
  • Local illumination model
  • Only depends on the relationship to the light source.
  • Doesn't consider light reflected or refracted from
    other objects.
  • Doesn't model shadows (can be faked)
  • A point is lighted only if it can see the light
    source.
  • It's difficult to compute visibility to light
    sources in complex scenes. OpenGL only tests whether
    the polygon faces the light source.
  • For a point P on a polygon with normal n, P is
    lighted by the light source at Q only if n·(Q − P) > 0

16
Ambient Light Source
  • An object is lighted by the ambient light even
    if it is not visible to any light source.
  • Ambient light
  • no spatial or directional characteristics.
  • The amount of ambient light incident on each
    object is a constant for all surfaces in the
    scene.
  • An ambient light can have a color.
  • The amount of ambient light that is reflected by
    an object is independent of the object's position
    or orientation.
  • Surface properties are used to determine how
    much ambient light is reflected.

17
Directional Light Sources
  • All of the rays from a directional light source
    have a common direction, and no point of origin.
  • It is as if the light source was infinitely far
    away from the surface that it is illuminating.
  • Sunlight is an example of a directional light
    source.
  • The direction from a surface to a light source
    is important for computing the light reflected
    from the surface.
  • A directional light source has a constant
    direction for every surface. A directional light
    source can be colored.
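
For reference, a hedged sketch of how this looks in fixed-function OpenGL
(a GL context is assumed): a light position whose fourth (w) component is
0 is treated as a direction.

    GLfloat sunDirection[4] = {0.3f, 1.0f, 0.5f, 0.0f};   /* w = 0: directional light */
    glLightfv(GL_LIGHT0, GL_POSITION, sunDirection);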

18
Point Light Sources
  • The point light source emits rays in radial
    directions from its source. A point light source
    is a fair approximation to a local light source
    such as a light bulb.
  • The direction of the light to each point on a
    surface changes when a point light source is
    used. Thus, a normalized vector to the light
    emitter must be computed for each point that is
    illuminated.

19
Other Light Sources
  • Spotlights
  • Restricting the shape of light emitted by a point
    light to a cone.
  • Requires a color, a point, a direction, and a
    cutoff angle to define the cone.
  • Area Light Sources
  • Light source occupies a 2-D area (usually a
    polygon or disk)
  • Generates soft shadows
  • Extended Light Sources
  • Spherical Light Source
  • Generates soft shadows
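
A hedged OpenGL sketch of the spotlight case described above (values are
illustrative; a GL context is assumed):

    GLfloat spotColor[4]     = {1.0f, 1.0f, 1.0f, 1.0f};
    GLfloat spotPosition[4]  = {0.0f, 5.0f, 0.0f, 1.0f};   /* w = 1: positional      */
    GLfloat spotDirection[3] = {0.0f, -1.0f, 0.0f};        /* aimed straight down    */
    glLightfv(GL_LIGHT1, GL_DIFFUSE, spotColor);
    glLightfv(GL_LIGHT1, GL_POSITION, spotPosition);
    glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, spotDirection);
    glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 30.0f);            /* cone half-angle (deg)  */
    glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 2.0f);           /* falloff within the cone */
    glEnable(GL_LIGHT1);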

20
OpenGL Specifications
  • Available light models in OpenGL
  • Ambient lights
  • Point lights
  • Directional lights
  • Spot lights
  • Attenuation
  • Physical attenuation: proportional to 1/d²
  • OpenGL attenuation factor: 1 / (a + b·d + c·d²)
  • Default: a = 1, b = 0, c = 0
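
A hedged sketch of setting these factors for GL_LIGHT0 (the non-default
values are illustrative):

    glLightf(GL_LIGHT0, GL_CONSTANT_ATTENUATION,  1.0f);   /* a               */
    glLightf(GL_LIGHT0, GL_LINEAR_ATTENUATION,    0.05f);  /* b (illustrative) */
    glLightf(GL_LIGHT0, GL_QUADRATIC_ATTENUATION, 0.01f);  /* c (illustrative) */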

21
Example of OpenGL Light
  • Setting up a simple lighting situation
  • GLfloat ambientIntensity[4] = {0.1, 0.1, 0.1,
    1.0};
  • GLfloat diffuseIntensity[4] = {1.0, 0.0, 0.0,
    1.0};
  • GLfloat position[4] = {2.0, 4.0, 5.0, 1.0};
  • glEnable(GL_LIGHTING);  // enable lighting
  • glEnable(GL_LIGHT0);    // enable light 0
  • // set up light 0 properties
  • glLightfv(GL_LIGHT0, GL_AMBIENT,
    ambientIntensity);
  • glLightfv(GL_LIGHT0, GL_DIFFUSE,
    diffuseIntensity);
  • glLightfv(GL_LIGHT0, GL_POSITION, position);

22
Ideal Diffuse Reflection
  • Ideal Diffuse Reflection an incoming ray of
    light is equally likely to be reflected in any
    direction over the hemisphere.
  • An ideal diffuse surface is, at the microscopic
    level, a very rough surface. Chalk is a good
    approximation to an ideal diffuse surface.
  • The reflected intensity is independent of the
    viewing direction. The intensity does however
    depend on the light source's orientation relative
    to the surface.

23
Computing Diffuse Reflection
  • Angle of incidence: the angle between the
    surface normal and the incoming light ray
  • Lambert's law states that the reflected energy
    from a small surface area in a particular
    direction is proportional to the cosine of the angle
    of incidence.
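
A minimal C sketch of this rule (an illustration, not code from the
slides; n and l are assumed to be unit vectors):

    /* Diffuse (Lambertian) term for one light: kd * Il * max(n.l, 0). */
    float diffuse_term(const float n[3], const float l[3], float kd, float Il)
    {
        float ndotl = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
        if (ndotl < 0.0f)
            ndotl = 0.0f;   /* light behind the surface contributes nothing */
        return kd * Il * ndotl;
    }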

24
Diffuse Lighting Examples
  • We need only consider angles from 0 to 90
    degrees. Greater angles (where the dot product is
    negative) are blocked by the surface, and the
    reflected energy is 0.
  • Below are several examples of a spherical
    diffuse reflector with varying lighting angles.

25
Specular Reflection
  • A specular reflector is necessary to model a
    shiny surface, such as polished metal or a glossy
    car finish.
  • We see a highlight, or bright spot on those
    surfaces.
  • Where this bright spot appears on the surface is
    a function of where the surface is seen from.
    This type of reflectance is view dependent.
  • At the microscopic level a specular reflecting
    surface is very smooth, and usually these
    microscopic surface elements are oriented in the
    same direction as the surface itself.
  • Specular reflection is merely the mirror
    reflection of the light source in a surface. An
    ideal mirror is a purely specular reflector.

26
Reflection
  • The incoming ray, the surface normal, and the
    reflected ray all lie in a common plane.
  • According to Snell's law, the incident angle is
    equal to the reflection angle on a perfectly
    reflective surface.

27
Non-ideal Reflectors
  • Snell's law, however, applies only to ideal
    mirror reflectors. Real materials tend to deviate
    significantly from ideal reflectors. At this
    point we will introduce an empirical model that
    is consistent with our experience, at least to a
    crude approximation.
  • In general, we expect most of the reflected
    light to travel in the direction of the ideal
    ray. However, because of microscopic surface
    variations we might expect some of the light to
    be reflected just slightly offset from the ideal
    reflected ray. As we move farther and farther, in
    the angular sense, from the reflected ray we
    expect to see less light reflected.

28
Phong Illumination
  • One function that approximates this fall off is
    called the Phong Illumination model. This model
    has no physical basis, yet it is one of the most
    commonly used illumination models in computer
    graphics.
  • The cosine term is maximum when the surface is
    viewed from the mirror direction and falls off to
    0 when viewed at 90 degrees away from it. The
    scalar nshiny controls the rate of this fall off.

29
Effect of the nshiny coefficient
  • The diagram below shows how the reflectance
    drops off in the Phong illumination model. For a
    large value of the nshiny coefficient, the
    reflectance decreases rapidly with increasing
    viewing angle.

30
Computing Phong Illumination
  • The V vector is the unit vector in the direction
    of the viewer and the R vector is the mirror
    reflectance direction. The vector R can be
    computed from the incoming light direction and
    the surface normal
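
A hedged C sketch of the specular term (illustrative only); n, l, v are
assumed to be unit vectors, and R is built from the reflection formula
R = 2(n·l)n − l:

    #include <math.h>

    /* Phong specular term: ks * Il * (max(r.v, 0))^nshiny. */
    float phong_specular(const float n[3], const float l[3], const float v[3],
                         float ks, float Il, float nshiny)
    {
        float ndotl = n[0]*l[0] + n[1]*l[1] + n[2]*l[2];
        float r[3], rdotv;
        int k;
        for (k = 0; k < 3; ++k)
            r[k] = 2.0f * ndotl * n[k] - l[k];   /* mirror reflection direction */
        rdotv = r[0]*v[0] + r[1]*v[1] + r[2]*v[2];
        if (rdotv < 0.0f)
            rdotv = 0.0f;
        return ks * Il * powf(rdotv, nshiny);
    }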

31
Blinn Torrance Variation
  • Jim Blinn introduced another approach for
    computing Phong-like illumination, based on the
    work of Ken Torrance. His illumination function
    uses a half-vector formulation (see the sketch below).
  • In this equation the angle of specular
    dispersion is computed by how far the surface's
    normal is from a vector bisecting the incoming
    light direction and the viewing direction.
  • On your own you should consider how this
    approach and the previous one differ. OpenGL
    implements this model.
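
A matching C sketch of the half-vector variant (illustrative only); h
bisects the light and view directions, and (n·h)^nshiny replaces
(r·v)^nshiny:

    #include <math.h>

    /* Blinn-style specular term using the half-vector h = normalize(l + v). */
    float blinn_specular(const float n[3], const float l[3], const float v[3],
                         float ks, float Il, float nshiny)
    {
        float h[3], len, ndoth;
        int k;
        for (k = 0; k < 3; ++k)
            h[k] = l[k] + v[k];
        len = sqrtf(h[0]*h[0] + h[1]*h[1] + h[2]*h[2]);
        for (k = 0; k < 3; ++k)
            h[k] /= len;                      /* normalize the half-vector */
        ndoth = n[0]*h[0] + n[1]*h[1] + n[2]*h[2];
        if (ndoth < 0.0f)
            ndoth = 0.0f;
        return ks * Il * powf(ndoth, nshiny);
    }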

32
Phong Examples
  • The following spheres illustrate specular
    reflections as the direction of the light source
    and the coefficient of shininess are varied.

33
Putting it all together
  • Phong Illumination Model

34
Colored Lights and Surfaces
  • for each light Ii
  • for each color component
  • reflectance coefficients kd, ks, and ka: scalars
    between 0 and 1; may or may not vary with color
  • nshiny: a scalar integer, roughly 1 for a diffuse
    surface, roughly 100 for shiny metallic surfaces
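
A sketch of the per-channel evaluation described above, reusing the
hypothetical diffuse_term and phong_specular helpers sketched earlier; Ia
and Il are RGB intensities, ka/kd/ks are RGB reflectance coefficients:

    /* Evaluate the local model independently for red, green, and blue. */
    void shade_point(const float n[3], const float l[3], const float v[3],
                     const float ka[3], const float kd[3], const float ks[3],
                     const float Ia[3], const float Il[3], float nshiny,
                     float rgb[3])
    {
        int c;
        for (c = 0; c < 3; ++c)
            rgb[c] = ka[c] * Ia[c]
                   + diffuse_term(n, l, kd[c], Il[c])
                   + phong_specular(n, l, v, ks[c], Il[c], nshiny);
    }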

35
Where do we Illuminate?
  • To this point we have discussed how to compute
    an illumination model at a point on a surface.
    But, at which points on the surface is the
    illumination model applied? Where and how often
    it is applied has a noticeable effect on the
    result.
  • Lighting can be a costly process involving the
    computation and normalization of vectors to
    multiple light sources and the viewer.
  • For models defined by collections of polygonal
    facets or triangles
  • Each facet has a common surface normal
  • If the light is directional then the diffuse
    contribution is constant across the facet. Why?
  • If the eye is infinitely far away and the light
    is directional then the specular contribution is
    constant across the facet. Why?

36
Flat Shading
  • The simplest shading method applies only one
    illumination calculation for each primitive. This
    technique is called constant or flat shading. It
    is often used on polygonal primitives.
  • Drawbacks
  • the direction to the light source varies over the
    facet
  • the direction to the eye varies over the facet
  • Nonetheless, illumination is often computed for
    only a single point on the facet, usually the
    centroid.

37
Facet Shading
  • Even when the illumination equation is applied
    at each point of the facet, the polygonal nature
    is still apparent.
  • To overcome this limitation normals are
    introduced at each vertex.
  • different than the polygon normal
  • for shading only (not backface culling or other
    computations)
  • better approximates smooth surfaces

38
Vertex Normals
  • If vertex normals are not provided they can
    often be approximated by averaging the normals of
    the facets which share the vertex.
  • This only works if the polygons reasonably
    approximate the underlying surface.
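
A small C sketch of this averaging (illustrative; the facet normals are
assumed to be unit length):

    #include <math.h>

    /* Approximate the vertex normal as the normalized sum of the normals of
       the count facets that share the vertex. */
    void vertex_normal(const float facet_normal[][3], int count, float n[3])
    {
        float len;
        int i, k;
        n[0] = n[1] = n[2] = 0.0f;
        for (i = 0; i < count; ++i)
            for (k = 0; k < 3; ++k)
                n[k] += facet_normal[i][k];
        len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
        for (k = 0; k < 3; ++k)
            n[k] /= len;
    }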

39
Gouraud Shading
  • The Gouraud shading method applies the
    illumination model at each vertex; the colors
    in the triangle's interior are linearly
    interpolated from these vertex values.
  • Implemented in OpenGL as Smooth Shading.
  • Notice that facet artifacts are still visible.

40
Phong Shading
  • In Phong shading (not to be confused with the
    Phong illumination model), the surface normal is
    linearly interpolated across polygonal facets,
    and the Illumination model is applied at every
    point.
  • A Phong shader assumes the same input as a
    Gouraud shader, which means that it expects a
    normal for every vertex. The illumination model
    is applied at every point on the surface being
    rendered, where the normal at each point is the
    result of linearly interpolating the vertex
    normals defined at each vertex of the triangle.
  • Phong shading will usually result in a very
    smooth appearance; however, evidence of the
    polygonal model can usually be seen along
    silhouettes.

41
Gouraud and Phong Shading
42
OpenGL Specifications
  • Each light has ambient, diffuse, and specular
    components.
  • Each light can be a point light, directional
    light, or spot light.
  • Directional light is a point light positioned at
    infinity
  • Shading models: flat and smooth
  • glShadeModel()
  • GL_FLAT, GL_SMOOTH
  • Smooth model uses Gouraud shading
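
For example (a GL context is assumed):

    glShadeModel(GL_SMOOTH);   /* Gouraud shading: interpolate vertex colors (default) */
    /* glShadeModel(GL_FLAT);     flat shading: one color per primitive */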

43
OpenGL Examples
  • We have shown how to set up lights in OpenGL
  • Set surface material properties
  • glMaterialf(GLenum face, GLenum pname, GLfloat
    param)
  • glMaterialfv(GLenum face, GLenum pname, const
    GLfloat *params)
  • Face can be GL_FRONT, GL_BACK, or
    GL_FRONT_AND_BACK
  • Pname can be GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR,
    GL_SHININESS, and GL_EMISSION
  • GLfloat mat_specular[] = {1.0, 1.0, 1.0, 1.0};
  • GLfloat low_shininess[] = {5.0};
  • glMaterialfv(GL_FRONT, GL_SPECULAR,
    mat_specular);
  • glMaterialfv(GL_FRONT, GL_SHININESS, low_shininess);
  • A nonzero GL_EMISSION makes an object appear to be
    giving off light of that color
  • Refer to the OpenGL Programming Guide for more
    details

44
Triangle Normals
  • Surface normals are the most important geometric
    surface characteristic used in computing
    illumination models. They are used in computing
    both the diffuse and specular components of
    reflection.
  • On a faceted planar surface, vectors in the
    tangent plane can be computed from surface
    points (see the sketch below).
  • The normal is always orthogonal to the tangent plane
    at a point. Thus, given two tangent vectors we
    can compute the normal as their cross product; this
    normal is perpendicular to both tangent vectors.
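
A C sketch of that computation (an illustration; p0, p1, p2 are three
non-collinear points on the facet):

    /* Facet normal from two edge (tangent) vectors via the cross product. */
    void facet_normal(const float p0[3], const float p1[3], const float p2[3],
                      float n[3])
    {
        float e1[3], e2[3];
        int k;
        for (k = 0; k < 3; ++k) {
            e1[k] = p1[k] - p0[k];   /* first tangent vector  */
            e2[k] = p2[k] - p0[k];   /* second tangent vector */
        }
        n[0] = e1[1]*e2[2] - e1[2]*e2[1];
        n[1] = e1[2]*e2[0] - e1[0]*e2[2];
        n[2] = e1[0]*e2[1] - e1[1]*e2[0];
        /* normalize before use if a unit normal is required */
    }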

45
Normals of Curved Surfaces
  • Not all surfaces are given as planar facets. A
    common example of such a surface is a parametric
    surface. For a parametric surface the three-space
    coordinates are determined by functions of two
    parameters u and v.
  • The tangent vectors are computed with partial
    derivatives and the normal with a cross product
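
For p(u, v) = (x(u, v), y(u, v), z(u, v)), this gives

    n(u, v) = ∂p/∂u × ∂p/∂v

(normalize before use if a unit normal is needed).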

46
Normals of Implicit Surfaces
  • Normals of implicit surfaces S are even simpler
  • The normal is proportional to the gradient vector of S
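
For an implicit surface S(x, y, z) = 0, the (unnormalized) normal is the
gradient

    n = ∇S = ( ∂S/∂x, ∂S/∂y, ∂S/∂z )

evaluated at the surface point.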

47
Texture Mappings
48
Mapping Techniques
  • Consider the problem of rendering a sphere in the
    examples
  • The geometry is very simple - a sphere
  • But the color changes rapidly
  • With the local shading model, so far, the only
    place to specify color is at the vertices
  • To get color details, would need thousands of
    polygons for a simple shape
  • The same goes for an orange: simple shape but
    complex normal vectors
  • Solution: mapping techniques use simple geometry
    modified by a mapping of some type

49
textures
The concept is very simple!
50
Texture Mapping
  • Texture mapping associates the color of a point
    with the color in an image (the texture)
  • Each point on the sphere gets the color of the
    mapped pixel of the texture
  • Question to address: which point of the texture
    do we use for a given point on the surface?
  • Establish a mapping from surface points to image
    points
  • Different mappings are common for different
    shapes
  • We will, for now, just look at triangles
    (polygons)

51
Basic Mapping
  • The texture lives in a 2D space
  • Parameterize points in the texture with 2
    coordinates (s,t)
  • These are just what we would call (x,y) if we
    were talking about an image, but we wish to avoid
    confusion with the world (x,y,z)
  • Define the mapping from (x,y,z) in world space to
    (s,t) in texture space
  • To find the color in the texture, take an (x,y,z)
    point on the surface, map it into texture space,
    and use it to look up the color of the texture
  • With polygons
  • Specify (s,t) coordinates at vertices
  • Interpolate (s,t) for other points based on given
    vertices

52
Texture Interpolation
  • Specify where the vertices in world space are
    mapped to in texture space
  • Linearly interpolate the mapping for other points
    in world space
  • Straight lines in world space go to straight
    lines in texture space

[Figure: a triangle in world space and its image in (s, t) texture space]
53
Textures
  • Two-dimensional texture pattern T(s, t)
  • It is stored in texture memory as an n × m array of
    texture elements (texels)
  • Due to the nature of the rendering process, which
    works on a pixel-by-pixel basis, we are more
    interested in the inverse map from screen
    coordinates to texture coordinates

54
Computing Color in Texture mapping
  • Associate texture with polygon
  • Map pixel onto polygon and then into texture map
  • Use a weighted average of the covered texels to
    compute the color.

55
Basic OpenGL Texturing
  • Specify texture coordinates for the polygon
  • Use glTexCoord2f(s,t) before each vertex
  • E.g., glTexCoord2f(0,0); glVertex3f(x,y,z);
  • Create a texture object and fill it with texture
    data
  • glGenTextures(num, indices) to get identifiers
    for the objects
  • glBindTexture(GL_TEXTURE_2D, identifier) to bind
    the texture
  • Following texture commands refer to the bound
    texture
  • glTexParameteri(GL_TEXTURE_2D, ..., ...) to specify
    parameters to use when applying the texture
  • glTexImage2D(GL_TEXTURE_2D, ...) to specify the
    texture data (the image itself)
56
Basic OpenGL Texturing (cont)
  • Enable texturing glEnable(GL_TEXTURE_2D)
  • State how the texture will be used
  • glTexEnvf()
  • Texturing is done after lighting
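
Putting the calls above together, a hedged sketch of a complete setup (the
names texId, width, height, and image are hypothetical application data; a
GL context is assumed):

    GLuint texId;
    glGenTextures(1, &texId);
    glBindTexture(GL_TEXTURE_2D, texId);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, image);
    glEnable(GL_TEXTURE_2D);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

    /* When drawing, give each vertex a texture coordinate before the vertex: */
    glBegin(GL_TRIANGLES);
      glTexCoord2f(0.0f, 0.0f); glVertex3f(0.0f, 0.0f, 0.0f);
      glTexCoord2f(1.0f, 0.0f); glVertex3f(1.0f, 0.0f, 0.0f);
      glTexCoord2f(0.5f, 1.0f); glVertex3f(0.5f, 1.0f, 0.0f);
    glEnd();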

57
Controlling Different Parameters
  • The texels in the texture map may be
    interpreted as many different things. For
    example
  • As colors in RGB or RGBA format
  • As grayscale intensity
  • As alpha values only
  • The data can be applied to the polygon in many
    different ways
  • Replace: replace the polygon color with the
    texture color
  • Modulate: multiply the polygon color by the
    texture color or intensity
  • Similar to compositing: composite the texture with the
    base color using an operator

58
Texture Stuff
  • Texture must be in fast memory - it is accessed
    for every pixel drawn
  • If you exceed it, performance will degrade
    horribly
  • There are functions for managing texture memory
  • Skilled artists can pack textures for different
    objects into one map
  • Texture memory is typically limited, so a range
    of functions are available to manage it
  • Specifying texture coordinates can be annoying,
    so there are functions to automate it
  • Sometimes you want to apply multiple textures to
    the same point: multitexturing is now supported in
    some hardware

59
Yet More Texture Stuff
  • There is a texture matrix in OpenGL: apply a
    matrix transformation to texture coordinates
    before indexing the texture
  • There are image processing operations that can
    be applied to the pixels coming out of the
    texture
  • There are 1D and 3D textures
  • Mapping works essentially the same
  • 3D is used in visualization applications, such as
    visualizing MRI or other medical data
  • 1D saves memory if the texture is inherently 1D,
    like stripes

60
Procedural Texture Mapping
  • Instead of looking up an image, pass the texture
    coordinates to a function that computes the
    texture value on the fly
  • RenderMan, the Pixar rendering language, does
    this
  • Available in a limited form with vertex shaders
    on current generation hardware
  • Advantages
  • Near-infinite resolution with small storage cost
  • Has the disadvantage of being slow in many cases

61
Other Types of Mapping
  • Bump-mapping computes an offset to the normal
    vector at each rendered pixel
  • No need to put bumps in geometry, but silhouette
    looks wrong
  • Displacement mapping adds an offset to the
    surface at each point
  • Like putting bumps on geometry, but simpler to
    model
  • All are available in software renderers such as
    RenderMan-compliant renderers
  • All these are becoming available in hardware

62
Bump Mapping
Textures can be used to alter the surface normal
of an object. This does not change the actual
shape of the surface -- we are only shading it as
if it were a different shape! This technique is
called bump mapping. The texture map is treated
as a single-valued height function. The value of
the function is not actually used, just its
partial derivatives. The partial derivatives tell
how to alter the true surface normal at each
point on the surface to make the object appear as
if it were deformed by the height function.
Since the actual shape of the object does not
change, the silhouette edge of the object will
not change. Bump Mapping also assumes that the
Illumination model is applied at every pixel (as
in Phong Shading or ray tracing).
[Figures: swirly bump map; sphere with diffuse texture plus bump map;
sphere with diffuse texture only]
63
Bump mapping
Compute bump map partials by numerical
differentiation
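
A minimal C sketch of such numerical differentiation on a height map (the
array dimensions W and H are hypothetical; only interior texels are
handled):

    #define W 256   /* hypothetical texture width  */
    #define H 256   /* hypothetical texture height */

    /* Central-difference partial derivatives of a height map h(s, t). */
    void bump_partials(const float height[H][W], int s, int t,
                       float *dh_ds, float *dh_dt)
    {
        *dh_ds = 0.5f * (height[t][s + 1] - height[t][s - 1]);
        *dh_dt = 0.5f * (height[t + 1][s] - height[t - 1][s]);
    }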
64
Bump mapping derivation
65
Bump Map Examples
[Figures: bump map; cylinder with diffuse texture map; cylinder with
texture map plus bump map]
66
Displacement Mapping
We use the texture map to actually move the
surface point. This is called displacement
mapping. How is this fundamentally different than
bump mapping?
The geometry must be displaced before visibility
is determined.
67
Readings
  • Textbook: Sections 16.1-16.3
  • OpenGL Programming Guide: Chapters 5 and 9.

68
Better Illumination Models
  • Blinn-Torrance-Sparrow (1977)
  • isotropic collection of planar microscopic facets
  • Cook-Torrance (1982)
  • Add wavelength dependent Fresnel term
  • He-Torrance-Sillion-Greenberg (1991)
  • adds polarization, statistical microstructure,
    self-reflectance
  • Very little of this work has made its way into
    graphics H/W.

69
Cook-Torrance Illumination
  • Iλ,a - ambient light intensity
  • ka - ambient surface reflectance
  • Iλ,i - luminous intensity of light source i
  • ks - percentage of light reflected specularly
    (note that the terms sum to one)
  • kd - diffuse reflectivity
  • li - vector to light source i
  • n - average surface normal at the point
  • D - the distribution of microfacet orientations
  • G - the masking and shadowing effects between the
    microfacets
  • Fλ(θi) - Fresnel conductance term, related to the
    material's index of refraction
  • v - vector to viewer

70
Microfacet Distribution Function
  • Statistical model of the microfacet variation in
    normal direction
  • Based on the Beckmann distribution function
  • Consistent with the surface variations of rough
    surfaces
  • m - the root-mean-square slope of the
    microfacets
  • Small m (e.g., 0.2): fairly smooth surface
  • Large m (e.g., 0.7): fairly rough surface
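
For reference, the Beckmann distribution used in the Cook-Torrance model
can be written as

    D(α) = 1 / (m² cos⁴ α) · exp( −(tan α / m)² )

where α is the angle between the surface normal n and the half-vector h.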

71
Beckmann's Distribution
72
Geometric Attenuation Factor
  • The geometric attenuation factor G accounts for
    microfacet shadowing.
  • G is in the range from 0 (total shadowing) to 1 (no
    shadowing).
  • There are many different ways that an incoming
    beam of light can interact with the surface
    locally.
  • The entire beam can simply reflect.

73
Blocked Reflection
  • A portion of the out-going beam can be blocked.
  • This is called masking.

74
Blocked Beam
  • A portion of the incoming beam can be blocked.
  • Cook called this self-shadowing.

75
Geometric Attenuation Factor
  • In each case, the geometric configuration can be
    analyzed to compute the percentage of light that
    actually escapes from the surface. The geometric
    factor takes the case in which the least light escapes.
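
The standard Cook-Torrance form of this factor is

    G = min{ 1,  2(n·h)(n·v) / (v·h),  2(n·h)(n·l) / (v·h) }

where h is the half-vector between the light direction l and the view
direction v.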

76
Fresnel Effect
  • At a very sharp (grazing) angle, surfaces like a
    book, a wooden table top, concrete, and plastic can
    almost become mirrors.

77
Fresnel Reflection
  • The Fresnel term results from a complete analysis
    of the reflection process while considering light
    as an electromagnetic wave.
  • The behavior of reflection depends on how the
    incoming electric field is oriented relative to
    the surface at the point where the field makes
    contact.

78
Fresnel Reflection
  • The Fresnel effect is wavelength dependent. Its
    behavior is determined by the index of refraction
    of the material.
  • The Fresnel effect accounts for the color change
    of the specular highlight as a function of the
    angle of incidence of the light source,
    particularly on metals (conductors).
  • It also explains why most surfaces approximate
    mirror reflectors when the light strikes
    them at a grazing angle.

79
Reflectance of Metal
80
Reflectance of Dielectrics
  • Non-conducting materials, e.g., glass and plastics.

81
Schlick Approximation
  • To calculate F for every angle, we can use
    measurements of F0, the value of F at normal
    incidence.
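
Schlick's approximation (a standard form, stated here for reference) is

    F(θ) ≈ F₀ + (1 − F₀)(1 − cos θ)⁵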

82
Index of Refraction
83
Remaining Hard Problems
  • Reflective Diffraction Effects
  • thin films
  • feathers of a blue jay
  • Anisotropy
  • brushed metals
  • strands pulled materials
  • satin and velvet cloths