Achromatic light (black and white)
Perceptual Issues
Humans can discriminate about 0.5 minute of arc
- At the fovea, so only in the center of view (20/20 vision)
- At 1m, about 0.2mm (the dot pitch of monitors)
- Limits the required number of pixels
Humans can discriminate about 8 (maybe 9) bits of intensity

Intensity Perception
Humans are actually tuned to the ratio of intensities
- So we should choose intensity levels in equal ratios, e.g. 0, 0.25, 0.5 and 1
- Most computer graphics ignores this and uses equal steps: 0, 0.33, 0.66 and 1

Dynamic Range
Humans can see contrast at very low and very high light levels, but cannot see all levels all the time
- We use adaptation to adjust
- High range even at one adaptation level
Film has a low dynamic range (about 100:1); monitors are even worse (about 70:1)

Display on a Monitor
Voltage to display intensity is not linear (digital or analog)
Gamma control (gamma correction): I_display = I_to-monitor^gamma, so I_to-monitor = I_display^(1/gamma)
- gamma is controlled by the user, and should be matched to a particular monitor
- Typical values are between 2.2 and 2.5
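
A minimal sketch of the gamma correction above in Python; the value 2.2 is just the typical figure quoted on the slide:

```python
def gamma_encode(intensity, gamma=2.2):
    """Compensate for a monitor's nonlinear response.

    The display produces I_display = I_sent ** gamma, so we send
    I_sent = I_desired ** (1 / gamma) to get the intensity we want.
    """
    return intensity ** (1.0 / gamma)

# A mid-gray of 0.5 must be sent as ~0.73 to appear half as bright.
print(gamma_encode(0.5))  # ~0.7297
```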
Color Spaces
The principle of trichromacy means the displayable colors are all the linear combinations of primaries
Taking linear combinations of R, G and B defines the RGB color space
- The range of perceptible colors generated by adding some part each of R, G and B
- If R, G and B correspond to a monitor's phosphors (monitor RGB), the space is the range of colors displayable on the monitor
RGB
- Covers only a small range of all the colors humans can perceive (e.g. no true magenta on a monitor)
- It is not easy for humans to say how much of R, G and B to use to make a given color
- Perceptually non-linear: two points a certain distance apart in one part of the space may be perceptually different, while two other points, the same distance apart in another part of the space, may be perceptually the same

CIE-XYZ and CIE-xy
Color matching functions are everywhere positive
- Cannot produce the primaries (that would need negative light!), but can still describe a color by its matching weights
- The Y component is intended to correspond to intensity
Most frequently we set x = X/(X+Y+Z) and y = Y/(X+Y+Z)
- x, y are coordinates on a constant-brightness slice (see the sketch at the end of this color-space section)
- Linearity: colors obtainable by mixing A and B lie on the line segment AB
- Monochromatic (spectral) colors run along the spectral locus
- Dominant wavelength: the spectral color that can be mixed with white to match C
- Purity: (distance from white to C) / (distance from white to the spectral locus)
- Wavelength and purity can be used to specify a color
- Complementary colors: colors that can be mixed with C to get white
Linear transform between color spaces:
[x, y, z]^T = [[x_r, x_g, x_b], [y_r, y_g, y_b], [z_r, z_g, z_b]] [r, g, b]^T
Gamut: the range of colors that can be produced by a space

YIQ: mainly used in television
- Y is (approximately) intensity; I and Q are chromatic properties
- Linear color space: there is a linear transform from XYZ (and RGB) to YIQ
- I and Q can be transmitted with low bandwidth

HSV
- Hue: the color family (red, yellow, blue)
- Saturation: the purity of a color (white is totally unsaturated)
- Value: the intensity of a color (white is intense, black isn't)
- The space looks like a cone; parts of the cone can be mapped to RGB
- HSV is not a linear space: there is no linear transform taking RGB to HSV

Uniform Color Spaces
Distance in the color space corresponds to perceptual distance
- Only works for local distances ("how far is red from green?" is hard to define)
- MacAdam's ellipses define perceptual distance
- CIE u'v' is a non-linear transform; color differences are more uniform:
  [u', v']^T = (1 / (X + 15Y + 3Z)) [4X, 9Y]^T

Subtractive Mixing
Cyan = White - Red, Magenta = White - Green, Yellow = White - Blue
- Linear transform between XYZ and CMY
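
A small sketch of the xy chromaticity computation above; the RGB-to-XYZ matrix is the standard sRGB/D65 one, used here only as an example of the linear-transform form on the slide:

```python
import numpy as np

# Example RGB -> XYZ matrix (sRGB primaries, D65 white); any monitor's
# phosphor matrix has the same 3x3 linear form.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def chromaticity(rgb):
    """Map linear RGB to CIE xy chromaticity coordinates."""
    X, Y, Z = RGB_TO_XYZ @ np.asarray(rgb, dtype=float)
    s = X + Y + Z
    return X / s, Y / s   # x, y: a constant-brightness slice

print(chromaticity([1.0, 1.0, 1.0]))  # near the D65 white point (~0.3127, 0.3290)
```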

Signal Processing
Spatial domain: a signal is given as values at points in space
Frequency domain: a signal is given as values of frequency components
A periodic signal can be represented as a sum of sine and cosine waves with harmonic frequencies
A non-periodic function can be represented as a sum of sines and cosines of (possibly) all frequencies
F(omega) is the spectrum of f(x)
- The spectrum is how much of each frequency is present in the function

Fourier Transform pairs:
- Box: f(x) = 1 for |x| < 1/2, 0 otherwise  ->  F(omega) = sin(pi*omega)/(pi*omega) = sinc(omega)
- Cos: f(x) = cos(x)  ->  F(omega) = delta(omega - 1) + delta(omega + 1)
- Sin: f(x) = sin(x)  ->  F(omega) = delta(omega - 1) - delta(omega + 1)
- Impulse: f(x) = delta(x)  ->  F(omega) = 1
- Shah (impulse train)  ->  Shah
- Gaussian: (1/sqrt(2*pi)) exp(-x^2/2)  ->  Gaussian

Qualitative Properties
Sharp edges give high frequencies; smooth variations give low frequencies
A function is bandlimited if its spectrum has no frequencies above a maximum limit
(sin and cos are bandlimited; the box and the Gaussian are not)
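
A quick numerical check of the box -> sinc pair, assuming numpy is available:

```python
import numpy as np

# Sample a unit-width box on [-8, 8) and look at its spectrum.
N = 1024
x = np.linspace(-8, 8, N, endpoint=False)
box = (np.abs(x) < 0.5).astype(float)

spectrum = np.fft.fftshift(np.fft.fft(box))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=x[1] - x[0]))

# The magnitude follows |sin(pi f)/(pi f)|: near-zero at f = 1, 2, 3, ...
peak = np.abs(spectrum).max()
for f in (1.0, 2.0):
    k = np.argmin(np.abs(freqs - f))
    print(f, np.abs(spectrum)[k] / peak)  # ~0: the nulls of the sinc
```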
Unary Operators
Darken: makes an image darker (or lighter) without affecting its opacity
Dissolve: makes an image transparent without affecting its color
PLUS: c_o = c_f + c_g

Obtaining Alpha Values
1. Hand generate (paint a grayscale image)
2. Automatically create by segmenting an image into foreground and background
- Blue-screening is the analog method
- Why blue? It is the color component least present in the human body
3. Store pixel depth instead of alpha
- Compositing can then truly take the foreground and background into account
Transformations
Coordinate systems are used to describe the locations of points in space
Multiple coordinate systems make graphics algorithms easier to understand and implement
- Some operations are easier in one coordinate system than in another (the box example)
Transformations convert points between coordinate systems

2D Affine Transformations
Why? Affine transformations are linear
- Transforming all the individual points on a line gives the same set of points as transforming the endpoints and joining them
- Interpolation is the same in either space
2D translation, 2D scaling, 2D rotation, x-axis shear, reflection about the x axis

Rotating About An Arbitrary Point
Say you wish to rotate about the point (a,b)
- Translate such that (a,b) is at (0,0): x1 = x - a, y1 = y - b
- Rotate: x2 = (x-a)cos(theta) - (y-b)sin(theta), y2 = (x-a)sin(theta) + (y-b)cos(theta)
- Translate back again: xf = x2 + a, yf = y2 + b
Scaling an object about an arbitrary point works the same way: translate, scale, and translate back

Homogeneous Coordinates
Use three numbers to represent a point: (x,y) = (wx, wy, w) for any constant w != 0
- Typically, (x,y) becomes (x,y,1)
- Translation can now be done with matrix multiplication!
Advantages
1. Unified view of transformation as matrix multiplication (easier in hardware and software)
2. To compose transformations, simply multiply matrices (see the sketch below)
3. Allows for non-affine transformations: perspective projections, bends, tapers, many others
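
A minimal sketch (Python with numpy) of composing homogeneous 2D matrices to rotate about an arbitrary point, as described above:

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def rotate_about(theta, a, b):
    # Compose: translate (a,b) to the origin, rotate, translate back.
    return translate(a, b) @ rotate(theta) @ translate(-a, -b)

p = np.array([2.0, 1.0, 1.0])          # the point (2,1) in homogeneous form
M = rotate_about(np.pi / 2, 1.0, 1.0)  # 90 degrees about (1,1)
print(M @ p)                           # -> [1. 2. 1.]
```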
3D Rotation
Rotation is about an axis in 3D passing through the origin
Any matrix with an orthonormal top-left 3x3 sub-matrix is a rotation
- Rows are mutually orthogonal (zero dot product)
- Determinant is 1
- Columns are also orthogonal, and the transpose is equal to the inverse
Problems
Specifying a rotation really only requires 3 numbers
- Axis (a unit vector, requires 2) and the angle to rotate
The rotation matrix has a large amount of redundancy
- Orthonormal constraints reduce the degrees of freedom back down to 3
- Keeping the matrix orthonormal is difficult when transformations are combined
Alternative Representations
1. Specify the axis and the angle
- Hard to compose multiple rotations
2. Euler angles: specify how much to rotate about X, then how much about Y, then how much about Z
- Hard to think about, and hard to compose
3. Specify the axis, scaled by the angle
- Only 3 numbers, sometimes called the exponential map
4. Quaternions
- A 4-vector related to the axis and angle, of unit magnitude (rotation about axis (nx,ny,nz) by angle theta)
- Easy to compose
- Easy to go to/from a rotation matrix
- Only normalized quaternions represent rotations, but you can normalize them just like vectors, so it isn't a problem
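
A small sketch of the quaternion representation, assuming the common (w, x, y, z) component order; composition is quaternion multiplication:

```python
import numpy as np

def quat_from_axis_angle(axis, theta):
    """Unit quaternion (w,x,y,z) for a rotation about `axis` by theta."""
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

def quat_mul(q, r):
    """Compose rotations: applying r then q is the product q * r."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Two 45-degree turns about z compose into one 90-degree turn.
qz45 = quat_from_axis_angle([0, 0, 1], np.pi / 4)
q = quat_mul(qz45, qz45)
q /= np.linalg.norm(q)  # re-normalize, as the slide suggests
print(q)  # ~ [0.7071 0 0 0.7071], i.e. 90 degrees about z
```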
Filtering and the Convolution Theorem
Convolution in the spatial domain = multiplication in the frequency domain
Multiplication in the spatial domain = convolution in the frequency domain

Aliasing
If the sampling rate is too low, high frequencies get reconstructed as lower frequencies
- High frequencies from one copy of the spectrum get added to low frequencies from another
Poor reconstruction also results in aliasing
Nyquist frequency: the minimum frequency with which functions must be sampled, twice the maximum frequency present in the signal

Filtering Algorithms
Box filter: spatial box, frequency sinc
- Box filters smooth by averaging neighbors
- In the frequency domain, keeps low frequencies and attenuates (reduces) high frequencies
Bartlett filter: spatial triangle (a box convolved with a box), frequency sinc^2
- Attenuates high frequencies more than a box
Gaussian filter
- Attenuates high frequencies even further
- In 2D, rotationally symmetric, so fewer artifacts
1D to 2D filters: multiply two 1D masks together using the outer product
- M is the 2D mask, m is the 1D mask: M = m m^T

High-Pass Filters can be obtained from a low-pass filter
- Subtracting the smoothed image from the original subtracts out the low frequencies and leaves the high frequencies
- High-pass masks come from matrix subtraction
Edge Enhancement
Adding high frequencies back into the image enhances edges:
Image' = Image + (Image - smooth(Image))

Fixing Negative Values
Truncate: chop off values below min or above max
Offset: add a constant to move the min value to 0
Re-scale: rescale the image values to fill the range (0, max)
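
A sketch of the outer-product mask construction and the edge-enhancement formula above, using a small Bartlett mask and a deliberately naive valid-region convolution:

```python
import numpy as np

# Build a 2D Bartlett (triangle) mask from a 1D one via the outer product.
m = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
m /= m.sum()
M = np.outer(m, m)               # 2D low-pass mask, still sums to 1

def convolve2d(img, mask):
    """Tiny valid-region convolution, enough to demonstrate the idea."""
    h, w = mask.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * mask)
    return out

img = np.random.rand(64, 64)
smooth = convolve2d(img, M)
crop = img[2:-2, 2:-2]                    # align original with the valid region
edges_enhanced = crop + (crop - smooth)   # Image + (Image - smooth(Image))
```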
Color
Light and Color
The frequency of light determines its color
- Frequency, wavelength and energy are all related
Describe incoming light by a spectrum
- The intensity of light at each frequency
- Wavelengths in the visible spectrum lie between the infra-red (700nm) and the ultra-violet (400nm)
Red paint absorbs green and blue wavelengths, and reflects red wavelengths, so you see a red appearance

Sensors
A sensor is defined by its response to a frequency distribution
- Expressed as sensitivity vs. wavelength, sigma(lambda)
- For each unit of energy at the given wavelength, how much voltage/impulses/whatever the sensor provides
To compute the response, take the integral of sigma(lambda) E(lambda) d(lambda)
- E(lambda) is the incoming energy at the particular wavelength

Changing Response
Take a white sensor and change it into a red sensor? Use red filters
You cannot change a red sensor into a white sensor
Assume your eye is a white sensor. Why can you see a black light (UV) shining on a surface?
- Such surfaces are fluorescent: they change the frequency of light
- Your eye is not really a white sensor, it just approximates one

Seeing in Color
Rods work at low light levels and do not see color
Cones come in three types (experimentally and genetically proven), each responding in a different way to frequency distributions
- L-cones: roughly red
- M-cones: roughly green
- S-cones: roughly blue

Color Perception
Colors may be perceived differently, affected by:
1. Other nearby colors
2. Adaptation to previous views
3. State of mind

Color Deficiency
Red-green color blindness in men
- Red and green receptor genes are carried on the X chromosome
- Most such men have two red genes or two green genes
Other color deficiencies
- Anomalous trichromacy, achromatopsia, macular degeneration
- Deficiency can be caused by the central nervous system, by optical problems in the eye, by injury, or by absent receptors

Trichromacy
Experiment
- Show a target color beside a user-controlled color
- The user has knobs that add primary sources to their color
- Ask the user to match the colors
- It is possible to match almost all colors using only three primary sources: the principle of trichromacy
Sometimes, one has to add light to the target
- This was how experimentalists knew there were 3 types of cones
The Math
Choose primaries A, B and C (can be R, G, B or r, g, b)
Colors: M = aA + bB + cC (additive matching)
- Gives a color description system: two people who agree on A, B and C need only supply (a, b, c) to describe a color
Some colors require subtractive matching: M + aA = bB + cC
- Interpret this as (-a, b, c)
- A problem for reproducing colors: you cannot suck light back into a display device

Color Matching Functions
Given a spectrum, how do we determine how much of each of R, G and B to use to match it?
For a light of unit intensity at each wavelength, ask people to match it with the R, G and B primaries
The result is three functions, r(lambda), g(lambda) and b(lambda): the RGB color matching functions
E(lambda) is the amount of energy at each wavelength; E is the color due to E(lambda)
The RGB matching functions describe how much of each primary is needed to match one unit of energy at each wavelength
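
The response integral above becomes a discrete sum once the spectrum is sampled. A sketch with made-up sample data (the sensitivity curve here is illustrative, not a real cone response):

```python
import numpy as np

# Wavelength samples across the visible range (nm).
lam = np.arange(400, 701, 10, dtype=float)

# Hypothetical sensor sensitivity sigma(lambda) and incoming energy E(lambda).
sigma = np.exp(-((lam - 560.0) / 50.0) ** 2)   # a bump peaked at 560nm
E = np.ones_like(lam)                          # flat ("white") spectrum

# response = integral of sigma(lambda) E(lambda) d(lambda), as a Riemann sum
d_lambda = lam[1] - lam[0]
response = np.sum(sigma * E) * d_lambda
print(response)
```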

Color Quantization
Indexed Color
Assume k bits per pixel (typically 8)
Define a color table containing 2^k colors (24 bits per color)

Quantization Error
- Define an error for each color c in the original image: d(c, c'), where c' is the color c maps to under the quantization
- e.g. squared distance in RGB, or distance in CIE u'v' space
- Sum up the error over all the pixels

Uniform Quantization: break the color space into uniform cells
- Poor on smooth gradients (Mach bands)
Populosity: build a color histogram, counting the number of times each color appears
- Choose the n most commonly occurring colors (typically group colors into small cells first)
- Map other colors to the closest chosen color
- Problem: ignores under-represented but important colors
Median Cut: recursively
- Find the longest dimension (r, g or b)
- Choose the median of the long dimension as a color to use
- Split along the median plane, and recurse on both halves
This algorithm builds a kD-tree, a common form of spatial data structure; it divides up the space in the most useful way (a sketch follows this section)

Mach bands
- The difference between two colors is more pronounced when they are side by side and the boundary is smooth
- This emphasizes boundaries between colors, even if the color difference is small
- Rough boundaries are averaged by our vision system to give smooth variation
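
A compact sketch of median cut over an (N, 3) array of RGB pixels; the recursion depth picks 2^depth palette colors:

```python
import numpy as np

def median_cut(pixels, depth):
    """Return a palette of 2**depth colors for an (N, 3) pixel array."""
    if depth == 0 or len(pixels) == 0:
        return [pixels.mean(axis=0)] if len(pixels) else []
    # Find the longest dimension (r, g or b) of this cell.
    dim = np.argmax(pixels.max(axis=0) - pixels.min(axis=0))
    # Split at the median along that dimension and recurse on both halves.
    order = pixels[:, dim].argsort()
    half = len(pixels) // 2
    return (median_cut(pixels[order[:half]], depth - 1) +
            median_cut(pixels[order[half:]], depth - 1))

pixels = np.random.randint(0, 256, size=(1000, 3)).astype(float)
palette = median_cut(pixels, depth=3)   # 8 representative colors
print(np.round(palette).astype(int))
```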
Image Warping
A mapping from the points in one image to points in another
- f tells where in the new image to put the data from x in the old image
Reducing Image Size
Warp function f(x) = kx, k > 1
- Problem: more than one input pixel maps to each output pixel
- Solution: apply the filter, but only at the desired output locations
Enlarging Images
Warp function f(x) = kx, k < 1
- Problem: we have to create pixel data
- Solution: apply the filter at intermediate pixel locations; new pixels are interpolated from old ones
- May want to edge-enhance images after enlarging
Image Morphing
A process to turn one image into another
- Define a path from each point in the original image to its destination in the output image
- Animate points along the paths
Filtering in Color
Simply filter each of R, G and B separately
Re-scaling and truncating are more difficult to implement
- Adjusting each channel separately may change the color significantly
- Adjusting intensity while keeping hue and saturation may be best, although some loss of saturation is probably OK
Dithering
Why?
1. Adding noise along boundaries can remove Mach bands
2. General perceptual principle: replace structured errors with noisy ones and people complain less
Color to grayscale: I = 0.299R + 0.587G + 0.114B

Threshold Dithering (naive)
If the intensity < 0.5, replace with black, else replace with white
- Not good for unbalanced brightness
Constant Brightness Threshold
Threshold so as to keep the overall image brightness the same
- Compute the average intensity over the image and use a threshold that gives that average
- e.g. if the average intensity is 0.6, use a threshold that is higher than 40% of the pixels and lower than the remaining 60%
- Not good when the brightness range is small
Random Modulation
Add a random amount to each pixel before thresholding
- Not good for black and white, but OK for more colors
Ordered Dithering
Define a threshold matrix
- Use a different threshold for each pixel of the block
- Compare each pixel to its own threshold
Clustered dithering looks like newsprint; dot dispersion looks random, which looks better
Pattern Dithering
Compute the intensity of each sub-block and index a pattern
- Each pixel is determined only by the average intensity of its sub-block
Floyd-Steinberg Dithering (a sketch follows this section)
Start at one corner and work through the image pixel by pixel, thresholding each pixel
- Usually top to bottom in a zig-zag
- Compute the error at that pixel, and propagate the error to neighbors by adding some proportion of it to each unprocessed neighbor
Color Dithering
The same techniques can be applied, with some modification
- For Floyd-Steinberg, the error is the difference from the nearest color in the color table
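
A sketch of Floyd-Steinberg on a grayscale image in [0,1], using the standard 7/16, 3/16, 5/16, 1/16 error weights (simple left-to-right traversal rather than the zig-zag the slide mentions):

```python
import numpy as np

def floyd_steinberg(gray):
    """Dither a grayscale image in [0,1] down to 0/1 pixels."""
    img = gray.astype(float).copy()
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            img[y, x] = new
            err = old - new
            # Diffuse the error onto unprocessed neighbors, (7,3,5,1)/16.
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < W:
                    img[y + 1, x + 1] += err * 1 / 16
    return img

ramp = np.tile(np.linspace(0, 1, 64), (16, 1))
print(floyd_steinberg(ramp).mean())  # ~0.5: overall brightness is preserved
```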
Compositing
Combines components from two or more images to make a new image
Mattes: an image that shows which parts of another image are foreground objects
To insert an object into a background
- Call the image of the object the source; put the background into the destination
- For all the source pixels, if the matte is white, copy the pixel, otherwise leave it unchanged
Blue Screen
Photograph/film the object in front of a blue background, then consider all the blue pixels in the image to be the background

Alpha
Basic idea: encode opacity information in the image
Add an extra alpha channel to each image: RGBA
- alpha = 1 implies full opacity at a pixel
- alpha = 0 implies a completely clear pixel
Pre-Multiplied Alpha
Instead of (R, G, B, alpha), store (alpha*R, alpha*G, alpha*B, alpha)
- To display and do color conversions, must extract RGB by dividing out alpha
- alpha = 0 is always black
- Some loss of precision as alpha gets small, but generally not a problem

Basic Compositing Operations
The different compositing operations define which image "wins" in each sub-region of the composite
At each pixel, combine the pixel data from f and the pixel data from g with the equation c_o = F c_f + G c_g
- F and G describe how much of each input image survives, c_f and c_g are pre-multiplied pixels, and all four channels are calculated
Over:    F = 1,           G = 1 - alpha_f  (f covers g)
Inside:  F = alpha_g,     G = 0            (only parts of f inside g contribute)
Outside: F = 1 - alpha_g, G = 0            (only parts of f outside g contribute)
Atop:    F = alpha_g,     G = 1 - alpha_f  (over, but restricted to where there is g)
Xor:     F = 1 - alpha_g, G = 1 - alpha_f  (f where there is no g, and g where there is no f)
Clear:   F = 0,           G = 0            (fully transparent)
Set:     F = 1,           G = 0            (copies f into the composite)
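
A sketch of the "over" operator on premultiplied RGBA pixels, following the table above:

```python
import numpy as np

def over(f, g):
    """'Over' for premultiplied RGBA arrays: f covers g.

    c_o = F*c_f + G*c_g with F = 1, G = 1 - alpha_f, applied to all
    four channels (the alpha channel composites the same way).
    """
    alpha_f = f[..., 3:4]
    return f + (1.0 - alpha_f) * g

# 50%-opaque red over opaque blue (premultiplied form).
red = np.array([0.5, 0.0, 0.0, 0.5])   # (aR, aG, aB, a) with a = 0.5
blue = np.array([0.0, 0.0, 1.0, 1.0])
print(over(red, blue))                  # -> [0.5 0.  0.5 1. ]
```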
Viewing Transformation and the Graphics Pipeline
Local Coordinate Space
Defining individual objects in a local coordinate system is easy
- Define an object in a local coordinate system
- Use it multiple times by copying it and transforming it into the global system
- This is the only effective way to have libraries of 3D objects, and such libraries do exist
Global Coordinate System
Everything in the world is transformed into one coordinate system: the global coordinate system
- Some things, like dashboards, may be defined in a different space, but we'll ignore that
- Lighting (locations, brightness and types), the camera, and some higher-level operations, such as advanced visibility computations, can be done here
View Space
Associate a set of axes with the image plane
- The image plane is the plane in space on which the image should appear, like the film plane of a camera
- One axis normal to the image plane, one "up" in it, and one "right" in it
- Some camera parameters are easy to define here (focal length, image size)
- Depth is represented by a single number in this space
3D Screen Space
A cube [-1,1] x [-1,1] x [-1,1]: the canonical view volume
- Parallel sides make many operations easier
Window Space (also called screen space)
Convert the virtual screen into real screen coordinates
- Drop the depth coordinate and translate
- The windowing system takes care of this

3D Screen to Window Transform
Windows are specified by an origin, width and height
- The origin is either the bottom-left or top-left corner, expressed as (x,y) on the total visible screen on the monitor or in the framebuffer
- This representation can be converted to (xmin, ymin) and (xmax, ymax)

Orthographic Projection
Orthographic projection projects all the points in the world along parallel lines onto the image plane
- Projection lines are perpendicular to the image plane
- Like a camera with infinite focal length
Simple Projection Example
The region of space that we wish to render is the view volume
- Assume the viewer is looking along -z, with x to the right and y up
- near: z = n; far: z = f (f < n)
- left: x = l; right: x = r (r > l)
- top: y = t; bottom: y = b (b < t)
General Projection Cases

Specifying a View
The center of the image plane: (cx, cy, cz)
A vector that points back toward the viewer: (dx, dy, dz)
- Normal to the image plane
A direction that we want to appear "up" in the image
- This vector does not have to be perpendicular to n
Size of the view volume: l, r, t, b, n, f
- Specified with respect to the image plane, not the world

View Space
Origin at the center of the image plane, (cx, cy, cz)
Normal vector: n = d/|d|, the normalized viewing direction
u = up x n, normalized; v = n x u

World to View Transformation
1. Translate the world so the origin is at (cx, cy, cz)
2. Rotate, such that:
(a) u in world space becomes (1,0,0) in view space
(b) v becomes (0,1,0)
(c) n becomes (0,0,1)
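
A sketch of building the view-space basis and world-to-view matrix from the recipe above (numpy, column-vector convention):

```python
import numpy as np

def world_to_view(center, d, up):
    """4x4 world-to-view matrix from image-plane center, back vector d, and up."""
    n = d / np.linalg.norm(d)               # normal, points back toward viewer
    u = np.cross(up, n)
    u /= np.linalg.norm(u)                  # "right" in the image plane
    v = np.cross(n, u)                      # "up" in the image plane
    R = np.eye(4)
    R[:3, :3] = np.vstack([u, v, n])        # rows send u, v, n to x, y, z
    T = np.eye(4)
    T[:3, 3] = -np.asarray(center, float)   # translate the center to the origin
    return R @ T                            # translate, then rotate

M = world_to_view(center=[0, 0, 5], d=[0, 0, 1], up=[0, 1, 0])
print(M @ np.array([0, 0, 5, 1]))   # image-plane center -> view-space origin
print(M @ np.array([1, 0, 5, 1]))   # world +x -> view +x
```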
Perspective Projection
Works like a pinhole camera
- Distant objects are smaller
- Parallel lines meet
Vanishing Points
Each set of parallel lines (each direction) meets at a different point: the vanishing point for that direction
- Classic artistic perspective is 3-point perspective
- Sets of parallel lines on the same plane lead to collinear vanishing points: the horizon for that plane
- A good way to spot faked images
Basic Perspective Projection
Assume x to the right, y up, and z back toward the viewer
Assume the origin of view space is at the center of projection
Define a focal distance d, and put the image plane there (note d is negative)
Perspective View Volume
The near and far planes are parallel to the image plane: z_v = n, z_v = f
The other planes all pass through the center of projection (the origin of view space)
- The left and right planes intersect the image plane in vertical lines
- The top and bottom planes intersect in horizontal lines
We want to map all the lines through the center of projection to parallel lines
(general perspective, then the complete perspective projection)

Near/Far and Depth Resolution
It may seem sensible to specify a very near clipping plane and a very far clipping plane, but it is a bad idea
- OpenGL only has a finite number of bits to store screen depth
- Too large a range reduces resolution in depth: the wrong thing may be considered "in front"
Always place the near plane as far from the viewer as possible, and the far plane as close as possible
Clipping
Parts of the geometry may lie outside the view volume
- The view volume maps to memory addresses, so out-of-view geometry generates invalid addresses
- Geometry outside the view volume also behaves very strangely under perspective projection
Clipping removes the parts of the geometry outside the view
Best done in screen space before the perspective divide (dividing out the homogeneous coordinate)
Clipping Points
A point is inside the view volume if it is on the inside of all the clipping planes
- The normals to the clip planes point inward, toward the visible stuff
Why is clipping done in canonical view space?
e.g. to check against the left plane:
- The x coordinate in 3D must be > -1
- In homogeneous screen space, this is the same as x_screen > -w_screen
In general, a point p is inside a plane if
- the plane is n_x x + n_y y + n_z z + d = 0, with (n_x, n_y, n_z) pointing inward
- and n_x p_x + n_y p_y + n_z p_z + d > 0

Sutherland-Hodgman Clipping
Clip the polygon against each edge of the clip region in turn
- Clip the polygon each time to the line containing the edge
- Only works for convex clip regions
To clip a polygon to a line/plane (see the sketch below):
- Consider the polygon as a list of vertices
- One side of the line/plane is considered inside the clip region, the other side is outside
- We rewrite the polygon one vertex at a time; the rewritten polygon will be the polygon clipped to the line/plane
- Check the start vertex: if inside, emit it, otherwise ignore it
- Then process each subsequent vertex, looking at the edge from the last vertex to the next:
  - edge crosses the clip line/plane from out to in: emit the crossing point, then the next vertex
  - edge crosses from in to out: emit the crossing point
  - edge goes from out to out: emit nothing
  - edge goes from in to in: emit the next vertex
Inside-Outside Testing and Finding Intersection Points
Use the parametric form for the edge between x1 and x2
- For planes of the form x = a (similar forms for y = a, z = a)
Inside/Outside in Screen Space
- In canonical screen space, the clip planes are x_s = +/-1, y_s = +/-1, z_s = +/-1
- Inside/outside reduces to comparisons before the perspective divide
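
A sketch of one Sutherland-Hodgman pass, clipping a 2D polygon against a single half-plane of the form x >= a (the full algorithm repeats this for each clip edge):

```python
def clip_halfplane(poly, a):
    """Clip a polygon (list of (x, y)) against the half-plane x >= a."""
    inside = lambda p: p[0] >= a
    out = []
    for i in range(len(poly)):
        prev, cur = poly[i - 1], poly[i]       # the edge prev -> cur
        if inside(cur):
            if not inside(prev):               # out -> in: emit crossing first
                out.append(intersect(prev, cur, a))
            out.append(cur)                    # in -> in, or the next vertex
        elif inside(prev):                     # in -> out: emit crossing only
            out.append(intersect(prev, cur, a))
    return out

def intersect(p, q, a):
    """Point where segment p-q crosses the vertical line x = a."""
    t = (a - p[0]) / (q[0] - p[0])
    return (a, p[1] + t * (q[1] - p[1]))

tri = [(0, 0), (4, 0), (4, 4)]
print(clip_halfplane(tri, 2))   # the triangle clipped to x >= 2
```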
Clipping Lines
Cohen-Sutherland
Works basically the same way as Sutherland-Hodgman
Clip the line against each edge of the clip region in turn
- If both endpoints are outside, discard the line and stop
- If both endpoints are in, continue to the next edge (or finish)
- If one is in and one out, chop the line at the crossing point and continue
Some cases lead to early acceptance or rejection
- If both endpoints are inside all edges: accept
- If both endpoints are outside one edge: reject
General rule: if a fast test can cover many cases, do it first
Details
Only need to clip the line against edges where one endpoint is out
Use an outcode to record each endpoint's in/out status with respect to each edge: one bit per edge, 1 if out, 0 if in
- Trivial reject: outcode(x1) AND outcode(x2) != 0
- Trivial accept: outcode(x1) OR outcode(x2) == 0
- Which edges to clip against? outcode(x1) XOR outcode(x2)

Liang-Barsky Clipping
Parametric clipping: view the line in parametric form and reason about the parameter values
- More efficient, as it does not compute coordinate values at irrelevant vertices
- Works for rectilinear clip regions in 2D or 3D
Clipping conditions on the parameter: the line is inside the clip region for values of t such that t*p_k <= q_k, k = 1..4 (for 2D), where edge 1 is the left, edge 2 the right, edge 3 the top and edge 4 the bottom
When p_k < 0, as t increases the line goes from outside to inside: entering
When p_k > 0, the line goes from inside to outside: leaving
When p_k = 0, the line is parallel to an edge (clipping is easy)
If there is a segment of the line inside the clip region, the sequence of infinite-line intersections must go enter, enter, leave, leave
Algorithm (a sketch follows this clipping section)
- Compute entering t values, which are q_k/p_k for each p_k < 0
- Compute leaving t values, which are q_k/p_k for each p_k > 0
- Parameter value for the small-t end of the line: t_small = max(0, entering t's)
- Parameter value for the large-t end: t_large = min(1, leaving t's)
- If t_small < t_large, there is a line segment: compute its endpoints by substituting the t values
Improvement (and the actual Liang-Barsky algorithm)
- Compute the t's for each edge in turn (some rejects occur earlier this way)

Weiler-Atherton Polygon Clipping
Faster than Sutherland-Hodgman for complex polygons
For a clockwise polygon
- For an out-to-in pair, follow the usual rule
- For an in-to-out pair, follow the clip edge
Easiest to start outside
General Clipping
Clipping a general polygon against a general polygon is quite hard
Outline of the Weiler algorithm
- Replace crossing points with vertices
- Double all edges and form linked lists of edges
- Change links at the vertices
- Enumerate polygon patches
Can use clipping to break a concave polygon into convex pieces; the main issue is inside-outside testing for edges
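
A sketch of the parametric clip described above, for a 2D axis-aligned clip region; it returns the clipped segment endpoints, or None if nothing is inside:

```python
def liang_barsky(p0, p1, xmin, xmax, ymin, ymax):
    """Clip segment p0-p1 to an axis-aligned rectangle, parametrically."""
    (x1, y1), (x2, y2) = p0, p1
    dx, dy = x2 - x1, y2 - y1
    # One (p_k, q_k) pair per edge: left, right, bottom, top.
    pq = [(-dx, x1 - xmin), (dx, xmax - x1), (-dy, y1 - ymin), (dy, ymax - y1)]
    t_small, t_large = 0.0, 1.0
    for pk, qk in pq:
        if pk == 0:
            if qk < 0:
                return None                   # parallel to an edge, outside it
        elif pk < 0:
            t_small = max(t_small, qk / pk)   # entering
        else:
            t_large = min(t_large, qk / pk)   # leaving
    if t_small > t_large:
        return None                           # no segment inside
    return ((x1 + t_small * dx, y1 + t_small * dy),
            (x1 + t_large * dx, y1 + t_large * dy))

print(liang_barsky((-2, 0), (4, 3), 0, 3, 0, 3))  # -> ((0.0, 1.0), (3.0, 2.5))
```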
Rasterizing
Drawing Points
When points are mapped into window coordinates, they could land anywhere, not just at a pixel center
The solution is the simple, obvious one:
1. Map to window space
2. Fill the closest pixel
3. Can also specify a radius: fill a square of that size, or fill a circle (the square is faster)
Drawing Lines
- For slope between -1 and 1, draw one pixel per column; otherwise, one pixel per row
- Constant brightness? Lines of the same length should light the same number of pixels (normally we ignore this)
- Anti-aliasing? (Getting rid of the jaggies)
Consider lines of the form y = mx + c, where m = dy/dx, 0 < m < 1, with integer endpoint coordinates
A variety of slow algorithms exist (why slow?):
- step in x, compute the new y at each step from the equation, rounding
- step in x, compute the new y at each step by adding m to the old y, rounding

Bresenham's Algorithm
Plot the pixel whose y-value is closest to the line
Given (x_i, y_i), choose either (x_i + 1, y_i + 1) or (x_i + 1, y_i)
Compute a decision variable
- A value that determines which pixel to draw
- Easy to update from one pixel to the next
Decision variable: d1 < d2 means p_i is negative, so the next point is (x_i + 1, y_i); d1 > d2 means p_i is positive, so the next point is (x_i + 1, y_i + 1)
Algorithm (for integers, slope between 0 and 1; see the sketch below)
- x = x1, y = y1, p = 2*dy - dx, draw (x, y)
- until x = x2, do x = x + 1, and:
  - if p > 0: y = y + 1, draw (x, y), p = p + 2*dy - 2*dx
  - if p < 0: y stays, draw (x, y), p = p + 2*dy
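
A direct transcription of the integer algorithm above into Python:

```python
def bresenham(x1, y1, x2, y2):
    """Integer line from (x1,y1) to (x2,y2), for slopes between 0 and 1."""
    dx, dy = x2 - x1, y2 - y1
    p = 2 * dy - dx                 # decision variable
    x, y = x1, y1
    pixels = [(x, y)]
    while x < x2:
        x += 1
        if p > 0:
            y += 1
            p += 2 * dy - 2 * dx    # stepped diagonally
        else:
            p += 2 * dy             # stepped horizontally
        pixels.append((x, y))
    return pixels

print(bresenham(0, 0, 8, 3))
```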
Visibility
Given a set of polygons, which is visible at each pixel? (Which is in front, etc.) Also called hidden surface removal
Known algorithms fall into two main classes
- Object precision: computations that decompose polygons in the world to solve the problem
- Image precision: computations at the pixel level
All the spaces in the viewing pipeline maintain depth, so we can work in any space
- World, view and canonical screen spaces might be used
- Depth can be updated on a per-pixel basis as we scan convert polygons or lines
Concerns:
1. Efficiency: it is slow to overwrite pixels, or to scan convert things that cannot be seen
2. Accuracy: the answer should be right, and behave well when the viewpoint moves
3. Complexity: object-precision visibility may generate many small pieces of polygon

Painter's Algorithm
- Choose an order for the polygons based on some choice (e.g. the depth to a point on the polygon)
- Render the polygons in that order, deepest one first
Difficulty
- Works for some important geometries (2.5D)
- Doesn't work in this form for most geometries

Z-buffer (Image Precision)
For each pixel on screen, keep at least two buffers
- The color buffer stores the current color of each pixel
- The Z-buffer stores, at each pixel, the depth of the nearest thing seen so far
Initialize this buffer to a value corresponding to the furthest point
As a polygon is filled in, compute the depth value of each pixel: if depth < z-buffer depth, fill in the pixel color and the new depth (see the sketch below)
Advantages
- Simple and now ubiquitous in hardware
- Computing the required depth values is simple
Disadvantages
- Depth quantization errors can be annoying
- Over-renders: worthless for very large collections of polygons
- Can't easily do transparency or filtering for anti-aliasing
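
A minimal sketch of the per-fragment z-buffer test (depths here are "smaller is nearer", matching the slide's depth < z-buffer comparison):

```python
import numpy as np

W, H = 640, 480
color = np.zeros((H, W, 3))
zbuf = np.full((H, W), np.inf)     # initialize to "furthest possible"

def plot(x, y, depth, rgb):
    """Z-buffer test for one scan-converted fragment."""
    if depth < zbuf[y, x]:         # nearer than anything seen so far?
        zbuf[y, x] = depth
        color[y, x] = rgb

plot(10, 10, 5.0, (1, 0, 0))   # red fragment at depth 5
plot(10, 10, 9.0, (0, 0, 1))   # blue fragment behind it: rejected
print(color[10, 10])           # -> [1. 0. 0.]
```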
A-buffer (Image Precision)
Handles transparent surfaces and a form of anti-aliasing
At each pixel, maintain a list of polygons sorted by depth, and a sub-pixel coverage mask for each polygon
- Sub-pixel mask: a matrix of bits saying which parts of the pixel are covered by the polygon
Algorithm (first pass): when drawing a pixel
- If the polygon is opaque and covers the pixel, insert it into the list, removing all polygons farther away
- If the polygon is transparent or only partially covers the pixel, insert it into the list, but don't remove farther polygons
Algorithm (rendering pass)
- At each pixel, traverse the buffer, using the polygon colors and coverage masks to composite
Advantages
- Does more than the Z-buffer: anti-aliasing, transparent surfaces
- The coverage mask idea can be used in other visibility algorithms
Disadvantages
- Not in hardware, and slow in software
- Still at heart a z-buffer: over-rendering and depth quantization

Scan Line Algorithm (Image Precision)
Assume polygons do not intersect one another
Observation: across any given scan line, the visible polygon can change only at an edge
Algorithm
- Fill all polygons simultaneously
- At each scan line, have all edges that cross the scan line in the active edge list (AEL)
- Keep a record of the current depth at the current pixel to decide which polygon is in front
Advantages
- Simple
- Potentially fewer quantization errors (more bits available for depth)
- Doesn't over-render (each pixel is drawn only once)
- Filter anti-aliasing can be made to work (we have information about all polygons at each pixel)
Disadvantages
- Invisible polygons clog the AEL and edge table
- The non-intersection criterion may be hard to meet

Depth Sorting (Object Precision, in view space)
Sort polygons on the depth of some point
Render from back to front (modifying the order on the fly)
Rendering: for the surface S with the greatest depth
- If it has no overlap in depth with other polygons, scan convert it
- Else, for overlaps in depth, test for overlaps in the image plane
- If none, scan convert and go to the next polygon
- If S and S' overlap in depth and in the image plane, swap the order and try again
- If S and S' have been swapped already, split and reinsert
Testing for overlaps: start drawing when the first of these conditions is met
- The x-extents or y-extents do not overlap
- S is behind the plane of S'
- S' is in front of the plane of S
- S and S' do not intersect in the image plane
Advantages
- Filter anti-aliasing works fine
- No depth quantization error
Disadvantages
- Over-rendering
- Potentially large number of splits: Theta(n^2) fragments from n polygons

Warnock's Area Subdivision
Exploits area coherence: small areas of an image are likely to be covered by only one polygon
What's in front in a given region?
1. A polygon is completely in front of everything else in that region
2. No surfaces project to the region (empty)
3. Only one surface is completely inside the region, overlaps the region, or surrounds the region
Algorithm
1. Start with the whole image
2. If one of the easy cases is satisfied, draw what's in front
3. Otherwise, subdivide the region and recurse
4. If a region is a single pixel, choose the surface with the smallest depth
Advantages
- No over-rendering
- Anti-aliases well: just recurse deeper to get sub-pixel information
Disadvantage
- The tests are quite complex and slow

BSP-Trees (Object Precision)
Building BSP-Trees
1. Choose a polygon (arbitrary)
2. Split its cell using the plane on which the polygon lies (may have to chop polygons in two: clipping!)
3. Continue until each cell contains only one polygon fragment
4. Splitting planes could be chosen in other ways, but there is no efficient optimal algorithm for building BSP trees
5. "Optimal" means the minimum number of polygon fragments in a balanced tree
BSP-Tree Rendering (a sketch follows this section)
Things on the opposite side of a splitting plane from the viewpoint cannot obscure things on the viewpoint's side
At each node (for back-to-front rendering):
1. Recurse down the side of the sub-tree that does not contain the viewpoint (test the viewpoint against the split plane to decide which side that is)
2. Draw the polygon in the splitting plane (paint over whatever has already been drawn)
3. Recurse down the side of the tree containing the viewpoint
Advantages
- One tree works for any viewing point
- Filter anti-aliasing and transparency work
- Can also render front to back, and avoid drawing back polygons that cannot contribute to the view
Disadvantages
- Can produce many small pieces of polygon
- Over-rendering
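
A sketch of the back-to-front traversal described above, with a toy node type; planes are stored as (normal, d) with dot(n, x) + d = 0:

```python
class BSPNode:
    def __init__(self, plane, polygon, back=None, front=None):
        self.plane = plane        # (normal, d) pair
        self.polygon = polygon    # the polygon lying in this node's plane
        self.back, self.front = back, front

def side(plane, point):
    """+1 if the point is on the front side of the plane, -1 otherwise."""
    n, d = plane
    return 1 if sum(ni * pi for ni, pi in zip(n, point)) + d >= 0 else -1

def render_back_to_front(node, viewpoint, draw):
    """Painter's-order traversal: far subtree, node polygon, near subtree."""
    if node is None:
        return
    if side(node.plane, viewpoint) > 0:
        near, far = node.front, node.back
    else:
        near, far = node.back, node.front
    render_back_to_front(far, viewpoint, draw)   # cannot obscure the near side
    draw(node.polygon)                           # polygon in the split plane
    render_back_to_front(near, viewpoint, draw)

tree = BSPNode(plane=((1, 0, 0), 0), polygon="P0",
               back=BSPNode(((0, 1, 0), 0), "P1"),
               front=BSPNode(((0, 1, 0), 0), "P2"))
render_back_to_front(tree, viewpoint=(5, 2, 0), draw=print)  # P1, P0, P2
```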
Filling Polygons
What is inside?
Non-exterior rule: a point is inside if every ray to infinity intersects the polygon
Non-zero winding number rule: draw a ray to infinity that does not hit a vertex; if the number of edges crossing in one direction is not equal to the number crossing the other way, the point is inside
Parity rule: draw a ray to infinity and count the number of edges that cross it; if even, the point is outside, if odd, it's inside
What is inside, for pixels? Assume sampling with an array of spikes: if the spike is inside, the pixel is inside
Ambiguous cases: what if a pixel lies on an edge?
- A problem, because if two polygons share a common edge, we don't want pixels on the edge to belong to both
- The ambiguity would lead to different results if the drawing order were different
- Rule: if (x + epsilon, y + epsilon) is in, then (x, y) is in

Exploiting Coherence
Scanline coherence: several contiguous pixels along a row tend to be in the polygon, a span of pixels
- Consider whole spans, not individual pixels
Edge coherence: the pixels required don't vary much from one span to the next
- Incrementally update the span endpoints

Sweep Fill Algorithms
Fill the bottom horizontal span of pixels; move up and keep filling
Have xmin, xmax for each span
Define floor(x) as the largest integer <= x, and ceiling(x) as the smallest integer >= x
Fill from ceiling(xmin) up to floor(xmax)
Uses an edge table and an active edge list

Dodging Floating Point
For an edge, m = dx/dy, which is a rational number
View x as x = xi + xn/dy, with xn < dy; store xi and xn
Then x -> x + m is given by:
- xn = xn + dx
- if xn >= dy: xi = xi + 1, xn = xn - dy
Advantages
- No floating point
- Can tell whether x is an integer or not, and get floor(x) and ceiling(x) easily, for the span endpoints
Anti-Aliasing
Recall: we can't sample and then accurately reconstruct an image that is not band-limited
- Infinite Nyquist frequency
- Attempting to sample sharp edges gives jaggies, or stair-stepped lines
Solution: band-limit by filtering (pre-filtering)
But when doing computer rendering, we don't have the original continuous function
Pre-Filtered Primitives
We can simulate filtering by rendering thick primitives with alpha and compositing
- Expensive, and requires the ability to do compositing
- Hardware method: keep sub-pixel masks tracking coverage
Post-Filtering (Supersampling)
Sample at a higher resolution than required for display, and filter the image down (see the sketch below)
Two basic approaches:
- Generate extra samples and filter the result (traditional supersampling)
- Generate multiple (say 4) images, each with the image plane slightly offset, then average the images
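
A sketch of the filter-down step, assuming a 2x supersampled grayscale image and a simple box filter:

```python
import numpy as np

def downsample_2x(img):
    """Box-filter a supersampled image down by 2x in each direction."""
    H, W = img.shape
    return img.reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

# "Render" at 2x resolution (here: a hard-edged disc), then filter down.
yy, xx = np.mgrid[0:128, 0:128]
hires = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)
lores = downsample_2x(hires)    # edge pixels get fractional coverage
print(np.unique(lores))         # values like 0, 0.25, 0.5, 0.75, 1
```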