BAE 790I / BMME 231 Fundamentals of Image Processing Class 23

Provided by: davidl69
1
BAE 790I / BMME 231 Fundamentals of Image Processing
Class 23
  • Human Vision
  • The eye
  • Adaptation
  • Luminance and brightness
  • MTF of the visual system
  • Range imaging

2
The Eye
Parts: cornea, iris, lens, vitreous, retina, optic nerve.
Image: www.macula.org
3
The Eye
Cornea: clear, outer protective and refractive layer.
4
The Eye
Iris: muscle fibers arranged to create an aperture.
5
The Eye
Lens: provides additional refraction; ciliary muscles can deform it to focus.
6
The Eye
Vitreous: fluid layer.
7
The Eye
Retina: arrangement of light-sensing neurons.
8
The Retina
  • Two types of photoreceptors:
  • Rods (about 100 million): long, thin, high sensitivity
  • Cones (about 6.5 million): short, thick, low sensitivity,
    color sensitive

9
Levels of Vision
  • Scotopic: low-light vision (governed by rods)
  • Photopic: bright-light vision (governed by cones)
  • Mesopic: in-between (rods and cones)

10
Adaptation
  • Vision is heavily based on adaptation
  • The eye adapts to the prevailing pattern of
    illumination over space and time
  • Rods adapt to low-light conditions better than
    cones
  • Adaptation is one aspect that makes the human
    visual system very nonlinear.

11
Luminance
  • Define luminance as

l = ∫ I(λ) V(λ) dλ

where I(λ) is the light intensity at wavelength λ and V(λ) is the
relative luminous efficiency.

V(λ) is an average (it depends on adaptation). It peaks between
roughly 540 and 580 nm, over a visible range of about 380-700 nm.
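As a sketch, the integral above can be approximated numerically with the trapezoidal rule. The sample values of V(λ) below are invented for illustration, not a real photometric table:

```python
import numpy as np

# Hypothetical coarse samples of the relative luminous efficiency V(lambda);
# real tables (e.g. the CIE photopic curve) are much finer.
wavelengths = np.array([380.0, 460.0, 540.0, 580.0, 620.0, 700.0])  # nm
V = np.array([0.0, 0.06, 0.95, 0.87, 0.38, 0.0])                    # illustrative

def luminance(intensity):
    """Approximate l = integral of I(lambda) V(lambda) d(lambda)
    with the trapezoidal rule over the sampled wavelengths."""
    y = intensity * V
    dx = np.diff(wavelengths)
    return float(np.sum(dx * (y[1:] + y[:-1]) / 2.0))

# A flat spectrum is weighted entirely by V(lambda):
flat = np.ones_like(wavelengths)
```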
12
Brightness
  • Brightness is perceived luminance.
  • It is a subjective quantity.
  • It allows relative judgments.

13
Simultaneous Contrast
The squares appear to have different
brightnesses, but they have equal luminance.
14
Perceived Backgrounds
When the object appears to be a single object, it
appears to have uniform brightness. When the
object appears to be two objects, each is
associated with its local background.
15
Perceived Backgrounds
The triangles have the same luminance, but may
appear to have different brightnesses depending
on the background they are associated with.
16
Weber's Law
  • At the luminance where an object is just noticeably different from
    its background, the contrast ratio ΔL/L is a constant.
  • The constant is about 0.02.
  • Therefore, we can only resolve about 50 gray levels.
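One way to check the 50-level figure: each distinguishable level is a fixed ratio (1 + 0.02) above the previous one, so the count of just-noticeable steps is logarithmic in the dynamic range. A minimal sketch:

```python
import math

def jnd_steps(l_min, l_max, weber_fraction=0.02):
    """Count just-noticeable luminance steps between l_min and l_max.

    Weber's law: each distinguishable level is (1 + weber_fraction)
    times the previous one, so the count is logarithmic in the
    dynamic range l_max / l_min.
    """
    return int(math.log(l_max / l_min) / math.log(1.0 + weber_fraction))

# Roughly an e-fold (~2.7:1) of luminance yields about 50 levels:
print(jnd_steps(1.0, math.e))  # -> 50
```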

17
Contrast
  • Contrast is the difference in luminance (or
    brightness) between an object and its background.
  • Contrast has many definitions.
  • Contrast is perceived logarithmically.

18
PSF of the Visual System
h(angle)
Low-pass characteristic Limits resolution
Negative side lobes cause edge enhancement
angle
19
Lateral Inhibition
  • Edges are perceived differently than they really
    are.
  • The visual system enhances edges.

A receptor provides a negative (inhibitory) contribution to the
response of its neighbors in the visual field.

(Figure: a row of receptors with inhibitory "-" connections feeding a
combined response.)
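A crude 1-D model of this inhibition (the kernel weights below are invented for illustration) reproduces the overshoot and undershoot that make an edge look enhanced:

```python
import numpy as np

# Center-surround response: an excitatory center minus inhibitory
# neighbors, a crude 1-D model of lateral inhibition. Weights sum to 1
# so flat regions are passed through unchanged.
kernel = np.array([-0.25, -0.5, 2.5, -0.5, -0.25])

edge = np.array([0.0] * 8 + [1.0] * 8)          # a luminance step
response = np.convolve(edge, kernel, mode="same")
# Overshoot just after the edge and undershoot just before it
# produce the Mach-band-like edge enhancement.
```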
20
Mach Bands
Each band has uniform luminance, but brightness
does not appear uniform. The visual system
enhances edges.
21
Mach Bands
Luminance changes monotonically in the central
region, but brightness does not appear monotonic.
22
MTF of the Visual System
The visual system is less sensitive to low spatial frequencies.

(Figure: contrast sensitivity versus log spatial frequency, in cycles
per degree.)
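The band-pass shape can be modeled with the Mannos-Sakrison contrast sensitivity formula, used here as an assumed stand-in for the curve on the slide:

```python
import math

def csf(f):
    """Mannos-Sakrison contrast sensitivity model: band-pass in
    spatial frequency f (cycles per degree). Sensitivity is low at
    low frequencies, peaks around 8 cycles/degree, and falls off
    at high frequencies."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))
```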
23
Edge Completion
Edges are perceived even when they are not
present.
24
Adaptation to Size
Patterns cause the eye to adapt to background.
The inner circles are the same size.
25
Higher-level Organization
  • Receptors are organized into nets that contribute
    to sensitivity to sizes and orientations.

Different neural paths combine input from
different receptors
26
Human Vision and Range
  • The processes discussed so far are single-eye
    processes.
  • Consider depth or range information
  • Human visual depth cues
  • Parallax
  • Eye focus
  • Inspection

27
Stereoscopy
  • Depth from parallax: consider two cameras.

Two cameras, separated by a distance d, image the same object point b.
The point projects to position C1(b) on one image plane and C2(b) on
the other; a is the focal distance from lens to image plane and r is
the range to the object.

Parallax: p = C1(b) - C2(b) = d a / r

The difference in position of the projection of the object in the two
cameras yields depth.
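Inverting the parallax relation gives a one-line range estimate; a minimal sketch using the slide's symbols, assuming consistent units:

```python
def depth_from_parallax(p, d, a):
    """Invert p = C1(b) - C2(b) = d*a/r to recover the range r.

    d: camera separation, a: focal distance, p: measured parallax.
    Units are assumed consistent (illustrative only)."""
    if p == 0:
        raise ValueError("zero parallax: object effectively at infinity")
    return d * a / p
```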
28
Stereoscopy
  • Humans intuitively understand parallax because we
    have information about eye position and eye
    angle.
  • Animals with eyes on the sides have little depth
    perception.

29
Stereoscopy
  • We can mimic this in an imaging system.
  • Two cameras at a fixed separation and known focal length
  • Depth resolution is limited by camera resolution and field of view
  • Larger separation is better
  • Objects must have recognizable features (camouflage defeats
    matching)
  • Features must not be occluded in either view

30
Range from Focus
  • A camera system has the property of depth of
    field or depth of focus

Objects within some range of depths around a given focus distance will
be more or less in focus.
31
Range from Focus
  • Depth of focus depends on aperture

Small aperture exhibits less blurring
32
Range from Focus
  • Humans obtain some depth information from an
    intuitive sense of how the eye is focused.

33
Depth from Focus
  • To mimic this in an imaging system, use a
    stationary camera with variable focus.
  • Consider that, for a given focus, PSF changes
    with distance.
  • If we can estimate PSF and we know the properties
    of the camera, we can estimate range.
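One common way to sketch this: score each focus setting with a sharpness measure (variance of a Laplacian response is a standard choice, assumed here rather than taken from the slides) and report the pre-calibrated depth whose image scores highest:

```python
import numpy as np

def sharpness(img):
    """Focus measure: variance of a discrete Laplacian response.
    In-focus images keep more high-frequency content, so the
    Laplacian response varies more."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return float(lap.var())

def depth_from_focus(images, focus_depths):
    """Return the (hypothetical, pre-calibrated) focus depth whose
    image scores sharpest; the object is assumed near that depth."""
    scores = [sharpness(im) for im in images]
    return focus_depths[int(np.argmax(scores))]
```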

34
Depth from Inspection
  • As we move, we understand object position from
    how objects change position relative to one
    another.
  • Humans can estimate relative depth from head
    movement, if no occlusion.

35
Depth from Motion
  • To mimic this in an imaging system, use a moving
    camera

(Figure: points a and b imaged from two camera positions.)
36
Depth from Motion
  • If we can estimate how an object's image changes as the camera
    moves, we can estimate depth.
  • Objects must have recognizable features.
  • This is longitudinal tomography.
  • Camera motion need not be linear (compare conventional tomography).

37
Coded Aperture Imaging
  • All of these are examples of coded aperture
    imaging.

The source at a given depth produces a distinctive pattern.

(Figure: Source 1 and Source 2 cast different patterns of the aperture
onto the image plane.)
38
Coded Aperture Imaging
  • The coded aperture is just a PSF that varies with
    depth.
  • To solve, try to find the 3D source distribution
    that best fits the measured data.
  • Use statistical restoration methods in 3D
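A minimal version of that fit, sketched for a single source: for each candidate depth, least-squares fit a source strength to that depth's pattern and keep the depth with the smallest residual. The patterns and depths here are hypothetical calibration data, not a full 3D statistical restoration:

```python
import numpy as np

def best_depth(measured, patterns, depths):
    """Pick the depth whose coded-aperture pattern best explains the
    measurement.

    For each depth, fit the optimal source strength s by least squares,
    then keep the depth with the smallest residual. A sketch of fitting
    a depth-varying PSF; patterns/depths are hypothetical calibration
    data (one flattened pattern per candidate depth)."""
    best_z, best_err = None, np.inf
    for pat, z in zip(patterns, depths):
        s = measured @ pat / (pat @ pat)            # optimal strength
        err = float(np.sum((measured - s * pat) ** 2))
        if err < best_err:
            best_z, best_err = z, err
    return best_z
```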