Title: Introduction to Robotics
1 Introduction to Robotics
- 4. Mathematics of sensor processing.
2 Examples
- Location
- Dead Reckoning
- Odometry using potentiometers or encoders
- Steering: Differential, Ackerman
- Inertial Navigation Systems (INS)
- Optical Gyro
- Resonant Fiber Optic Gyro
- Ranging
- Triangulation
- MIT Near IR Ranging
- Time of flight
3 Potentiometers (pots)
- Low cost rotational displacement sensors for applications with
- low speed
- medium accuracy
- no continuous rotation
- Error sources
- poor reliability due to dirt
- frictional loading of the shaft
- electrical noise, etc.
- Their use has fallen off in favor of the more versatile incremental optical encoders.
4 Dead Reckoning
- Definition: a simple mathematical procedure for determining the present location of a vessel (vehicle) by advancing some previous position through known course and velocity information.
- The most simplistic implementation is termed odometry.
- Odometry sensors: potentiometers; encoders (brush, optical, magnetic, capacitive, inductive).
5 Introduction to Odometry
- Given a two-wheeled robot, odometry estimates position and orientation from the left and right wheel velocities as a function of time.
- B is the wheel separation.
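The estimate described above can be sketched as a simple Euler integration of the wheel velocities; function and parameter names here are illustrative, not from the slides.

```python
import math

def update_pose(x, y, theta, v_left, v_right, b, dt):
    """One Euler step of differential-drive odometry.

    b is the wheel separation (B on the slide); v_left and v_right
    are the ground velocities of the two wheels.
    """
    v = (v_left + v_right) / 2.0      # velocity of the robot center
    omega = (v_right - v_left) / b    # rate of change of heading
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

With equal wheel velocities the pose tracks a straight line; unequal velocities curve the path.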
6 Differential Steering
- Two individually controlled drive wheels.
- Enables the robot to spin in place.
- Robot displacement D and velocity V along the path of travel are
  D = (D_l + D_r) / 2,   V = (V_l + V_r) / 2
  where D_l, V_l are the displacement and velocity of the left wheel, and D_r, V_r the displacement and velocity of the right wheel.
- C_l is the circumference of the circle traveled by the left wheel.
7 Differential Steering
- With d the wheel separation and R the turn radius of the robot center, the wheels travel circles C_l = 2π(R - d/2) and C_r = 2π(R + d/2) about the same center.
- Solving for the heading change θ yields
  θ = (D_r - D_l) / d
- Similarly, solving for the turn radius yields
  R = (d/2)(D_r + D_l) / (D_r - D_l)
- The d in the denominator is a significant source of error, due to the uncertainties associated with the effective point of contact of the tires.
8 Over an infinitesimal time increment, the speed of the wheels can be assumed constant, so the path has a constant radius of curvature.
9 Differential Steering: Drive controller
- Left wheel rotation: D_l = 2π R_l (N_l / C_e), where
- R_l is the effective left wheel radius,
- N_l is the number of counts of the left encoder,
- C_e is the number of counts per wheel revolution.
- And a similar relation holds for the right wheel.
- The drive controller will attempt to make the robot travel a straight line by ensuring N_l and N_r are the same.
- This is not an accurate method, since the effective wheel radius is a function of the compliance of the tire and the weight of the robot (empirical values; tire compliance is a function of wheel rotation).
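The encoder relation above can be sketched in a few lines (names are illustrative):

```python
import math

def wheel_distance(counts, counts_per_rev, effective_radius):
    # Distance rolled by one wheel: the wheel circumference times the
    # fraction of a revolution reported by the encoder.
    return 2.0 * math.pi * effective_radius * counts / counts_per_rev
```

The slide's caveat applies directly: any error in `effective_radius` (tire compliance, load) scales the distance estimate proportionally.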
10 Differential Steering: Other reasons for inaccuracies
- In climbing over a step discontinuity of height h, the wheel rotates, so the perceived distance differs from the actual distance traveled.
- This displacement differential between the left and right drive wheels results in an instantaneous heading change.
- Floor slippage: this problem is especially noticeable in the exterior implementation known as skid steering, routinely implemented in bulldozers and armored vehicles.
- For this reason, skid steering is employed only in teleoperated vehicles.
11 Ackerman Steering: The method of choice for outdoor autonomous vehicles
- Used to provide a fairly accurate dead-reckoning solution while supporting the traction and ground clearance needs of all-terrain operation.
- Designed to ensure that, when turning, the inside wheel is rotated to a slightly sharper angle than the outside wheel, thereby eliminating geometrically induced tire slippage.
- Ackerman equation:
  cot θ_o - cot θ_i = d / l
  where θ_i, θ_o are the relative steering angles of the inner and outer wheels, l is the longitudinal wheel separation, and d is the lateral wheel separation.
- Examples include
- the HMMWV-based Teleoperated Vehicle Program (US Army),
- MDARS (Mobile Detection Assessment and Response System) Exterior, an autonomous patrol vehicle.
12 Ackerman Steering
- Figure: turning geometry with longitudinal wheel separation l, lateral wheel separation d, and x, the distance from the inside wheel to the center of rotation.
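Using the geometry of this slide (tan θ_i = l/x, tan θ_o = l/(x+d)), the Ackerman relation cot θ_o - cot θ_i = d/l gives the outer angle from the inner one. A minimal sketch, with illustrative names:

```python
import math

def ackerman_outer_angle(theta_inner, d, l):
    """Outer-wheel steering angle from the Ackerman relation
    cot(theta_outer) - cot(theta_inner) = d / l,
    with d the lateral and l the longitudinal wheel separation."""
    cot_outer = 1.0 / math.tan(theta_inner) + d / l
    return math.atan(1.0 / cot_outer)
```

The outer wheel always comes out slightly less sharply turned than the inner one, which is exactly the slip-free condition the linkage is designed to enforce.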
13 Inertial Navigation
- Continuous sensing of acceleration along each of the 3 axes, integrated over time to derive velocity and position.
- Implementations are demanding from the standpoint of minimizing the various error sources.
- High quality navigational systems have a typical drift of 1 nmi/h and only a few years ago cost $50K-$70K.
- High end systems perform better than 0.1% of the distance traveled; they used to cost $100K-$200K.
- Today, relatively reliable equipment suitable for UGV navigation starts at about $5K.
- Low cost fiber optic gyros and solid state accelerometers were developed for INS.
14 Gyroscopes
Mechanical gyroscopes operate by sensing the
change in direction of some actively sustained
angular or linear momentum.
A typical two-axis flywheel gyroscope senses a
change in direction of the angular momentum
associated with a spinning motor.
15 Optical Gyroscopes
- Principle first discussed by Sagnac (1913).
- First ring laser gyro (1986) used a He-Ne laser.
- Fiber optic gyros (1993) were installed in Japanese automobiles in the 90s.
- The basic device: two laser beams traveling in opposite directions (i.e., counter-propagating) around a closed loop path.
- A standing wave is created by the counter-propagating light beams (the Schulz-DuBois idealization model).
- Constructive and destructive interference patterns
- can be formed by splitting off and mixing a portion of the two beams, and
- are used to determine the rate and direction of rotation of the device.
16 Active Ring-Laser Gyro
- Introduces light into the doughnut by filling the cavity with an active lasing medium.
- Measures the change in path length ΔL as a function of the angular velocity of rotation Ω, the radius of the circular beam path r, and the speed of light c (the Sagnac effect):
  ΔL = 4πr²Ω / c
- For lasing to occur in a resonant cavity, the round trip beam path must precisely equal in length an integral number of wavelengths at the resonant frequency.
- The frequencies of the two counter-propagating waves must therefore change, as only oscillations with wavelengths satisfying the resonance condition can be sustained in the cavity.
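To get a feel for the magnitudes involved, the Sagnac path difference ΔL = 4πr²Ω/c can be evaluated directly; the numbers below are illustrative.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def sagnac_path_difference(r, omega):
    """Path-length difference between the counter-propagating beams
    for a circular path of radius r (m) rotating at omega (rad/s)."""
    return 4.0 * math.pi * r**2 * omega / C
```

Even at the Earth's rotation rate (about 7.3e-5 rad/s), a 10 cm loop gives a ΔL of only about 3e-14 m, which is why the frequency-shift readout of the next slide is used instead of measuring ΔL directly.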
17 Active Ring-Laser Gyro
- For an arbitrary cavity geometry with area A enclosed by the loop beam path and perimeter L of the beam path, the frequency difference is
  Δf = 4AΩ / (λL)
- The glass fiber forms an internally reflective waveguide for optical energy.
- Multiple turns of fiber may implement the doughnut-shaped cavity, with the path change due to the Sagnac effect essentially multiplied by N, the number of turns.
18 Open Loop Interferometric Fiber Optic Gyro (IFOG)
- The speed of light in the medium is c/n, where n is the refractive index.
- As long as the entry angle is less than a critical angle, the ray is guided down the fiber virtually without loss.
- NA = sqrt(n_core² - n_cladding²) is the numerical aperture of the fiber, where n_core is the index of refraction of the glass core and n_cladding the index of refraction of the cladding.
- We need a single mode fiber, so that only the two counter-propagating waves can exist. But in such a fiber light may randomly change polarization states, so we need a special polarization-maintaining fiber.
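The guiding condition can be sketched numerically, assuming the standard step-index expression for NA and entry from air; the index values used are illustrative:

```python
import math

def numerical_aperture(n_core, n_cladding):
    # NA of a step-index fiber from the core and cladding indices.
    return math.sqrt(n_core**2 - n_cladding**2)

def acceptance_half_angle(n_core, n_cladding):
    # Maximum entry angle (from air, n = 1) for which a ray is guided
    # down the fiber virtually without loss.
    return math.asin(numerical_aperture(n_core, n_cladding))
```

For typical indices such as 1.48 core / 1.46 cladding, the NA comes out near 0.24, i.e., an acceptance half-angle of roughly 14 degrees.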
19 Open Loop IFOG
- The phase shift due to gyro rotation is measured as a number of interference fringes.
- Advantages
- reduced manufacturing costs
- quick start-up
- good sensitivity
- Disadvantages
- long length of optical fiber required
- limited dynamic range in comparison with active ring-laser gyros
- scale factor variations
- Used in automobile navigation, pitch and roll indicators, and attitude stabilization.
20 Resonant Fiber Optic Gyros
- Evolved as a solid state derivative of the passive ring gyro, which makes use of a laser source external to the ring cavity.
- A passive resonant cavity is formed from a multi-turn closed loop of optical fiber.
- Advantages: high reliability, long life, quick start-up, light weight, up to 100 times less fiber.
- An input coupler injects frequency modulated light in both directions.
- In the absence of loop rotation, maximum coupling occurs at the resonant frequency.
- If the loop rotates, the resonant frequency must shift:
  Δf = DΩ / (nλ)
  where D is the diameter of the fiber loop, n the refractive index, and λ the wavelength.
21 Ranging
- Distance measurement techniques
- triangulation
- ToF: time of flight (pulsed)
- PhS: phase shift measurement (CW, continuous wave)
- FM: frequency modulation (CW)
- interferometry
- Non-contact ranging sensors
- active
- Radar: ToF, PhS, FM
- Sonar: ToF using the speed of sound (slow; works in water)
- Lidar: laser based ToF, PhS
- passive
22 GPS: Navstar Global Positioning System
- A 24-satellite system, orbiting the earth every 12 h at an altitude of 10,900 nmi.
- 4 satellites are located in each of 6 planes inclined 55 deg. with respect to the earth's equator.
- The absolute 3D location of any GPS receiver is determined by trilateration techniques based on time of flight of uniquely coded spread-spectrum radio signals transmitted by the satellites.
- Problems
- time synchronization and the theory of relativity
- precise real time location of satellites
- accurate measurement of signal propagation time
- sufficient signal to noise ratio
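The trilateration idea can be sketched as an iterative least-squares fix on position plus receiver clock bias. This is a toy, flat-space illustration under simplified assumptions (no WGS-84 geodesy, no relativistic or atmospheric corrections), not a real GPS solver:

```python
import numpy as np

def solve_fix(sat_positions, pseudoranges, iterations=20):
    """Gauss-Newton solution for receiver position and clock bias.

    sat_positions: (n, 3) satellite positions in metres.
    pseudoranges:  (n,) measured ranges, each inflated by the unknown
                   receiver clock bias (expressed in metres).
    Returns [x, y, z, clock_bias].
    """
    state = np.zeros(4)
    for _ in range(iterations):
        diffs = state[:3] - sat_positions            # (n, 3)
        ranges = np.linalg.norm(diffs, axis=1)
        residual = pseudoranges - (ranges + state[3])
        # Jacobian: unit line-of-sight vectors plus a clock-bias column.
        J = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        state += np.linalg.lstsq(J, residual, rcond=None)[0]
    return state
```

Four satellites suffice because there are four unknowns: three coordinates and the clock bias; this is the "fourth satellite" role mentioned on the next slide.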
23 GPS: Navstar Global Positioning System
- Spread-spectrum technique: each satellite transmits a periodic pseudo random code on two different L band frequencies (1575.42 and 1227.6 MHz).
- Solutions
- time synchronization → atomic clocks.
- precise real time location of satellites → individual satellite clocks are monitored by dedicated ground tracking stations and continuously advised of their measurement offsets from official GPS time.
- accurate measurement of signal propagation time → a pseudo-random code is modulated onto the carrier frequencies; an identical code is generated at the receiver on the ground, and the time shift is calculated from the comparison, using the fourth satellite.
24 GPS: Navstar Global Positioning System
- The accuracy of civilian GPS is degraded (to roughly 300 m), but there are quite a few commercial products which significantly enhance it.
- The Differential GPS (DGPS) concept is based on the existence of a second GPS receiver at a precisely surveyed location.
- We assume that the same corrections apply to both locations.
- Position error may be reduced to well under 10 m.
- Some other up-to-date commercial products claim accuracies of several cm.
25 COMPASS: Compact Outdoor Multipurpose POSE (Position and Orientation Estimation) Assessment Sensing System
- COMPASS is a flexible suite of sensors and software integrated for GPS and INS navigation.
- COMPASS consists of a high-accuracy, 12-channel, differential Global Positioning System (GPS) with an integrated Inertial Navigation System (INS) and Land Navigation System (LNS).
- This GPS/INS/LNS is being integrated with numerous autonomous robotic vehicles by Omnitech for military and commercial applications.
- COMPASS allows semiautonomous operation, with multiple configurations available.
26 Triangulation
- Active triangulation employs
- a laser source illuminating the target object, and
- a CCD camera.
- Calibration targets are placed at known distances z1 and z2.
- Point-source illumination of the image effectively eliminates the correspondence problem.
27 Triangulation by Stereo Vision
- Based on the Law of Sines, assuming the measurement is done between three coplanar points.
- Passive stereo vision: angles (α, β) are measured from two points (P1, P2) located at a known relative distance (A).
- Limiting factors
- reduced accuracy with increasing range
- angular measurement errors
- may be performed only in the stereo observation window, because of missing parts / shadowing between the scenes
28 Triangulation by Stereo Vision
- The horopter is the plane of zero disparity.
- Disparity is the displacement of the image as shifted between the two scenes.
- Disparity is inversely proportional to the distance to the object.
- Basic steps involved in the stereo ranging process:
- a point in the image of one camera must be identified;
- the same point must be located in the image of the other camera;
- the lateral position of both points must be measured with respect to a common reference;
- range Z is then calculated from the disparity in the lateral measurements.
29 Triangulation by Stereo Vision
- Correspondence is the procedure of matching the two images.
- Matching is difficult in regions where the intensity or color is uniform.
- Shadows may appear in only one image.
- Epipolar restriction: reduces the 2D search to a single dimension.
- The epipolar surface is a plane defined by the lens centers L and R and the object of interest at P.
30 MIT Near IR Ranging
- One dimensional implementation.
- Two identical point source LEDs are placed a known distance d apart.
- The incident light is focused on the target surface.
- The emitters are fired in sequence.
- The reflected energy is detected by a phototransistor.
- Beam intensity falls off as the inverse square of the distance traveled.
- Assumptions: the surface perfectly diffuses the reflected light (a Lambertian surface) and the target is wider than the field of view.
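Under a simplified inverse-square model, with one emitter mounted a distance d behind the other along the optical axis, the ratio of the two readings determines the range with the surface reflectance cancelled out. This is a hedged sketch of the idea, not the exact MIT sensor model; all names are illustrative.

```python
import math

def near_ir_range(p_near, p_far, d):
    """Range x to the target from the nearer emitter, assuming the
    detected power falls off as 1/range**2 for two identical emitters
    a distance d apart along the optical axis (simplified model).

    p_near / p_far = ((x + d) / x)**2  =>  x = d / (sqrt(ratio) - 1)
    """
    ratio = p_near / p_far
    return d / (math.sqrt(ratio) - 1.0)
```

Because only the ratio of the two readings is used, the unknown reflectance of the Lambertian surface divides out, which is the point of firing two emitters in sequence.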
31 Basics of Machine Vision
32 Vision systems are very complex
- Focus on techniques for closing the loop in robotic mechanisms.
- How might image processing be used to direct the behavior of robotic systems?
- Percept inversion: what must the world model be to produce the sensory stimuli?
- Static Reconstruction Architecture.
- Task A: a stereo pair is used to reconstruct the world geometry.
33 Reconstructing the image is usually not the proper solution for robotic control
- Examples where reconstructing the image is a proper step:
- medical imagery
- constructing topological maps
- Perception was considered in isolation.
- Elitism made vision researchers consider mostly their closed community's interests.
- Too much energy was invested in building World Models.
34 Active Perception Paradigm
- Task B: a mobile robot must navigate across outdoor terrain.
- Many of the details are likely to be irrelevant to this task.
- The responsiveness of the robot depends on how precisely it focuses on just the right visual feature set.
35 Perception produces motor control outputs, not representations
- Action oriented perception.
- Expectation based perception.
- Focus of attention.
- Active perception: the agent can use motor control to enhance perceptual processing.
36 Cameras as sensors
- Light scattered from objects in the environment is projected through a lens system onto the image plane.
- Information about the incoming light (e.g., intensity, color) is detected by photosensitive elements built from silicon circuits in charge-coupled devices (CCDs) placed on the image plane.
- In machine vision, the computer must make sense of the information it gets on the image plane.
- The lens focuses the incoming light.
37 Cameras as sensors
- Only objects at a particular range of distances from the lens will be in focus; this range of distances is called the camera's depth of field.
- The image plane is subdivided into pixels, typically arranged in a grid (e.g., 512x512).
- The projection on the image plane is called the image.
- Our goal is to extract information about the world from a 2D projection of the energy stream derived from a complex 3D interaction with the world.
38 Pinhole camera model
- Perspective projection geometry.
- Mathematically equivalent non-inverting geometry.
39 Finding the depth of point P with two cameras (original slide in Hebrew)
- Point P lies a distance l from the axis of lens 1 and projects onto image plane 1 at offset a; it lies a distance r from the axis of lens 2 and projects onto image plane 2 at offset b.
- d is the depth of P, f is the focal length, l + r is the distance between the cameras, and a + b is the disparity.
- By similar triangles:
  d/l = (d + f)/(l + a),   d/r = (d + f)/(r + b)
- Solving gives d·a = f·l and d·b = f·r, hence
  d = f (l + r) / (a + b)
- The depth of point P is thus determined entirely by the disparity.
40 A simple example: a stereo system encodes depth entirely in terms of disparity
- With 2d the distance between the cameras and f the focal length, the distance z to the object is
  z = 2d·f / disparity
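The relation above can be used directly; a minimal sketch with illustrative numbers:

```python
def stereo_depth(focal_length, baseline, disparity):
    # Depth of a point seen by a rectified stereo pair.  Here
    # baseline = 2d, the full distance between the two cameras, and
    # disparity is the lateral shift between the two image positions.
    return focal_length * baseline / disparity
```

A 50 mm lens pair mounted 20 cm apart that sees a 1 mm disparity places the object at 10 m; halving the disparity doubles the estimated depth, which is one reason stereo accuracy degrades with range.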
41 Geometrical parameters for binocular imaging
- The information needed to reconstruct the 3D geometry also includes
- the kinematic configuration of the cameras, and
- the offset from image center (optical distortions).
42 Locating point P with two cameras (original slide in Hebrew)
- Two cameras, separated by a known baseline 2d, both observe point P.
- The measured image offsets together with the camera geometry determine the coordinates of P.
- The required information includes the kinematic configuration of the cameras and the offset from image center (optical distortions).
43 Geometrical parameters for binocular imaging
- The information needed to reconstruct the 3D geometry also includes
- the kinematic configuration of the cameras, and
- the offset from image center (optical distortions).
- The solution is shown in the accompanying figure.
44 Edge detection
- The brightness of each pixel in the image is proportional to the amount of light directed toward the camera by the surface patch of the object that projects to that pixel.
- The image of a black and white camera is a collection of 512x512 pixels with different gray levels (brightness).
- To find an object we have to find its edges, i.e., do edge detection.
- We define edges as curves in the image plane across which there is a significant change in brightness.
- Edge detection is performed in two steps:
- detection of edge segments/elements, called edgels;
- aggregation of edgels.
- Because of noise (all sorts of spurious peaks), we first have to do smoothing.
45 Smoothing: How do we deal with noise?
- Convolution.
- Applies a filter, a mathematical procedure which finds and eliminates isolated peaks.
- Convolution is the operation of computing the weighted integral of one function with respect to another function that has
- first been reflected about the origin, and
- then variably displaced.
- In one continuous dimension,
  h(t) * i(t) = ∫ h(t - τ) i(τ) dτ = ∫ h(τ) i(t - τ) dτ
- Graphical convolution: discrete time and continuous time.
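A tiny discrete version of the smoothing just described: convolving a signal containing an isolated spike with a 3-tap averaging filter flattens the spike while preserving the total signal energy.

```python
import numpy as np

signal = np.array([0.0, 0.0, 1.0, 10.0, 1.0, 0.0, 0.0])  # isolated peak
kernel = np.array([1.0, 1.0, 1.0]) / 3.0                  # averaging filter
smoothed = np.convolve(signal, kernel, mode="same")
# The spike of height 10 is flattened to (1 + 10 + 1) / 3 = 4.
```

The same operation, applied with a 2D mask, is exactly what the next slides do to images.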
46 How is this done?
- Integral or differential operators acting on pixels are implemented by a matrix of multipliers (a mask) that is applied to each pixel as it is moved across the image.
- Masks are typically moved from left to right, as you would read a book.
- Examples: the Sobel gradient masks; the Laplacian mask.
47 Examples of convolution operators as filters at work.
48 Some mathematical background: the Dirac delta function.
49 Fourier Transform
50 Fourier Transform Pairs.
51 The Shift Theorem.
52 The convolution theorem
- An important property of the convolution lies in the way it maps through the Fourier transform: convolution in the spatial domain is equivalent to multiplication in the frequency domain, and vice versa.
- Convolution operators are essentially spectral filters.
53 The Sampling Theorem
- f(x) is a continuous spatial function representing the image; g(x) is an infinite sequence of Dirac delta functions. The product of the two is h(x), the sampled approximation.
- Using the convolution theorem on the Fourier transforms: the frequency spectrum of the sampled image consists of duplicates of the spectrum of the original image, distributed at intervals of the sampling frequency.
55 The Sampling Theorem: Nyquist Theorem
- An illustrative example: when replicated spectra interfere, the crosstalk introduces energy at relatively high frequencies, changing the appearance of the reconstructed image.
- The Sampling Theorem: if the image contains no frequency components greater than one half of the sampling frequency, then the continuous image is faithfully represented by the sampled image.
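The aliasing just described is easy to demonstrate numerically: a 9 Hz sine sampled at 10 Hz produces exactly the same samples as a 1 Hz sine of opposite sign, because 9 Hz exceeds the 5 Hz Nyquist limit. The frequencies are chosen purely for illustration.

```python
import numpy as np

fs = 10.0                            # sampling frequency, Hz (Nyquist = 5 Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)    # one second of samples

below = np.sin(2 * np.pi * 3.0 * t)         # 3 Hz: faithfully represented
above = np.sin(2 * np.pi * 9.0 * t)         # 9 Hz: beyond the Nyquist limit
alias = np.sin(2 * np.pi * (9.0 - fs) * t)  # appears as a -1 Hz sine
# 'above' and 'alias' are indistinguishable at these sample instants.
```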
56 Early Processing
- The convolution of a continuous 2D signal g with an operator f is
  h(x, y) = ∫∫ f(i, j) g(x - i, y - j) di dj
- For discrete functions, the equivalent operation is
  h(x, y) = Σ_i Σ_j f(i, j) g(x - i, y - j)
  or, equivalently, the same sum with f and g interchanged.
- Here h(x,y) is a new image generated by convolving the image g(x,y) with the (2n+1) x (2n+1) convolution mask f(i,j).
- For n = 1 the operator is 3x3.
- This is convenient because it allows a convolution process in which the response h(x,y) depends on the neighborhood of support of the original image g(x,y), according to the convolution operator f(i,j).
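The discrete sum above can be written out directly. This is a deliberately naive sketch that mirrors the formula term by term; border pixels where the mask overhangs the image are simply left at zero.

```python
import numpy as np

def convolve2d(g, f):
    """h(x, y) = sum_{i,j} f(i, j) * g(x - i, y - j) for a
    (2n+1) x (2n+1) mask f, evaluated over the interior of g."""
    n = f.shape[0] // 2
    h = np.zeros_like(g, dtype=float)
    rows, cols = g.shape
    for x in range(n, rows - n):
        for y in range(n, cols - n):
            acc = 0.0
            for i in range(-n, n + 1):
                for j in range(-n, n + 1):
                    acc += f[i + n, j + n] * g[x - i, y - j]
            h[x, y] = acc
    return h
```

Real implementations vectorize this or call a library routine, but the quadruple loop makes the neighborhood-of-support idea explicit.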
57 Edge Detection
- Locate sharp changes in the intensity function: edges are pixels where brightness changes abruptly.
- Calculus describes changes of continuous functions using derivatives; since an image function depends on two variables, we use partial derivatives.
- A change of the image function can be described by a gradient that points in the direction of the largest growth of the image function.
58 Edge Detector
- An edge is a property attached to an individual pixel, calculated from the image function's behavior in a neighborhood of that pixel.
- It is a vector variable with
- a magnitude (of the gradient), and
- a direction.
59 Edge Detectors
- The gradient direction gives the direction of maximal growth of the function, e.g., from black (f(i,j) = 0) to white (f(i,j) = 255).
60 - Edges are often used in image analysis for finding region boundaries.
- A boundary and its parts (edges) are perpendicular to the direction of the gradient.
61 Gradient
- A digital image is discrete in nature, so derivatives must be approximated by differences:
  Δx f(i, j) = f(i, j) - f(i - n, j),   Δy f(i, j) = f(i, j) - f(i, j - n)
- n is a small integer, usually 1.
62 - Gradient operators can be divided into three categories:
- I. Operators approximating derivatives of the image function using differences.
- Rotationally invariant operators (e.g., the Laplacian) need one convolution mask only.
- Operators approximating first derivatives use several masks; the orientation is estimated on the basis of the best matching of several simple patterns.
- II. Operators based on the zero crossings of the second derivative of the image function.
63 Edge Detection Operators: Laplace Operator
- The Laplace operator gives magnitude only (no direction).
- The Laplacian is approximated in digital images by a convolution sum.
- 3x3 masks for the 4-neighborhood and the 8-neighborhood:
  0  1  0        1  1  1
  1 -4  1        1 -8  1
  0  1  0        1  1  1
64 Compare
65 Commonly used gradient operators accomplish smoothing and differentiation simultaneously:
- smoothing, by averaging the gradient computation over several rows or columns, and
- differentiation, by the finite difference operator.
66 Roberts Operator
- The magnitude of the edge is computed as
  |g(i, j) - g(i+1, j+1)| + |g(i, j+1) - g(i+1, j)|
- A disadvantage of the Roberts operator is its high sensitivity to noise, since very few pixels are used to approximate the gradient.
67 Compare
68 Prewitt Operator
- The gradient is estimated in eight possible directions; the convolution result of greatest magnitude indicates the gradient direction.
69 Compare
70 Sobel Operator
- Used as a simple detector of horizontality and verticality of edges, in which case only the masks h1 and h3 are used.
71 Edge Sobel Operator
- If the h1 response is y and the h3 response is x, we might derive the edge strength (magnitude) as
  sqrt(x² + y²)
- and the direction as arctan(y/x).
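A small sketch applying the h1/h3 masks at a single pixel. Mask orientation conventions vary between texts; the ones below are one common choice, so treat them as an assumption rather than the slide's exact masks.

```python
import numpy as np

h1 = np.array([[ 1.0,  2.0,  1.0],
               [ 0.0,  0.0,  0.0],
               [-1.0, -2.0, -1.0]])   # responds to horizontal edges
h3 = np.array([[-1.0, 0.0, 1.0],
               [-2.0, 0.0, 2.0],
               [-1.0, 0.0, 1.0]])     # responds to vertical edges

def sobel_at(img, r, c):
    """Edge magnitude and direction at interior pixel (r, c)."""
    patch = img[r - 1:r + 2, c - 1:c + 2]
    y = float((h1 * patch).sum())
    x = float((h3 * patch).sum())
    return np.hypot(x, y), np.arctan2(y, x)
```

On a vertical step edge (dark left, bright right) the h3 response dominates and the direction comes out as 0, i.e., the gradient points along +x.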
72 Compare
73 Edge Operators
- Robinson operator
- Kirsch operator
74 Zero crossings of the second derivative
- The main disadvantage of the above edge detectors is their dependence on the size of objects and their sensitivity to noise.
- An edge detection technique based on the zero crossings of the second derivative explores the fact that a step edge corresponds to an abrupt change in the image function.
- The first derivative of the image function should have an extremum at the position corresponding to the edge in the image, and so the second derivative should be zero at the same position.
75 Edge Sharpening
- One way of detecting faint edges while maintaining precision at strong edges is to require that the second derivative be near zero while the first derivative is above some threshold.
- The Laplacian operator approximates the second derivative of the image function, identifying the inflection point in the intensity function.
76 Laplacian of the Gaussian (LoG)
- Robust calculation of the 2nd derivative: smooth the image first (to reduce noise) and then compute second derivatives.
- The 2D Gaussian smoothing operator is
  G(x, y) = exp(-(x² + y²) / (2σ²))
77 LoG
- After returning to the original coordinates x, y and introducing a normalizing multiplicative coefficient c (that includes σ), we get a convolution mask of a zero crossing detector:
  h(x, y) = c · ((x² + y² - 2σ²) / σ⁴) · exp(-(x² + y²) / (2σ²))
- where c normalizes the sum of the mask elements to zero.
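Such a mask can be generated numerically: sample the analytic LoG on a grid, then shift the elements so they sum to zero, which plays the role of the constant c. A sketch under those assumptions:

```python
import numpy as np

def log_mask(size, sigma):
    """Discrete Laplacian-of-Gaussian mask, adjusted so its elements
    sum to zero and it gives no response on constant image regions."""
    n = size // 2
    y, x = np.mgrid[-n:n + 1, -n:n + 1].astype(float)
    r2 = x**2 + y**2
    mask = (r2 - 2.0 * sigma**2) / sigma**4 * np.exp(-r2 / (2.0 * sigma**2))
    return mask - mask.mean()   # enforce zero sum (the role of c)
```

The zero-sum property is what makes the operator respond only where the intensity actually changes, so zero crossings of the filtered image mark candidate edges.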
78 Compare: LoG, Prewitt, Sobel, Roberts.
79 Edge Detection: Intensity Gradients.
80 Segmentation
- Try to find objects among all those edges.
- Segmentation is the process of dividing up or organizing the image into parts that correspond to continuous objects.
- How do we know which lines correspond to which objects?
- Model based vision: store models of line drawings of objects and then compare with all possible combinations of edges (many possible angles, different scales, ...).
- Motion vision: compare two consecutive images while moving the camera.
- Each continuous object will move.
- The brightness of any object will be conserved.
- Subtract the images.
81 Segmentation
- How do we know which lines correspond to which objects?
- Stereo based: like motion vision, but use the disparity.
- Texture: areas that have uniform texture are consistent and have almost identical brightness, so we can assume they come from the same object.
- Use shading and contours.
- All these methods have been studied extensively and turn out to be very difficult.
- Alternatively, we can do object recognition. Simplify by:
- using color;
- using a small image plane;
- using simpler sensors than vision, or combining information from different sensors (sensor fusion);
- using information about the task (the active perception paradigm).