Autonomous Navigation and Mapping Using Monocular Low-Resolution Grayscale Vision - PowerPoint PPT Presentation

Transcript and Presenter's Notes

Title: Autonomous Navigation and Mapping Using Monocular Low-Resolution Grayscale Vision


1
Autonomous Navigation and Mapping Using Monocular
Low-Resolution Grayscale Vision
  • Vidya Murali
  • M.S. thesis defense
  • Department of Electrical and Computer Engineering
  • Clemson University
  • Clemson, SC 29634

2
Goal
  • Three-fold goal
  • Autonomous exploration
  • Mapping
  • Localization
  • Applications: manufacturing industry, military,
    security, consumer, entertainment.
  • SLAM: manual/tele-operated mode
  • Autonomous exploration and map-building

3
Low-resolution monocular vision as the sensor
  • Vision
  • Non-intrusive
  • More information for scene interpretation
  • Inexpensive, standard off-the-shelf
  • Monocular vision
  • No calibration
  • Single forward-facing camera
  • Low resolution (32 x 24 grayscale)
  • Selective degradation hypothesis (Leibowitz)
  • Guidance: low resolution
  • Recognition: high resolution
  • Object detection and classification
  • Computational efficiency

[Figure: the same image shown at 320 x 240, 160 x 120, 80 x 60, 40 x 30, and 32 x 24 resolution]
4
Preliminary result
Navigation in Riggs floor 1
Voronoi-based map of Riggs floor 1
5
Algorithm Overview
6
Ceiling Lights
Mean x-coordinate of the bright pixels
Ceiling lights yield rotation and translation;
vanishing points yield only orientation.
Rotational velocity of robot: ω = K(l_mean - w/2),
where l_mean is the mean x-coordinate of the bright
pixels and w is the image width.
  • Previous work uses a camera pointing at the ceiling,
    a teach-replay approach, or the shape of the lights.

[Figures: detected ceiling lights in Riggs floor 1 (sodium vapor), Riggs basement (fluorescent), and Riggs floor 1 (two sides)]
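A minimal sketch of this steering rule in Python (the thesis implementation was in VC++ with Blepo); the gain K, the brightness threshold, and the sign convention of the output are illustrative assumptions, not the tuned values:

```python
import numpy as np

def rotational_velocity(frame: np.ndarray, K: float = 0.5,
                        bright_thresh: int = 240) -> float:
    """Steer toward the centroid of the ceiling lights:
    omega = K * (l_mean - w/2)."""
    h, w = frame.shape
    ys, xs = np.nonzero(frame >= bright_thresh)  # bright (light) pixels
    if xs.size == 0:
        return 0.0        # no lights visible: switch to homing mode
    l_mean = xs.mean()    # mean x-coordinate of the bright pixels
    return K * (l_mean - w / 2.0)  # offset from the image center
```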
7
Entropy
  • Entropy: a measure of information content

Entropy of the gray-level histogram:
H = -Σ_i p(i) log2 p(i), where p(i) is the normalized
histogram of gray levels
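A minimal sketch of this measure, assuming a 256-bin gray-level histogram:

```python
import numpy as np

def gray_entropy(frame: np.ndarray) -> float:
    """H = -sum_i p(i) * log2 p(i) over the gray-level histogram."""
    hist, _ = np.histogram(frame, bins=256, range=(0, 256))
    p = hist / hist.sum()   # normalize to a probability distribution
    p = p[p > 0]            # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())
```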
8
Low entropy
Entropy drops sharply while facing blank walls and
doors.
9
High Entropy
[Figures: open corridor (two examples)]
Plot of entropy and distance values (measured by a
SICK laser scanner) as the robot turns at a
T-junction in EIB and Lowry, with the corresponding
images below.
10
Homing
  • When the ceiling lights disappear, enter homing mode.
  • Servo on a home image captured at the instant the
    lights disappear.
  • In this mode, Jeffrey divergence and
    time-to-collision are calculated until the end of
    the corridor is detected.

[Flowchart: shift I_home left by one pixel and compute the SAD
against I_t, giving l; shift I_home right by one pixel and compute
the SAD against I_t, giving r; if l < r, turn left, otherwise turn
right.]
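A sketch of the flowchart's decision rule; the one-pixel shifts are implemented by cropping rather than wrapping, and the mapping of the smaller SAD to a turn direction follows the flowchart above (an assumption where the residue is ambiguous):

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> float:
    """Sum of absolute differences between two grayscale images."""
    return float(np.abs(a.astype(int) - b.astype(int)).sum())

def homing_turn(i_home: np.ndarray, i_t: np.ndarray) -> str:
    """Compare the current frame against the home image shifted
    one pixel left and one pixel right; turn toward the better match."""
    l = sad(i_home[:, 1:], i_t[:, :-1])   # home shifted left by 1 pixel
    r = sad(i_home[:, :-1], i_t[:, 1:])   # home shifted right by 1 pixel
    return "left" if l < r else "right"
```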
11
Algorithm Overview
12
Jeffrey Divergence
  • Relative entropy: a measure of how different one
    image is from another
  • Kullback-Leibler (KL) divergence
  • Jeffrey divergence: symmetric version of KL
  • p: the gray-level histogram of the first image
  • q: the gray-level histogram of the second image
  • J: the relative entropy between the two images
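A sketch of the computation, assuming the symmetrized form J(p, q) = KL(p||q) + KL(q||p) = Σ_i (p_i - q_i) log(p_i / q_i); note that some retrieval papers instead define the Jeffrey divergence against the mean histogram m = (p + q)/2:

```python
import numpy as np

def jeffrey_divergence(p: np.ndarray, q: np.ndarray,
                       eps: float = 1e-12) -> float:
    """J(p, q) for two gray-level histograms (symmetrized KL)."""
    p = p.astype(float) / p.sum()   # normalize histograms
    q = q.astype(float) / q.sum()
    p = np.clip(p, eps, None)       # keep logs finite on empty bins
    q = np.clip(q, eps, None)
    return float(((p - q) * np.log(p / q)).sum())
```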

13
Time To Collision
  • TTC: the time taken by the camera to reach the
    surface being viewed.
  • Brightness constancy assumed
  • Camera moving such that the optical axis is
    perpendicular to a planar surface
  • No calibration, no tracking.
  • E_x and E_y: spatial image brightness derivatives
  • E_t: temporal image derivative
  • G = xE_x + yE_y

B. K. Horn, Y. Fang, and I. Masaki. Time to contact
relative to a planar surface. IEEE Intelligent
Vehicles Symposium, pages 68-74, June 2007.
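A minimal sketch of the estimator from the cited paper: under brightness constancy with a frontal planar surface, each pixel gives G/TTC + E_t ≈ 0, and least squares over the image yields TTC = -Σ G² / Σ (G·E_t). The finite-difference derivatives below are an illustrative choice:

```python
import numpy as np

def time_to_collision(prev: np.ndarray, curr: np.ndarray) -> float:
    """TTC in units of frame intervals (divide by fps for seconds)."""
    f0, f1 = prev.astype(float), curr.astype(float)
    ey, ex = np.gradient((f0 + f1) / 2.0)  # spatial derivatives E_y, E_x
    et = f1 - f0                           # temporal derivative E_t
    h, w = f0.shape
    y, x = np.mgrid[0:h, 0:w]
    x = x - w / 2.0                        # coordinates relative to the
    y = y - h / 2.0                        # image center (principal point)
    g = x * ex + y * ey                    # G = x*E_x + y*E_y
    denom = (g * et).sum()
    if abs(denom) < 1e-9:
        return float("inf")                # no measurable looming
    return -(g * g).sum() / denom
```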
14
Detecting the end of the corridor
[Plots: time-to-collision and Jeffrey divergence over time]
End detected when (J > J_th) AND (TTC < T_min) AND (Entropy < H_low)
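In code the test is a one-liner; the threshold values are placeholders, not the tuned values from the experiments, and the AND combination is read from the slide:

```python
J_TH, T_MIN, H_LOW = 0.5, 30.0, 4.0   # illustrative, not tuned values

def end_of_corridor(j: float, ttc: float, entropy: float) -> bool:
    """End detected when divergence spikes, TTC is short, entropy is low."""
    return (j > J_TH) and (ttc < T_MIN) and (entropy < H_LOW)
```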
15
Algorithm Overview
16
Turning at the end of the corridor
  • Search for lights and high entropy, turning left
    by 90° and then right.
  • Special case: short corridor
  • Lights not visible
  • High entropy detected from -90° to 90°
  • Entropy alone is sufficient for turning

[Figures: no lights, high entropy (two examples)]
17
Autonomous mapping
  • Voronoi-based map
  • Links: free paths, the safest routes to navigate
  • Nodes: landmarks

B. L. Boada, D. Blanco, and L. Moreno. Symbolic
place recognition in Voronoi-based maps by using
hidden Markov models. Journal of Intelligent and
Robotic Systems, 39:173-197, 2004.
18
Landmark metrics
  • Salient locations: regions of distinction
  • Doors, water fountains, hallways, fire
    extinguishers, and so on
  • Distinct blob: a region of high entropy and
    high relative entropy compared to the previous frame
  • Only one-sixth of the image (on the left and
    right) is considered for landmark detection

19
Joint Probability Density (JPD)
  • The JPD is a combination of two measures
  • X: the entropy of the current image
  • Y: the relative entropy of the current image
    with respect to the previous image (Jeffrey
    divergence)
  • Plotted as a function of time or frame number.

[Plot: spatial and temporal saliency as a function of frame number]
20
Landmarks
Local maxima: left landmarks
Local maxima: right landmarks
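A sketch of landmark selection as local maxima of a per-frame saliency signal. The slides state only that the JPD combines the two measures, so the product combination and the window size below are assumptions:

```python
import numpy as np

def landmark_frames(entropy: np.ndarray, jeffrey: np.ndarray,
                    half_window: int = 5) -> list:
    """Frame indices where the combined saliency peaks locally."""
    saliency = entropy * jeffrey   # assumed combination of X and Y
    peaks = []
    for i in range(half_window, len(saliency) - half_window):
        window = saliency[i - half_window : i + half_window + 1]
        if saliency[i] == window.max() and saliency[i] > window.mean():
            peaks.append(i)        # local maximum -> candidate landmark
    return peaks
```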
21
Experimental Setup
  • ActivMedia Pioneer P3-AT robot with a single
    forward-facing Logitech camera
  • Programming was done in VC++ with the Blepo
    computer vision library.
  • Experiments were conducted in Riggs, EIB, and Lowry

22
Experimental Results: Navigation in Riggs
[Figures: Floor 1, Basement, Floor 3, Floor 2]
23
Navigation performance
  • Success: driving from one end of the corridor to
    the other, with manual start and stop
  • Same initial conditions
  • No dynamic obstacles
  • The same thresholds were used for Jeffrey
    divergence, entropy, and TTC on all floors.

24
Experimental Results: Mapping
Floor 3
25
Floor 3 Left Landmarks
26
Floor 3 Right Landmarks
27
Experimental Results: Mapping
Basement
Floor 1
Floor 2
Floor 3
28
Mapping Performance
[Table: per floor and per side (left/right): number of
landmarks detected, number of landmarks, number of false
landmarks, and missed landmarks]
  • Affected by reflections (poor results on floor 2)
  • Affected by the position of the robot in the
    corridor: if the robot moves close to a wall,
    landmarks may be missed or wrongly placed.
  • Large number of false positives

29
Odometry correction
[Figures: corrected odometry on Floor 1 and Floor 3]
The robot's knowledge of its heading is updated by
θ ← θ + ω_v · t_module
  • Only during the driving mode
  • Only the heading was updated

t_module is the time taken to process one iteration of a
vision module in the driving mode; ω_v is the rotational
velocity output by the vision module.
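The correction itself is a single integration step; variable names are illustrative:

```python
def update_heading(theta: float, omega_v: float, t_module: float) -> float:
    """theta <- theta + omega_v * t_module (applied in driving mode only)."""
    return theta + omega_v * t_module
```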
30
Robustness
Robot starts facing the right wall, floor 3
Robot starts very close to a wall, floor 3.
31
Repeatability
Plot of four trials in floor 3
Long trial of 45 minutes in floor 3 (the path was
measured manually using markers)
32
Computational Efficiency
Frame rate achieved: > 1000 fps, about 3% CPU time
(with a 30 Hz camera)
33
Video Floor 3
34
Limitations
  • Specular reflections affect both navigation and
    mapping
  • Glass doors cause failure to detect corridor
    ends.
  • Cannot navigate when indoor lights are not
    visible from the forward-facing camera

Lowry: glass panel at top right
Riggs basement: double glass door
EIB: ceiling lights not visible from the
forward-facing camera
35
Conclusion and Future work
  • We have developed an algorithm using low-resolution
    vision that can
  • Autonomously navigate an unknown corridor with
    good repeatability
  • Create a Voronoi-based map of the corridor with
    fair detection of landmarks
  • Future work
  • Apply learning to make each of the modules more
    robust
  • Use an alternative to ceiling lights, like
    ceiling symmetry
  • Improve the mapping technique
  • Localization using the map

36
(No Transcript)
37
Appendix
38
TTC Derivation
39
(No Transcript)
40
(No Transcript)
41
(No Transcript)
42
Video Floor 0
43
Video Floor 1
44
Video Floor 3 people