Title: Off-the-Shelf Vision-Based Mobile Robot Sensing
1. Off-the-Shelf Vision-Based Mobile Robot Sensing
- Zhichao Chen
- Advisor: Dr. Stan Birchfield
- Clemson University
2. Vision in Robotics
- A robot has to perceive its surroundings in order to interact with them.
- Vision is promising for several reasons:
- Non-contact (passive) measurement
- Low cost
- Low power
- Rich capturing ability
3. Project Objectives
Path following: Traverse a desired trajectory in both indoor and outdoor environments.
1. Qualitative vision-based mobile robot navigation, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006.
2. Qualitative vision-based path following, IEEE Transactions on Robotics, 25(3):749-754, June 2009.
Person following: Follow a person in a cluttered indoor environment.
Person Following with a Mobile Robot Using Binocular Feature-Based Tracking, Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007.
Door detection: Build a semantic map of the locations of doors as the robot drives down a corridor.
Visual Detection of Lintel-Occluded Doors from a Single Camera, IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.
4. Motivation for Path Following
- Goal: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments
- Applications: courier, delivery, tour guide, and scout robots
- Previous approaches:
- Image Jacobian [Burschka and Hager 2001]
- Homography [Sagues and Guerrero 2005]
- Homography (flat ground plane) [Liang and Pears 2002]
- Man-made environment [Guerrero and Sagues 2001]
- Calibrated camera [Atiya and Hager 1993]
- Stereo cameras [Shimizu and Sato 2000]
- Omni-directional cameras [Adorni et al. 2003]
5. Our Approach to Path Following
- Key intuition: Vastly overdetermined system (dozens of feature points, one control decision)
- Key result: Simple control algorithm
- Teach/replay approach using sparse feature points
- Single, off-the-shelf camera
- No calibration of camera or lens
- Easy to implement (no homographies or Jacobians)
6. Preview of Results
[Figure: milestone image, current image, top-down view, and overview panels]
7. Tracking Feature Points
- Kanade-Lucas-Tomasi (KLT) feature tracker
- Automatically selects features using the eigenvalues of the 2x2 gradient covariance matrix
- Automatically tracks features by minimizing the sum of squared differences (SSD) between consecutive gray-level image frames
- Augmented with gain and bias to handle lighting changes
- Open-source implementation: http://www.ces.clemson.edu/~stb/klt
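A minimal sketch of this select-and-track loop, using OpenCV's KLT-style routines as a stand-in for the library above (the parameter values are illustrative assumptions, and OpenCV's tracker does not include the gain-and-bias augmentation):

import cv2

def select_features(gray, max_corners=200):
    # Shi-Tomasi selection: keeps corners whose smaller eigenvalue of the
    # 2x2 gradient covariance matrix is large.
    return cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7)

def track_features(prev_gray, gray, pts):
    # Pyramidal Lucas-Kanade: finds each feature's displacement by
    # minimizing the SSD between windows in consecutive frames.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                                  winSize=(15, 15), maxLevel=3)
    good = status.ravel() == 1
    return pts[good], new_pts[good]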
8. Teach-Replay
Teaching phase: the robot detects features at the start and tracks them as it drives to the destination.
Replay phase: the robot tracks the features and compares their current coordinates with those stored during teaching.
9. Qualitative Decision Rule
[Figure: a landmark feature projected onto the image plane, with the robot at the goal defining uGoal, uCurrent, and the funnel lane]
- No evidence: go straight
- Feature is to the right (uCurrent > uGoal): turn right
- Feature has changed sides (sign(uCurrent) ≠ sign(uGoal)): turn left
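A minimal sketch of this per-feature rule, written for the case drawn on the slide (goal coordinate on the right half of the image; the left-side case is the mirror image). The u values are signed horizontal coordinates with the origin at the image center:

def feature_vote(u_current, u_goal):
    if u_current * u_goal < 0:     # sign(u_current) != sign(u_goal)
        return "turn left"         # feature has changed sides
    if u_current > u_goal:
        return "turn right"        # feature is to the right of its funnel
    return "go straight"           # no evidence: inside the funnel lane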
10. The Funnel Lane at an Angle
[Figure: the landmark feature, image plane, and funnel lane, with the robot approaching the goal at an angle]
- No evidence: go straight
- Feature is to the right: turn right
- Side change: turn left
11. A Simplified Example
[Figure: a single landmark feature and its funnel lane at successive robot positions; the robot goes straight while inside the lane, turns right or left when the rule fires, and goes straight again once back inside]
12. The Funnel Lane Created by Multiple Feature Points
[Figure: the combined funnel lane is the intersection of the funnel lanes of landmarks 1-3]
- Feature is to the right: turn right
- No evidence: do not turn
- Side change: turn left
13. Qualitative Control Algorithm
The funnel constraints on each feature determine the desired heading, which is computed from f(uC, uD), where f is the signed distance between uC (the feature's current coordinate) and uD (its coordinate at the goal).
14. Incorporating Odometry
The desired heading combines the desired heading from odometry, theta_o, with the desired heading theta_i from the i-th feature point, where N is the number of features; e.g., theta_D = (theta_o + sum_{i=1..N} theta_i) / (N + 1).
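A sketch of this fusion, assuming the equal-weight average suggested by the labels above (the exact weighting in the paper may differ):

def desired_heading(theta_odom, theta_features):
    # theta_features: desired headings, one per tracked feature (N of them)
    return (theta_odom + sum(theta_features)) / (1 + len(theta_features))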
15Overcoming Practical Difficulties
To deal with rough terrain Prior to comparison,
feature coordinates are warped to compensate for
a non-zero roll angle about the optical axis by
applying the RANSAC algorithm.
To avoid obstacles The robot detects and avoids
an obstacle by sonar, and the odometry enables
the robot to roughly return to the path. Then
the robot converges to the path using both
odometry and vision.
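A hedged sketch of the roll-compensation step: estimate a single roll angle about the optical axis from matched feature pairs with RANSAC, then rotate the current coordinates back before the funnel-lane comparison. The tolerance and iteration count are illustrative assumptions:

import random
import numpy as np

def estimate_roll(pts_a, pts_b, iters=100, tol=2.0):
    # pts_a, pts_b: Nx2 arrays of matched (u, v) coordinates,
    # with the image center at the origin
    best_angle, best_inliers = 0.0, 0
    for _ in range(iters):
        i = random.randrange(len(pts_a))
        # angle that rotates pts_a[i] onto the direction of pts_b[i]
        angle = (np.arctan2(pts_b[i, 1], pts_b[i, 0]) -
                 np.arctan2(pts_a[i, 1], pts_a[i, 0]))
        c, s = np.cos(angle), np.sin(angle)
        rotated = pts_a @ np.array([[c, -s], [s, c]]).T
        inliers = int(np.sum(np.linalg.norm(rotated - pts_b, axis=1) < tol))
        if inliers > best_inliers:
            best_angle, best_inliers = angle, inliers
    return best_angle   # warp: rotate current features by -best_angle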
16. Experimental Results
[Figure: milestone image, current image, top-down view, and overview panels]
Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot
17. Experimental Results
[Figure: milestone image, current image, top-down view, and overview panels]
Videos available at http://www.ces.clemson.edu/~stb/research/mobile_robot
18. Experimental Results: Rough Terrain
19. Experimental Results: Avoiding an Obstacle
20. Experimental Results
Indoor: Imaging Source Firewire camera
Outdoor: Logitech Pro 4000 webcam
21. Project Objectives
Path following: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments.
1. Qualitative vision-based mobile robot navigation, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006.
2. Qualitative vision-based path following, IEEE Transactions on Robotics, 25(3):749-754, June 2009.
Person following: Enable a mobile robot to follow a person in a cluttered indoor environment by vision.
Person Following with a Mobile Robot Using Binocular Feature-Based Tracking, Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007.
Door detection: Detect doors as the robot drives down a corridor.
Visual Detection of Lintel-Occluded Doors from a Single Camera, IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.
22. Motivation
- Goal: Enable a mobile robot to follow a person in a cluttered indoor environment by vision.
- Previous approaches:
- Appearance properties (color, edges): Sidenbladh et al. 1999, Tarokh and Ferrari 2003, Kwon et al. 2005. Require the person to have a different color from the background or to face the camera; sensitive to lighting changes.
- Optical flow: Piaggio et al. 1998, Chivilò et al. 2004. Drifts as the person moves with out-of-plane rotation.
- Dense stereo and odometry: Beymer and Konolige 2001. Difficult to predict the movement of the robot (uneven surfaces, slippage in the wheels).
23. Our Approach
- Algorithm: sparse stereo based on Lucas-Kanade feature tracking.
- Handles:
- Dynamic backgrounds.
- Out-of-plane rotation.
- Similar disparity between the person and background.
- Similar color between the person and background.
24. System overview
25. Detect 3D Features of the Scene (cont.)
- Features are selected in the left image I_L and matched in the right image I_R (see the sketch below).
[Figure: left image and right image with matched features]
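A sketch of this sparse-stereo step, with OpenCV's Lucas-Kanade matcher standing in for the tracker used in the thesis; the disparity of each feature is its horizontal offset between the two views:

import cv2

def sparse_stereo(left_gray, right_gray):
    pts_l = cv2.goodFeaturesToTrack(left_gray, 300, 0.01, 7)
    pts_r, status, _ = cv2.calcOpticalFlowPyrLK(left_gray, right_gray,
                                                pts_l, None)
    ok = status.ravel() == 1
    pts_l = pts_l[ok].reshape(-1, 2)
    pts_r = pts_r[ok].reshape(-1, 2)
    disparity = pts_l[:, 0] - pts_r[:, 0]   # horizontal offset per feature
    return pts_l, disparity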
26. System overview
27. Detecting Faces
- The Viola-Jones frontal face detector is applied.
- This detector is used both to initialize the system and to enhance robustness when the person is facing the camera.
Note: the face detector is not necessary in our system.
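A sketch of this step with OpenCV's stock Viola-Jones cascade; the bundled cascade file is an assumption, not necessarily the model used in the thesis:

import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray):
    # Returns (x, y, w, h) boxes, used to (re)initialize the tracked person.
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)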
28. Overview of Removing Background
1) Using the known disparity of the person in the previous image frame.
2) Using the estimated motion of the background.
3) Using the estimated motion of the person.
29. Remove Background, Step 1: Using the Known Disparity
- Discard features for which |d(t) - d_person(t-1)| > tau, where d_person(t-1) is the known disparity of the person in the previous frame, d(t) is the disparity of a feature at time t, and tau is a threshold.
[Figure: original features]
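A minimal sketch of this test; eps is an assumed tolerance on the disparity difference:

def filter_by_disparity(features, disparities, person_disparity, eps=2.0):
    # Keep only features whose disparity is close to the person's
    # disparity from the previous frame.
    return [f for f, d in zip(features, disparities)
            if abs(d - person_disparity) <= eps]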
30. Remove Background, Step 2: Using Background Motion
- Estimate the motion of the background by computing a 4x4 affine transformation matrix H between the two image frames at times t and t+1.
- The random sample consensus (RANSAC) algorithm is used to yield the dominant motion.
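A sketch of this step using OpenCV's RANSAC-based 3-D affine estimator as a stand-in; it returns the 3x4 top of the slide's 4x4 homogeneous matrix, and its inlier set corresponds to the dominant (background) motion. The threshold is an assumed value:

import cv2
import numpy as np

def background_motion(pts3d_t, pts3d_t1):
    # pts3d_t, pts3d_t1: Nx3 arrays of triangulated feature positions
    # at times t and t+1
    _, H, inliers = cv2.estimateAffine3D(pts3d_t.astype(np.float64),
                                         pts3d_t1.astype(np.float64),
                                         ransacThreshold=0.02)
    return H, inliers.ravel().astype(bool)   # inliers = background features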
31. Remove Background, Step 3: Using Person Motion
- Similar to step 2, the motion model of the person is calculated.
- The person group should be the biggest group.
- The centroid of the person group should be close to the previous location of the person.
[Figure: foreground features after step 2 and after step 3]
32. System overview
33. System overview
34. Experimental Results
35. Video
36. Project Objectives
Path following: Enable a mobile robot to follow a desired trajectory in both indoor and outdoor environments.
1. Qualitative vision-based mobile robot navigation, Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2006.
2. Qualitative vision-based path following, IEEE Transactions on Robotics, 25(3):749-754, June 2009.
Person following: Enable a mobile robot to follow a person in a cluttered indoor environment by vision.
Person Following with a Mobile Robot Using Binocular Feature-Based Tracking, Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), 2007.
Door detection: Detect doors as the robot drives down a corridor.
Visual Detection of Lintel-Occluded Doors from a Single Camera, IEEE Computer Society Workshop on Visual Localization for Mobile Platforms (in association with CVPR), 2008.
37. Motivation for Door Detection
[Figure: a topological map and a metric map]
Either way, doors are semantically meaningful landmarks.
38Previous Approaches to Detecting Doors
Range-based approaches sonar Stoeter et
al.1995, stereo Kim et al. 1994, laser
Anguelov et al. 2004 Vision-based approaches
fuzzy logic Munoz-Salinas et al. 2004
color segmentation Rous et al. 2005
neural network Cicirelli et al 2003
- Limitations
- require different colors for doors and walls
- simplified environment (untextured floor, no
reflections) - limited viewing angle
- high computational load
- assume lintel (top part) visible
39. What is Lintel-Occluded?
- Lintel-occluded: in post-and-lintel architecture, the lintel (top part) is out of view because the camera is low to the ground and cannot point upward due to obstacles.
[Figure: lintel and posts of a door frame]
40. Our Approach
Assumptions: both door posts are visible, the posts appear nearly vertical, and the door is at least a certain width.
Key idea: multiple cues are necessary for robustness (pose, lighting, ...).
41. Video
42Pairs of Vertical Lines
vertical lines
detected lines
Canny edges
non-vertical lines
- Edges detected by Canny
- Line segments detected by modified
Douglas-Peucker algorithm - Clean up (merge lines across small gaps, discard
short lines) - Separate vertical and non-vertical lines
- Door candidates given by all the vertical line
pairs whose spacing is within a given range
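A sketch of this candidate-generation stage; probabilistic Hough stands in for the modified Douglas-Peucker segment finder, and the width range is an assumed placeholder:

import cv2
import numpy as np

def door_candidates(gray, min_px=40, max_px=200):
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                           minLineLength=60, maxLineGap=5)
    vertical = []
    for x1, y1, x2, y2 in (s[0] for s in (segs if segs is not None else [])):
        if abs(x2 - x1) < 0.1 * abs(y2 - y1):   # nearly vertical segment
            vertical.append((x1 + x2) / 2.0)
    vertical.sort()
    # every pair of vertical lines with a plausible spacing is a candidate
    return [(u, v) for i, u in enumerate(vertical)
            for v in vertical[i + 1:] if min_px <= v - u <= max_px]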
43. Homography
[Figure: a homography relates coordinates (x, y) in the image to coordinates (x', y') in the world]
44. Prior Model Features: Width and Height
[Figure: door width and height measured relative to the principal point]
45. An Example
As the door turns, the bottom corner traces an ellipse (the projective transformation of a circle is an ellipse), but the ellipse is not horizontal.
46. Data Model (Posterior) Features
- Image gradient along edges (g1)
- Placement of top and bottom edges (g2, g3)
- Color (g4)
- Texture (g5)
- Kick plate (g6)
- Vanishing point (g7)
- and two more (g8, g9, next slides)
47. Data Model Features (cont.)
Bottom gap (g8): measured from the intensity along a line beneath the door. The gap appears darker than the floor when the light is off (positive response) and brighter when the light is on (negative response); no gap yields no response.
48. Data Model Features (cont.)
Concavity (g9): a door is typically slightly recessed, so the bottom door edge, the vertical door lines L_left and L_right, and the extension of the wall/floor intersection line form a "slim U" on the floor.
49. Two Methods to Detect Doors
1. AdaBoost: training images are used to learn the weights of the features, yielding a strong classifier.
2. Bayesian formulation (yields better results).
50. Bayesian Formulation
Taking the log likelihood, the posterior separates into a data model term and a prior model term: log p(door | image) = log p(image | door) + log p(door) + const.
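A hedged sketch of scoring a door candidate under this formulation; the feature functions g_i and their weights are placeholders, not the trained values:

import math

def log_posterior(candidate, data_terms, prior_terms, weights):
    # data_terms: the likelihood functions g1..g9 above, each mapping a
    # candidate to a value in (0, 1]; prior_terms: width/height priors.
    log_data = sum(w * math.log(g(candidate))
                   for w, g in zip(weights, data_terms))
    log_prior = sum(math.log(p(candidate)) for p in prior_terms)
    return log_data + log_prior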
51MCMC and DDMCMC
- Markov Chain Monte Carlo (MCMC) is used here to
maximize probability to detect door (like random
walk through state space of doors) - Data driven MCMC (DDMCMC) is used to speed up
computation - doors appear more frequently at the position
close to the vertical lines - the top of the door is often occluded or a
horizontal line closest to the top - the bottom of the door is often close to the
wall/floor boundary.
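A sketch of the DDMCMC search described above: a Metropolis random walk over door states whose proposals are concentrated near the cues just listed (vertical lines, top horizontal line, wall/floor boundary); propose and log_post are assumed interfaces:

import math
import random

def mcmc_door_search(initial, propose, log_post, iters=1000):
    state, lp = initial, log_post(initial)
    best, best_lp = state, lp
    for _ in range(iters):
        cand = propose(state)          # data-driven proposal
        lp_cand = log_post(cand)
        # Metropolis acceptance: always accept uphill, sometimes downhill
        if random.random() < math.exp(min(0.0, lp_cand - lp)):
            state, lp = cand, lp_cand
            if lp > best_lp:
                best, best_lp = state, lp
    return best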
52. Experimental Results: Similar or Different Door/Wall Color
53. Experimental Results: High Reflection / Textured Floors
54. Experimental Results: Different Viewpoints
55. Experimental Results: Cluttered Environments
56. Results
- 25 different buildings
- 600 images (100 training, 500 testing)
- 91.1% accuracy with 0.09 false positives per image
- Speed: 5 fps on a 1.6 GHz processor (unoptimized)
57. False Negatives and Positives
[Figure: failure cases: strong reflection (concavity and bottom-gap tests fail), distracting reflections, two vertical lines unavailable, and concavity erroneously detected]
58. Navigation in a Corridor
- Doors were detected and tracked from frame to frame.
- False positives are discarded if doors are not repeatedly detected.
59. Video
60. Conclusion
- Path following
- Teach-replay, comparing image coordinates of feature points (no calibration)
- Qualitative decision rule (no Jacobians or homographies)
- Person following
- Detects and matches feature points between a stereo pair of images and between successive images
- RANSAC-based procedure to estimate the motion of each region
- Does not require the person to wear a color different from the background
- Door detection
- Integrates a variety of door features
- AdaBoost training and DDMCMC
61. Future Work
- Path following
- Incorporate higher-level scene knowledge to enable obstacle avoidance and terrain characterization
- Connect multiple teaching paths in a graph-based framework to enable autonomous navigation between arbitrary points
- Person following
- Fuse the information with additional appearance-based cues (template or edges)
- Integrate with an EM tracking algorithm
- Door detection
- Calibrate the camera to enable pose and distance measurements, facilitating the building of a geometric map
- Integrate into a complete navigation system able to drive down a corridor and turn into a specified room