Title: Real-Time Vision on a Mobile Robot Platform
1 Real-Time Vision on a Mobile Robot Platform
- Mohan Sridharan
- Joint work with Peter Stone
- The University of Texas at Austin
- smohan_at_ece.utexas.edu
2 Motivation
- Computer vision is challenging.
- State-of-the-art approaches are often not applicable to real systems.
- Computational and/or memory constraints.
- Focus: efficient algorithms that work in real time on mobile robots.
3 Overview
- Complete vision system developed on a mobile robot.
- Challenges to address:
- Color Segmentation.
- Object recognition.
- Line detection.
- Illumination invariance.
- On-board processing: computational and memory constraints.
4 Test Platform: Sony ERS-7
- 20 degrees of freedom.
- Primary sensor: CMOS camera.
- IR, touch sensors, accelerometers.
- Wireless LAN.
- Soccer on a 4.5 x 3 m field; play humans by 2050!
5 The Aibo Vision System: I/O
- Input: image pixels in YCbCr color space.
- Frame rate: 30 fps.
- Resolution: 208 x 160.
- Output: distances and angles to objects.
- Constraints:
- On-board processing: 576 MHz.
- Rapidly varying camera positions.
6 Robot's view of the world
7 Vision System Flowchart
8 Vision System Phase 1: Segmentation.
- Color segmentation:
- Hand-label discrete colors.
- Intermediate color maps.
- NNr weighted average yields the master color cube (sketched below).
- 128 x 128 x 128 color map (~2 MB).
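As a rough illustration of the training step, the sketch below fills a subsampled 128 x 128 x 128 lookup cube from hand-labeled pixels. A per-cell majority vote stands in for the slide's NNr weighted averaging of intermediate maps, and the label count, label values and names are illustrative assumptions, not the actual Austin Villa code.

```python
# Sketch: build a subsampled YCbCr color cube from hand-labeled training pixels.
# A per-cell majority vote replaces the NNr weighted average of intermediate
# maps; label 0 is reserved for "unknown/background" in this sketch.
import numpy as np

NUM_COLORS = 10   # assumed number of discrete colors (ball, field, white, ...)
CUBE = 128        # 128 x 128 x 128 cells, one byte per cell (~2 MB at runtime)

def build_color_cube(labeled_pixels):
    """labeled_pixels: iterable of (y, cb, cr, label) with 8-bit channel values."""
    votes = np.zeros((CUBE, CUBE, CUBE, NUM_COLORS), dtype=np.uint16)
    for y, cb, cr, label in labeled_pixels:
        votes[y >> 1, cb >> 1, cr >> 1, label] += 1   # subsample 256 levels to 128
    cube = votes.argmax(axis=3).astype(np.uint8)      # most frequent label per cell
    cube[votes.sum(axis=3) == 0] = 0                  # cells with no votes stay unknown
    return cube
```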
9 Vision System Phase 1: Segmentation.
- Use a perceptually motivated color space: LAB.
- Offline training in LAB generates an equivalent YCbCr cube.
10 Vision System Phase 1: Segmentation.
11 Vision System Phase 1: Segmentation.
- Use a perceptually motivated color space: LAB.
- Offline training in LAB generates an equivalent YCbCr cube.
- Reduce the problem to a table lookup (see the sketch below).
- Robust performance with shadows, highlights.
- Segmentation accuracy: YCbCr 82%, LAB 91%.
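At runtime, segmentation then reduces to one memory read per pixel. A minimal sketch, assuming the trained cube from above and the Aibo's 208 x 160 frames:

```python
# Sketch: color segmentation as a table lookup into the trained 128^3 cube.
import numpy as np

def segment(image_ycbcr, cube):
    """image_ycbcr: H x W x 3 uint8 frame; returns an H x W array of color labels."""
    y  = image_ycbcr[..., 0] >> 1      # map 0..255 down to the cube's 0..127 range
    cb = image_ycbcr[..., 1] >> 1
    cr = image_ycbcr[..., 2] >> 1
    return cube[y, cb, cr]             # one lookup per pixel

# Example on a random 208 x 160 frame (the Aibo's resolution):
cube = np.zeros((128, 128, 128), dtype=np.uint8)
frame = np.random.randint(0, 256, (160, 208, 3), dtype=np.uint8)
labels = segment(frame, cube)
```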
12 Sample Images: Color Segmentation.
13 Sample Video: Color Segmentation.
14 Some Problems
- Sensitive to illumination.
- Frequent re-training.
- Robot needs to detect and adapt to change.
- Off-board color labeling is time consuming.
- Autonomous color learning is possible.
15 Vision System Phase 2: Blobs.
- Run-length encoding (sketched below):
- Starting point, length in pixels.
- Region Merging.
- Combine run-lengths of same color.
- Maintain properties: pixels, runs.
- Bounding boxes.
- Abstract representation: four corners.
- Maintains properties for further analysis.
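A minimal sketch of this phase, assuming the label image produced by segmentation. The greedy one-pass merge below only illustrates the data flow; a real implementation would merge runs more carefully (e.g. with union-find over adjacent rows).

```python
# Sketch: run-length encode a segmented image, then merge runs into blobs that
# carry a color, a pixel count and a bounding box.
import numpy as np

def run_length_encode(labels):
    """labels: H x W label image. Returns runs as (row, start, length, color)."""
    runs = []
    for row in range(labels.shape[0]):
        line, col = labels[row], 0
        while col < len(line):
            start, color = col, line[col]
            while col < len(line) and line[col] == color:
                col += 1
            runs.append((row, start, col - start, int(color)))
    return runs

def merge_runs(runs):
    """Greedily merge same-colored runs that touch vertically into blobs."""
    blobs = []   # each blob: color, pixel count, bounding-box corners
    for row, start, length, color in runs:
        end = start + length - 1
        for b in blobs:
            if (b["color"] == color and b["ymax"] >= row - 1
                    and start <= b["xmax"] and end >= b["xmin"]):
                b["xmin"] = min(b["xmin"], start)
                b["xmax"] = max(b["xmax"], end)
                b["ymax"] = row
                b["pixels"] += length
                break
        else:
            blobs.append({"color": color, "pixels": length,
                          "xmin": start, "xmax": end, "ymin": row, "ymax": row})
    return blobs
```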
16 Sample Images: Blob Detection.
17 Vision System Phase 2: Objects.
- Object recognition:
- Heuristics on size, shape and color.
- Previously stored bounding box properties.
- Domain knowledge.
- Remove spurious blobs.
- Distances and angles from known geometry (see the sketch below).
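As an illustration of the known-geometry step, the sketch below converts a ball bounding box into a distance and bearing with a pinhole camera model. The focal length and ball diameter are placeholder values, not the ERS-7's actual calibration.

```python
# Sketch: distance and bearing to the ball from its bounding box, using the
# known ball size and a pinhole model. Constants are illustrative placeholders.
import math

FOCAL_LENGTH_PX = 200.0    # assumed focal length in pixels
BALL_DIAMETER_M = 0.08     # assumed real ball diameter in metres
IMAGE_WIDTH = 208

def ball_distance_and_bearing(xmin, xmax, ymin, ymax):
    pixel_width = max(xmax - xmin + 1, 1)
    distance = BALL_DIAMETER_M * FOCAL_LENGTH_PX / pixel_width   # similar triangles
    center_x = 0.5 * (xmin + xmax)
    bearing = math.atan2(center_x - IMAGE_WIDTH / 2.0, FOCAL_LENGTH_PX)
    return distance, bearing   # metres, radians (positive = to the right)
```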
18 Sample Images: Objects.
20 Vision System Phase 3: Lines.
- Popular approaches (Hough transform, convolution kernels) are computationally expensive.
- Domain knowledge.
- Scan lines: green-white transitions give candidate edge pixels (see the sketch below).
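A minimal sketch of the scan-line step, assuming the segmented label image and arbitrary label values for green and white:

```python
# Sketch: scan image columns bottom-up for green-to-white transitions in the
# segmented image; the resulting pixels are candidate field-line edges.
GREEN, WHITE = 1, 2        # assumed color labels from segmentation

def candidate_edge_pixels(labels, step=4):
    """labels: H x W array of color labels; scan every 'step'-th column."""
    edges = []
    height, width = labels.shape
    for col in range(0, width, step):
        for row in range(height - 1, 0, -1):          # bottom of image upwards
            if labels[row, col] == GREEN and labels[row - 1, col] == WHITE:
                edges.append((row - 1, col))           # candidate line-edge pixel
    return edges
```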
21 Vision System Phase 3: Lines.
- Incremental least-squares fit for lines (sketched below).
- Efficient and easy to implement.
- Reasonably robust to noise.
- Lines provide orientation information.
- Line Intersections can be used as markers.
- Inputs to localization.
- Ambiguity removed through prior position knowledge.
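The incremental fit can be kept to a handful of running sums, so no point list needs to be stored. A minimal sketch:

```python
# Sketch: incremental least-squares line fit. Running sums are updated as each
# candidate edge pixel arrives; the line can be queried at any time.
class IncrementalLineFit:
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        self.n += 1
        self.sx += x; self.sy += y
        self.sxx += x * x; self.sxy += x * y

    def line(self):
        """Return (slope, intercept) of y = slope * x + intercept."""
        denom = self.n * self.sxx - self.sx ** 2
        if self.n < 2 or abs(denom) < 1e-9:
            return None                          # degenerate or vertical line
        slope = (self.n * self.sxy - self.sx * self.sy) / denom
        intercept = (self.sy - slope * self.sx) / self.n
        return slope, intercept
```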
22 Sample Images: Objects and Lines.
23 Some Problems
- System needs to be re-calibrated:
- Illumination changes.
- Natural light variations (day/night).
- Re-calibration is very time consuming.
- More than an hour spent each time.
- Cannot achieve the overall goal: play humans.
- That is not happening anytime soon, but still...
24 Illumination Sensitivity: Samples.
- Trained under one illumination.
- Tested under a different illumination.
25 Illumination Sensitivity: Movie
26 Illumination Invariance: Approach.
- Three discrete illuminations: bright, intermediate, dark.
- Training:
- Performed offline.
- Color map for each illumination.
- Normalized RGB (rgb; only r and g used) sample distributions for each illumination (see the sketch below).
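A minimal sketch of the training-time distribution, assuming RGB input and an arbitrary bin count; one such histogram would be stored per illumination condition.

```python
# Sketch: build a normalized-rg sample distribution (2-D histogram) from an
# RGB image. The bin count is an arbitrary choice for illustration.
import numpy as np

def rg_histogram(image_rgb, bins=64):
    """image_rgb: H x W x 3 uint8; returns a normalized 2-D histogram over (r, g)."""
    rgb = image_rgb.astype(np.float64) + 1e-6        # avoid division by zero
    total = rgb.sum(axis=2)
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    hist, _, _ = np.histogram2d(r.ravel(), g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()                         # probability distribution
```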
27 Illumination Invariance: Training.
- Bright illumination: color map.
28 Illumination Invariance: Training.
- Bright illumination: color map and distributions.
29 Illumination Invariance: Training.
30 Illumination Invariance: Testing.
31 Illumination Invariance: Testing.
32 Illumination Invariance: Testing.
33 Illumination Invariance: Testing.
34 Illumination Invariance: Testing.
- Testing: KL-divergence as a distance measure (see the sketch below).
- Robust to artifacts.
- Performed on-board the robot, about once a second.
- Parameter estimation described in the paper.
- Works for conditions not trained for.
- Paper has numerical results.
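A minimal sketch of the on-board test, assuming the rg histograms from the training sketch above; the parameter estimation and thresholding described in the paper are omitted.

```python
# Sketch: compare the current frame's rg distribution to the stored distribution
# of each trained illumination with KL-divergence, and pick the closest one
# (whose color map would then be used).
import numpy as np

def kl_divergence(p, q, eps=1e-9):
    """KL(p || q) for two histograms of the same shape."""
    p = p.ravel() + eps
    q = q.ravel() + eps
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def closest_illumination(current_hist, trained_hists):
    """trained_hists: dict like {'bright': h1, 'intermediate': h2, 'dark': h3}."""
    return min(trained_hists,
               key=lambda name: kl_divergence(current_hist, trained_hists[name]))
```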
35 Adapting to Illumination Changes: Video
36 Some Related Work
- CMU vision system: basic implementation.
- James Bruce et al., IROS 2000
- German Team vision system: scan lines.
- Röfer et al., RoboCup 2003
- Mean-shift color segmentation.
- Comaniciu and Meer, PAMI 2002
37 Conclusions
- A complete real-time vision system with on-board processing.
- Implemented new/modified versions of vision algorithms.
- Good performance on challenging problems: segmentation, object recognition and illumination invariance.
38 Future Work
- Autonomous color learning.
- AAAI-05 paper available online.
- Working in more general environments, outside the lab.
- Automatic detection of and adaptation to illumination changes.
- Still a long way to go to play humans.
39 Autonomous Color Learning: Video
- More videos online:
- www.cs.utexas.edu/AustinVilla/
40 THAT'S ALL FOLKS!
www.cs.utexas.edu/AustinVilla/
42 Question 1: So, what is new?
- Robust color space for segmentation.
- Domain-specific object recognition and line detection.
- Towards illumination invariance.
- Complete vision system: closed loop.
- Admittedly, cannot compare with other teams, but overall performance was good at competitions.
43 Vision 1: Why LAB?
- Robust color space for segmentation.
- Perceptually motivated.
- Tackles minor changes: shadows, highlights.
- Used in robot rescue.
44 Vision 2: Edge Pixels and Least Squares?
- Conventional approaches are time consuming.
- Scan lines are faster.
- Reduces the colors needing bounding boxes.
- Least squares is easier to implement and fast too.
- Admittedly, have not compared with any other method.
45 Vision 3: Normalized RGB?
- YCbCr separates luminance but does not work well in practice on the Aibo.
- Normalized RGB (rgb):
- Reduces the number of dimensions and storage.
- More robust to minor variations.
- Admittedly, have compared only with YCbCr; LAB works but needs more storage and calculations.
46 Illumination Invariance: Training.
47 Illumination Invariance: Testing.