Title: Adaptive Intelligent Mobile Robots
1. Adaptive Intelligent Mobile Robots
- Kevin Murphy
- PI Leslie Pack Kaelbling
- Artificial Intelligence Laboratory
- MIT
2. Outline
- Towards a mobile vision system that knows where it is and what it is looking at
- Brief overview of other projects
3. Context-based vision system for place and object recognition
- Antonio Torralba
- Kevin Murphy
- Bill Freeman
- Mark Rubin
- Submitted to ICCV 03
4. Object out of context
5. Object in context
6. What is context?
- What kind of location? (indoors/outdoors, office/corridor)
- Which location? (Kevin's office, Leslie's office)
- Viewing direction (facing the window)
- Global scene factors (illumination)
- Current activity (moving, sitting, talking)
7. Wearable test-bed
8. System diagram
9. Computing the features
10. Low-dimensional representation for scenes
- Compute image intensity (no color)
- Pipe the image through a steerable filter bank (6 orientations, 4 scales)
- Compute the magnitude of the response
- Downsample to 4 x 4
- PCA to 80 dimensions (see the sketch after this list)
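A minimal sketch of this feature pipeline in Python/NumPy. It uses Gabor filters as a rough stand-in for the steerable filter bank on the slide; the filter sizes, frequencies, and FFT-based convolution are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def gabor_bank(size=32, n_orient=6, n_scale=4):
    """Simple Gabor filter bank: a stand-in for the steerable filters (6 orientations, 4 scales)."""
    ys, xs = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    filters = []
    for s in range(n_scale):
        sigma = 2.0 * (2 ** s)          # spatial extent grows with scale (assumed values)
        freq = 0.25 / (2 ** s)          # centre frequency shrinks with scale (assumed values)
        for o in range(n_orient):
            theta = np.pi * o / n_orient
            u = xs * np.cos(theta) + ys * np.sin(theta)
            v = -xs * np.sin(theta) + ys * np.cos(theta)
            g = np.exp(-(u ** 2 + v ** 2) / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * u)
            filters.append(g)
    return filters                      # 6 orientations x 4 scales = 24 filters

def gist_feature(gray, filters, grid=4):
    """Filter the intensity image, take response magnitudes, average over a 4x4 grid."""
    H, W = gray.shape
    F = np.fft.fft2(gray)
    feats = []
    for filt in filters:
        resp = np.fft.ifft2(F * np.fft.fft2(filt, s=(H, W)))   # circular convolution
        mag = np.abs(resp)
        for i in range(grid):
            for j in range(grid):
                cell = mag[i * H // grid:(i + 1) * H // grid,
                           j * W // grid:(j + 1) * W // grid]
                feats.append(cell.mean())
    return np.array(feats)              # 24 filters x 16 cells = 384 dims before PCA

def pca_project(X, k=80):
    """Project the row vectors of X onto their top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

Stacking the gist_feature outputs of many frames as the rows of X and calling pca_project(X) gives an 80-dimensional representation of each frame.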
11. Visualizing the filter bank output
(Figure: example images and their 80-dimensional representations.)
12. Place recognition system
13. Hidden Markov Model
- Hidden state: location (63 values)
- Observations: visual gist v_t^G ∈ R^80
- Transition model encodes the topology of the environment
- Observation model is a mixture of Gaussians (100 views per place); a filtering sketch follows
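A sketch of the corresponding HMM filtering recursion, assuming a shared spherical variance for the mixture components and a row-stochastic transition matrix built from the environment topology; the names and the variance value are illustrative, not taken from the paper.

```python
import numpy as np

def mog_loglik(v, means, log_w, var):
    """log p(v | place) under a spherical mixture of Gaussians over stored views.
    means: (M, 80) training views for one place; var: shared variance (an assumption here)."""
    d = means.shape[1]
    logp = log_w - 0.5 * ((means - v) ** 2).sum(axis=1) / var \
           - 0.5 * d * np.log(2 * np.pi * var)
    m = logp.max()
    return m + np.log(np.exp(logp - m).sum())      # log-sum-exp over mixture components

def hmm_filter(prior, trans, obs_models, gists, var=1.0):
    """Online place recognition: belief P(Q_t | gists so far) after each frame.
    prior: (63,), trans: (63, 63) row-stochastic, obs_models: list of (means, log_w) per place."""
    belief, out = prior.copy(), []
    for v in gists:                                # v is an 80-dim gist vector
        belief = trans.T @ belief                  # predict: sum over previous locations
        loglik = np.array([mog_loglik(v, m, w, var) for m, w in obs_models])
        belief *= np.exp(loglik - loglik.max())    # update (rescaled for numerical stability)
        belief /= belief.sum()
        out.append(belief.copy())
    return out
```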
14. Place recognition demo
15. Performance on known environment
(Plots: ground truth vs. system estimate for the specific location, the location category, and indoor/outdoor.)
16. Performance on new environment
17. Comparison of features
(Plots: categorization and recognition performance for the different features.)
18. Effect of HMM on recognition
(Plots: results with and without the HMM.)
19. From place to object recognition
20. Object priming
- Predict object properties based on context (top-down signals)
- Visual gist, v_t^G
- Specific location, Q_t
- Kind of location, C_t
- Assume objects are independent conditional on context (written out below)
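One way to write the conditional-independence assumption in the list above, using the slide's notation and letting O_i stand for the presence of object class i (the symbol O_i is introduced here for illustration):

```latex
P(O_1, \dots, O_N \mid v^G_t, Q_t, C_t) \;=\; \prod_{i=1}^{N} P(O_i \mid v^G_t, Q_t, C_t)
```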
21. Predicting object presence
22. ROC curves for object detection based on context alone
23. Predicting object position and scale
24. Predicted segmentation
25. Closing the loop
Integrate local features (bottom-up likelihood) with global features (top-down prior); a sketch of the combination follows.
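A hedged sketch of how such a combination can be written as Bayes' rule, with L standing for the local (bottom-up) measurements and the context variables supplying the prior; the exact form used by the system is not spelled out on the slide.

```latex
P(O_i \mid L, v^G_t, Q_t, C_t) \;\propto\; P(L \mid O_i)\, P(O_i \mid v^G_t, Q_t, C_t)
```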
26. Future work
- Add local features (bottom-up signal) for object detection/localization
- Model dependencies between objects
- Scale up place recognition to campus
- Discriminative feature selection
- Use a head tracker (view angle)
- Recognize movemes (motion clips)
- Online, unsupervised map and object class learning
27. Some other projects
- Automatic topological map building (Temizer)
- Hierarchical POMDPs for multi-scale localization (Theocharous, Murphy)
- Hierarchical abstraction for factored MDPs (Steinkraus)
- Learning object segmentation from video (Ross)
28. Automatic topological map building
- Previous system did offline learning of a topological map from labeled data
- Goal: do online, unsupervised learning
- Rooms (states) are regions for which local visual navigation suffices
29. Hierarchical POMDPs
- Hierarchical model supports more efficient learning, inference (state estimation), and planning
(Figure: vertical and horizontal transitions in the hierarchy; 600 states vs. 1200 states.)
30. Hierarchical abstraction for factored MDPs
- Decompose the domain using different abstractions
- Dynamically adjust levels of abstraction based on the current state and goal
- Make decisions at the highest possible level
(Figure: perception, action, and the current planning problem.)
31. Learning object segmentation from video data
- Videos contain moving objects, which are easy to segment from the background
- Goal: learn a model (MRF) to infer object boundaries in static images