1
Interactive Perception for Cluttered Environments
  • Bryan Willimon
  • Master's Thesis Defense

2
  • Visually-guided Manipulation (Traditional
    Approach): Sense → Plan → Act
  • Manipulation-guided Sensing (Interactive
    Perception): Act → Sense → Plan
3
Previous Related Work on Interactive Perception
  • Segmentation through image differencing
  • Learning about prismatic and revolute joints on
    planar rigid objects

D. Katz and O. Brock, "Manipulating articulated
objects with interactive perception," ICRA 2008.
4
Goal of Interactive Perception
Pile of Stuff → Separate Object → Classify → Learn
5
Our Approach
  • Extraction
    • Graph-based Segmentation
    • Stereo Matching
    • Determining Grasp Point
  • Classification
    • Color Histogram Labeling
    • Skeletonization
    • Monitoring Object Interaction
    • Labeling Revolute Joints using Motion

6
Extraction Process
7
Graph-based Segmentation
  • Separates the image into regions based on
    features of the pixels (e.g., color)
  • Splits the image into foreground and background
  • Classifies as background any pixel that shares a
    color label with a border pixel
  • Subtracts the background to leave only the
    foreground (see the sketch below)

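A minimal sketch of this step, assuming scikit-image's
Felzenszwalb graph-based segmenter (the slides do not
name the implementation) and a hypothetical input file:

    import numpy as np
    from skimage import io
    from skimage.segmentation import felzenszwalb

    # Hypothetical filename standing in for a captured pile image.
    image = io.imread("pile.png")[:, :, :3]

    # Group pixels into color-consistent regions; the scale, sigma,
    # and min_size values here are illustrative, not the thesis's.
    labels = felzenszwalb(image, scale=100, sigma=0.8, min_size=50)

    # Any region whose label appears on the image border is background;
    # subtracting those regions leaves only the foreground.
    border = np.concatenate([labels[0, :], labels[-1, :],
                             labels[:, 0], labels[:, -1]])
    foreground = ~np.isin(labels, np.unique(border))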
8
Stereo Matching
  • Uses two cameras with slightly different
    projections to provide a sense of depth
  • Only depth information from the foreground is
    considered
  • The foreground image from the previous step is
    used as a mask to erase background information
  • Grasping the object on top of the pile minimizes
    disturbance (see the sketch below)

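A sketch of the depth-masking idea using OpenCV block
matching; the stereo algorithm, filenames, and
parameters are assumptions, since the slides only
describe the camera setup:

    import cv2
    import numpy as np

    # Hypothetical rectified frames from the two stereo webcams,
    # plus the foreground mask from the segmentation step.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
    fg = cv2.imread("foreground_mask.png", cv2.IMREAD_GRAYSCALE) > 0

    # Block-matching disparity: nearer surfaces get larger values.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype(np.float32) / 16.0

    # Erase background depth, then take the highest point of the pile.
    disparity[~fg] = 0.0
    top_y, top_x = np.unravel_index(np.argmax(disparity),
                                    disparity.shape)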
9
Determining Grasp Point
  • Calculate the maximum chamfer distance within
    the white area
  • Use the outline of the white area as the seed
    for the chamfering process
  • Using the chamfer distance instead of the
    centroid handles concave objects, where the
    centroid can fall outside the object (see the
    sketch below)

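A sketch of the grasp-point computation, using SciPy's
Euclidean distance transform as a stand-in for the
chamfer metric, on a toy concave mask:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    # Toy L-shaped (concave) mask standing in for the white area.
    white = np.zeros((100, 100), dtype=bool)
    white[10:90, 10:40] = True
    white[60:90, 10:90] = True

    # Each interior pixel gets its distance to the outline, which is
    # the propagation-from-the-boundary process described above.
    dist = distance_transform_edt(white)

    # The grasp point is the pixel farthest from the boundary; unlike
    # the centroid, it is guaranteed to lie inside the concave shape.
    grasp_y, grasp_x = np.unravel_index(np.argmax(dist), dist.shape)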
10
Classification
11
Color Histogram Labeling
  • Use the color values (RGB) of the object to
    create a 3-D histogram
  • Each histogram is normalized by the number of
    pixels in the object to create a probability
    distribution
  • Each histogram is then compared against the
    histograms of previous objects using histogram
    intersection
  • The white area is found with the same technique
    as in graph-based segmentation and used as a
    binary mask to locate the object in the image
    (see the sketch below)

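A sketch of the histogram construction and comparison;
the bin count and the image/mask inputs are assumptions:

    import numpy as np

    def rgb_histogram(pixels, bins=8):
        """Normalized 3-D RGB histogram over an object's pixels (N, 3)."""
        hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist / pixels.shape[0]   # probability distribution

    def intersection(h1, h2):
        """Histogram intersection; 1.0 means identical distributions."""
        return np.minimum(h1, h2).sum()

    # Hypothetical usage with the binary mask from segmentation:
    # query = rgb_histogram(image[mask])
    # best = max(database, key=lambda h: intersection(query, h))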
12
Skeletonization
  • Use the binary mask from the previous step to
    create a skeleton of the object
  • The skeleton is a single-pixel-wide medial axis
    of the area
  • Prairie-fire analogy: fronts lit at the boundary
    meet at the skeleton (a code sketch follows the
    iteration captions below)

Iteration 1
(Slides 13-22 repeat these bullets while the animation
shows the skeleton at iterations 3, 5, 7, 9, 10, 11,
13, 15, 17, and 47.)
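A one-call sketch of the thinning shown above, using
scikit-image's skeletonize (the thesis's own
implementation of the prairie-fire reduction is not
specified):

    import numpy as np
    from skimage.morphology import skeletonize

    # Toy binary mask standing in for the object's white area.
    mask = np.zeros((60, 60), dtype=bool)
    mask[10:50, 20:40] = True   # a thick bar

    # Iterative thinning reduces the area to a one-pixel-wide
    # medial axis, as in the iteration sequence above.
    skeleton = skeletonize(mask)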
23
Monitoring Object Interaction
  • Use KLT feature points to track movement of the
    object as the robot interacts with it
  • Only feature points on the object are considered;
    all other points are disregarded
  • Calculate the distance between each pair of
    feature points every f_length frames
    (f_length = 5); a tracking sketch follows below

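A sketch of the tracking with OpenCV's KLT pipeline
(corner detection plus pyramidal Lucas-Kanade); the
filenames and detector parameters are assumptions:

    import cv2

    # Hypothetical frames before and after a robot poke, plus the
    # object mask so only on-object features are detected.
    prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    curr = cv2.imread("frame_005.png", cv2.IMREAD_GRAYSCALE)
    mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

    pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                                  qualityLevel=0.01, minDistance=7,
                                  mask=mask)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)
    tracked = new_pts[status.ravel() == 1]   # successfully tracked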
24
Monitoring Object Interaction (cont.)
  • Idea: features on the same part keep a constant
    intra-distance, while features from different
    groups have a variable intra-distance
  • Features are separated into groups by measuring
    the change in intra-distance after f_length frames
  • If the intra-distance between two features
    changes by less than a threshold, they are in the
    same group; otherwise they are in different groups
  • Separate groups correspond to separate parts of
    an object (see the grouping sketch below)

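A sketch of the grouping rule; the threshold value and
the flood-fill bookkeeping are assumptions, but the
test (change in pairwise distance below a threshold)
follows the bullets above:

    import numpy as np

    def group_features(p0, p1, thresh=3.0):
        """Group points (N, 2) whose pairwise distances stay constant
        between two frames f_length apart; linked points are merged
        into rigid groups by a simple flood fill."""
        d0 = np.linalg.norm(p0[:, None] - p0[None, :], axis=-1)
        d1 = np.linalg.norm(p1[:, None] - p1[None, :], axis=-1)
        linked = np.abs(d1 - d0) < thresh   # same-group test

        groups = -np.ones(len(p0), dtype=int)
        for i in range(len(p0)):
            if groups[i] == -1:
                stack, gid = [i], groups.max() + 1
                while stack:
                    j = stack.pop()
                    if groups[j] == -1:
                        groups[j] = gid
                        stack.extend(np.flatnonzero(linked[j]))
        return groups   # one group label per feature point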
25
Labeling Revolute Joints using Motion
  • For each feature group, create an ellipse that
    encapsulates all of its features
  • Calculate the major axis of the ellipse using PCA
  • The end points of the major axis correspond to a
    revolute joint and the tip of the extremity
    (see the sketch below)

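A sketch of the major-axis computation via PCA on one
group's feature coordinates; groups of size 1 are
handled separately (see two slides below):

    import numpy as np

    def major_axis_endpoints(points):
        """Endpoints of the principal axis of one feature group (N, 2).
        PCA on the covariance gives the ellipse's major-axis direction;
        projecting the points onto it yields the two extremes."""
        mean = points.mean(axis=0)
        cov = np.cov((points - mean).T)
        evals, evecs = np.linalg.eigh(cov)
        axis = evecs[:, np.argmax(evals)]      # major-axis direction
        proj = (points - mean) @ axis
        # One extreme approximates the revolute joint, the other
        # the endpoint of the extremity.
        return points[np.argmin(proj)], points[np.argmax(proj)]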
26
Labeling Revolute Joints using Motion (cont.)
  • Using the skeleton, locate intersection points
    and end points
  • Intersection points (red): rigid or non-rigid
    joints
  • End points (green): interaction points
  • Interaction points are the locations the robot
    uses to push or poke the object (a neighbor-count
    sketch follows below)

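A sketch of the point classification by counting each
skeleton pixel's neighbors, a common heuristic (the
thesis's exact rule is not given): end points have one
neighbor, intersections have three or more:

    import numpy as np
    from scipy.ndimage import convolve

    def skeleton_points(skeleton):
        """skeleton: boolean one-pixel-wide image from skeletonization."""
        kernel = np.array([[1, 1, 1],
                           [1, 0, 1],
                           [1, 1, 1]])
        neighbors = convolve(skeleton.astype(int), kernel,
                             mode="constant")
        ends = skeleton & (neighbors == 1)           # interaction points
        intersections = skeleton & (neighbors >= 3)  # candidate joints
        return ends, intersections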
27
Labeling Revolute Joints using Motion (cont.)
  • Map the revolute joint estimated from the major
    axis of the ellipse to the actual joint in the
    skeleton
  • For groups of size 1, the revolute joint is
    labeled as the closest intersection point
  • After multiple interactions from the robot, a
    final skeleton is created with the revolute
    joints labeled (red); see the sketch below

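A sketch of the snapping step, mapping an estimated
joint to the nearest skeleton intersection (names are
illustrative):

    import numpy as np

    def nearest_intersection(joint, intersections):
        """joint: (2,) estimate from the ellipse's major axis;
        intersections: (K, 2) skeleton intersection coordinates."""
        dists = np.linalg.norm(intersections - joint, axis=1)
        return intersections[np.argmin(dists)]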
28
Experiments
  • Items used for the experiments:
    • 3 Logitech QuickCam Pro webcams (2 for the
      stereo system, 1 for classification)
    • PUMA 500 robotic arm (or EZ gripper)
  • Two work areas were placed next to each other so
    the robotic arm could reach both
  • One was designated the extraction table and the
    other the classification table

29
Results
  • Socks and shoes in a hamper (EZ gripper)
  • Toys on the floor (PUMA 500)
  • Recycling bin (EZ gripper)
30
Results: Toys on the floor (cont.)
Final Skeleton used for Classification
(Slides 31-33 repeat this slide with further examples
of the final skeletons used for classification.)
34
Results: Toys on the floor (cont.)
Classification Experiment
(Match matrix over objects 1-8)
35
Results: Toys on the floor (cont.)
Classification Experiment
Rows: query image; columns: database image
36
Results: Toys on the floor (cont.)
Classification Experiment
Without use of skeleton
37
Results: Toys on the floor (cont.)
Classification Experiment
With use of skeleton
38
Results: Recycling bin (cont.)
39
Results: Recycling bin (cont.)
Without use of skeleton
40
Results: Recycling bin (cont.)
With use of skeleton
41
Results: Socks and Shoes (cont.)
42
Results: Socks and Shoes (cont.)
Only 1 image matched image 5; the skeleton could not
be used
43
Comparison of Related Work
  • Comparing objects of the same type against those
    from similar work
  • Pliers from our results compared to shears from
    their results

Our approach
Their approach
44
How is our work different?
  • Our approach handles rigid and non-rigid objects
    • Most of the previous work only considers planar
      rigid objects
  • We gather more information through interaction,
    such as the object's skeleton, color, and movable
    joints
    • Other works only look to segment the object or
      find revolute and prismatic joints
  • Our approach works with cluttered environments
    • Other works handle only a single object instead
      of multiple items piled together

45
Conclusion
  • This is a general approach that can be applied to
    various scenarios using manipulation-guided
    sensing
  • The results demonstrated that our approach
    provides a way to classify rigid and non-rigid
    objects and label them for sorting and/or pairing
  • This approach builds on and extends previous work
    in interactive perception
  • It also provides a way to extract items from a
    cluttered area one at a time with minimal
    disturbance
  • Applications for this project:
    • Service robots handling household chores
    • A map-making robot learning about the
      environment while creating a map of the area

46
Future Work
  • Create a 3-D environment instead of a 2-D
    environment
  • Modify the classification area to allow
    interactions from more than two directions
  • Improve the robot's gripper for more robust
    grasping
  • Enhance the classification algorithm and learning
    strategy
  • Use more characteristics to properly label a
    wider range of objects

47
Questions?