Automated Construction of Environment Models by a Mobile Robot

1
Automated Construction of Environment Models by a
Mobile Robot
  • Thesis Proposal
  • Paul Blaer
  • January 5, 2005

2
Task: Construction of Accurate 3-D Models
3
Task: Construction of Accurate 3-D Models
4
Problem: Manual Construction
  • Even with sophisticated tools, many tasks are
    still accomplished manually
  • Planning of scanning locations
  • Transportation from one scanning location to the
    next, possibly under adverse conditions
  • Accurately computing the exact location of the
    sensor

5
Approach: Automate the Process
  • Construct a mobile platform that is capable of
    autonomous localization and navigation.
  • Given a small amount of initial information about
    the environment, plan efficient views to model
    the region.
  • Use those views to construct a photometrically
    and geometrically correct model.

6
Proposed Contributions
  • An improved 2-D view planning algorithm used for
    bootstrapping the construction of a complete
    scene model
  • A new 3-D voxel-based next-best-view algorithm
  • A topological localization algorithm combining
    omnidirectional vision and wireless access point
    signals.
  • A Voronoi diagram-based path planner for
    navigation.
  • A model construction system that fuses the view
    planning algorithms with the robot's navigation
    and control systems.

7
Large Scale 3-D Modeling: Literature
  • 3-D City Model Construction at Berkeley
    (Frueh et al., 2004, 2002)
  • Outdoor Map Building at the University of Tsukuba
    (Ohno et al., 2004)
  • MIT City Scanning Project (Teller, 1997)
  • Klein and Sequeira, 2004, 2000
  • Nuchter et al., 2003

8
View Planning Literature
  • 1. Model-Based Methods
  • Cowan and Kovesi, 1988
  • Tarabanis and Tsai, 1992
  • Tarabanis et al., 1995
  • Tarbox and Gottschlich, 1995
  • Scott, Roth and Rivest, 2001
  • 2. Non-Model-Based Methods
  • Volumetric Methods
  • Connolly, 1985
  • Banta et al., 1995
  • Massios and Fisher, 1998
  • Papadopoulos-Organos, 1997
  • Soucey et al., 1998
  • Surface-Based Methods
  • Maver and Bajcsy, 1993
  • Yuan, 1995
  • Zha et al., 1997
  • Pito, 1999
  • Reed and Allen, 2000
  • Klein and Sequeira, 2000
  • Whaite and Ferrie, 1997
  • 3. Art Gallery Methods
  • Xie et al., 1986
  • Gonzalez-Banos et al., 1997
  • Danner and Kavraki, 2000
  • 4. View Planning for Mobile Robots
  • Gonzalez-Banos et al., 2000
  • Grabowski et al., 2003
  • Nuchter et al., 2003

9
Overview of Our System
  • Platform
  • Steps in Our Method
  • Initial Modeling Stage
  • Planning the Robot's Paths
  • Localization and Navigation
  • Acquiring the Scan
  • Final Modeling Stage
  • Testbeds

10
Overview of Our System: The Platform
[Figure: the mobile platform (Autonomous Vehicle for Exploration and Navigation in Urban Environments) with labeled components: scanner, camera, PTU, GPS, DGPS, compass, sonar, network, and PC.]
11
Overview of Our System: The Method
  • Initial Modeling Stage
  • Goal is to construct an initial model from which
    we can bootstrap construction of a complete
    model.
  • Compute a set of views based entirely on a known
    2-D representation of the region to be modeled.
  • Compute an efficient set of paths to tour these
    view points.
  • Final Modeling Stage
  • Voxel-based 3-D method to sequentially choose
    views that fill in gaps in the initial model.

12
Initial Modeling Stage
  • Given an initial 2-D map of the scene.
  • In this stage, assume that if you see all 2-D
    edges of the map, you've seen all 3-D façades.
  • Solve the planning as a variant of the Art
    Gallery problem.

13
Initial Modeling Stage
  • Problems with the Art Gallery approach
  • Traditional geometric approaches assume that the
    guards can see 360° around with unlimited range,
    ignoring any constraints of the scanner.
  • A view of the 2-D footprint of an obstacle does
    not necessarily mean that we have seen the entire
    façade. There may be interesting 3-D structure
    above.

14
Initial Modeling Stage
  • A randomized algorithm for the 2-D problem
  • First choose a random set of potential views in
    the free space

15
Initial Modeling Stage
100 initial samples
16
Initial Modeling Stage
  • A randomized algorithm for the 2-D problem
  • First choose a random set of potential views in
    the free space
  • Compute the visibility of each potential view

17
Initial Modeling Stage
18
Initial Modeling Stage
  • A randomized algorithm for the 2-D problem
  • First choose a random set of potential views in
    the free space
  • Compute the visibility of each potential view
  • Clip the visibility of each potential view such
    that the constraints of our scanning system are
    satisfied.

19
Initial Modeling Stage
  • Constraints we have added to the basic randomized
    algorithm
  • Minimum and maximum range
  • Maximum grazing angle
  • Field of view
  • Overlap constraint

[Figure: the scanner's range constraint, with a minimum range of 1 m and a maximum range of 100 m in our case.]
20
Initial Modeling Stage
  • Constraints we have added to the basic randomized
    algorithm
  • Minimum and maximum range
  • Maximum grazing angle
  • Field of view
  • Overlap constraint

[Figure: the maximum grazing angle constraint.]
21
Initial Modeling Stage
  • Constraints we have added to the basic randomized
    algorithm
  • Minimum and maximum range
  • Maximum grazing angle
  • Field of view
  • Overlap constraint

22
Initial Modeling Stage
  • Constraints we have added to the basic randomized
    algorithm
  • Minimum and maximum range
  • Maximum grazing angle
  • Field of view
  • Overlap constraint
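
As a concrete illustration of how the range and grazing-angle constraints above might be enforced when clipping a candidate view's visibility, here is a minimal sketch for a single sample point on an obstacle edge. The 1 m and 100 m limits are the values quoted earlier for our scanner; the grazing-angle threshold and all function and parameter names are illustrative assumptions, not the thesis implementation.

```python
import math

def satisfies_constraints(view, point, edge_normal,
                          min_range=1.0, max_range=100.0,
                          max_grazing_deg=70.0):
    """Check the range and grazing-angle constraints for one sample point
    on a 2-D obstacle edge as seen from a candidate view position.
    edge_normal is assumed to be a unit normal of the edge; the 70-degree
    grazing limit is an assumed value, not one taken from the thesis."""
    dx, dy = point[0] - view[0], point[1] - view[1]
    dist = math.hypot(dx, dy)
    if not (min_range <= dist <= max_range):
        return False                      # violates min/max range
    # incidence angle between the viewing ray and the edge normal;
    # large values correspond to grazing views of the surface
    ray = (dx / dist, dy / dist)
    cos_inc = abs(ray[0] * edge_normal[0] + ray[1] * edge_normal[1])
    grazing = math.degrees(math.acos(max(min(cos_inc, 1.0), -1.0)))
    return grazing <= max_grazing_deg
```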

23
Initial Modeling Stage
  • A randomized algorithm for the 2-D problem
  • First choose a random set of potential views in
    the free space
  • Compute the visibility of each potential view
  • Clip the visibility of each potential view such
    that the constraints of our scanning system are
    satisfied.
  • Choose an approximate minimum subset of the
    potential views that covers the entire set of 2-D
    obstacles (a sketch of this greedy selection
    follows below).

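The subset selection referenced above can be approximated greedily. A minimal sketch, assuming each candidate view has already been reduced to the set of 2-D obstacle edges its clipped visibility covers (the views dictionary below is a hypothetical data structure, not part of the thesis system):

```python
def greedy_view_selection(views, all_edges):
    """Greedy set-cover approximation: repeatedly pick the candidate view
    whose clipped visibility covers the most still-uncovered 2-D edges.
    `views` maps a view id to the set of edge indices it can see."""
    uncovered = set(all_edges)
    chosen = []
    while uncovered:
        best, best_gain = None, 0
        for view_id, visible in views.items():
            gain = len(visible & uncovered)
            if gain > best_gain:
                best, best_gain = view_id, gain
        if best is None:          # remaining edges cannot be covered
            break
        chosen.append(best)
        uncovered -= views[best]
    return chosen, uncovered      # selected views and any leftover edges
```

On the real-world example later in the deck, this kind of selection keeps 42 of 1000 sampled views while covering 96% of the boundary.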
24
Initial Modeling Stage
9 chosen view points
25
Initial Modeling Stage
A real world example
26
Initial Modeling Stage
A real world example (1000 initial samples, 42
chosen views, 96% coverage)
27
Planning the Robot's Paths
  • Given a 2-D map of the region, compute safe
    paths for the robot to travel.
  • Keep the robot as far away as possible from the
    two closest obstacles.
  • Accomplished by generating the generalized
    Voronoi diagram of the region and traveling along
    the boundaries of the Voronoi cells.

28
Planning the Robot's Paths
  • Approximate the Generalized Voronoi Diagram
  • Approximate the polygonal obstacles with discrete
    points.
  • Compute the Voronoi diagram.
  • Eliminate the edges that are inside obstacles or
    intersect obstacles.

29
Planning the Robot's Paths
30
Planning the Robot's Paths
  • Approximate the Generalized Voronoi Diagram
  • Approximate the polygonal obstacles with discrete
    points.
  • Compute the Voronoi diagram.
  • Eliminate the edges that are inside obstacles or
    intersect obstacles.
  • Use a shortest-path algorithm such as Dijkstra's
    algorithm to find paths along the Voronoi graph
    (a sketch of this construction follows below).

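A minimal sketch of the approximation referenced above, using SciPy, Shapely, and NetworkX as stand-ins for the Voronoi computation, the obstacle tests, and Dijkstra's algorithm. The library choices, the sampling spacing, and the function names are assumptions, not necessarily how the thesis system is implemented.

```python
import numpy as np
import networkx as nx
from scipy.spatial import Voronoi
from shapely.geometry import LineString, Polygon

def voronoi_roadmap(obstacle_polygons, sample_spacing=0.5):
    """Approximate generalized Voronoi diagram:
    1. sample each polygonal obstacle boundary at roughly `sample_spacing`,
    2. compute the ordinary Voronoi diagram of the sample points,
    3. drop ridges that enter or cross any obstacle,
    4. return the surviving ridges as a weighted graph for Dijkstra."""
    polys = [Polygon(p) for p in obstacle_polygons]
    sites = []
    for poly in polys:
        boundary = poly.exterior
        n = max(int(boundary.length / sample_spacing), 4)
        sites.extend(boundary.interpolate(i / n, normalized=True).coords[0]
                     for i in range(n))
    vor = Voronoi(np.array(sites))

    graph = nx.Graph()
    for i, j in vor.ridge_vertices:
        if i == -1 or j == -1:              # skip unbounded ridges
            continue
        a, b = vor.vertices[i], vor.vertices[j]
        edge = LineString([a, b])
        if any(poly.intersects(edge) for poly in polys):
            continue                        # edge enters or crosses an obstacle
        graph.add_edge(tuple(a), tuple(b), weight=edge.length)
    return graph

# Slide 30's last step: a safe path between two roadmap nodes, e.g.
#   path = nx.shortest_path(graph, start_node, goal_node, weight="weight")
```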
31
Planning the Robot's Paths
32
Planning the Robot's Paths
  • Need to generate a tour for the robot to visit
    all the initially selected view points.
  • This can be treated as a Traveling Salesman
    Problem and solved with any number of
    approximations (one option is sketched below).
  • To generate edge weights, we first compute our
    safe Voronoi paths between all viewpoints. We
    use the lengths of those paths as the edge
    weights for our graph.
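
One simple approximation, sketched below, is a nearest-neighbor tour; the path_length helper is a hypothetical stand-in for the safe Voronoi path lengths used as edge weights above.

```python
def plan_tour(viewpoints, path_length):
    """Nearest-neighbor approximation to the Traveling Salesman tour over
    the chosen view points.  `path_length(a, b)` is assumed to return the
    length of the safe Voronoi path between view points a and b; any other
    TSP approximation could be substituted here."""
    tour = [viewpoints[0]]
    unvisited = list(viewpoints[1:])
    while unvisited:
        current = tour[-1]
        nearest = min(unvisited, key=lambda v: path_length(current, v))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour
```

A 2-opt improvement pass could be layered on top, but any off-the-shelf TSP approximation works with the same edge weights.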

33
Planning the Robot's Paths
34
Localization and Navigation
  • Existing system uses a combination of
  • GPS
  • Odometry
  • Attitude Sensor
  • Fine-grained visual localization (Georgiev and
    Allen, 2004)
  • Problems
  • GPS can fail in urban canyons
  • Odometry is unreliable because of slipping and
    cumulative error
  • The fine-grained visual localization system needs
    an existing position estimate

35
Coarse Localization
  • Coarse Localization System
  • Histogram Matching with Omnidirectional Vision
  • Fast
  • Rotationally-invariant

36
Coarse Localization
  • Coarse Localization System
  • Histogram Matching with Omnidirectional Vision
  • Fast
  • Rotationally-invariant
  • Wireless signal strength of Access Points
  • Use existing wireless infrastructure to resolve
    ambiguities in location.
  • Look at the signal strengths of all visible base
    stations at a given location and compare them
    against a database (sketched below).

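A minimal sketch of the access-point comparison referenced above, assuming the database maps location labels to previously recorded signal-strength readings. The dictionary layout, the -100 dBm floor for access points missing from one reading, and the squared-difference score are all illustrative assumptions; the best-scoring locations would then be disambiguated together with the omnidirectional histogram match.

```python
def match_location(current_rssi, fingerprint_db):
    """Coarse localization by wireless-fingerprint matching (a sketch, not
    the thesis implementation).  `current_rssi` maps access-point ids to
    signal strengths; `fingerprint_db` maps a location label to a
    reference reading of the same form."""
    def score(ref):
        aps = set(current_rssi) | set(ref)
        # access points missing from one reading get an assumed -100 dBm floor
        return sum((current_rssi.get(ap, -100.0) - ref.get(ap, -100.0)) ** 2
                   for ap in aps)
    return min(fingerprint_db, key=lambda loc: score(fingerprint_db[loc]))
```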
37
Acquiring the Scan
38
Final Modeling Stage
  • The initial modeling stage will result in an
    incomplete model
  • Undetectable 3-D occlusions
  • Previously unknown obstacles
  • Temporary obstacles
  • Need a second modeling stage to fill in the holes.

39
Final Modeling Scan
40
Final Modeling Stage
  • We store the world as a voxel grid.
  • For view planning of large scenes the voxels do
    not need to be small.
  • Initial voxel grid is populated with the scans
    from the first stage.
  • If a voxel has a data point in it, it is marked
    as seen-occupied.
  • Unoccupied voxels along the straight-line path
    from that point back to its scanning location are
    marked as seen-empty.
  • All other voxels are marked as unseen (a sketch of
    these marking rules follows below).

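A minimal sketch of the marking rules above, assuming a dense NumPy voxel grid whose corner sits at the world origin and scan coordinates that are non-negative and inside the grid volume; the ray is traversed by fixed sub-voxel steps rather than an exact voxel walk, and all names are illustrative.

```python
import numpy as np

UNSEEN, SEEN_EMPTY, SEEN_OCCUPIED = 0, 1, 2

def insert_scan(grid, scanner_pos, points, voxel_size):
    """Populate a voxel grid from one scan: voxels containing range samples
    become seen-occupied, voxels along the ray back to the scanner become
    seen-empty, and everything else stays unseen."""
    def to_voxel(p):
        return tuple((np.asarray(p, float) / voxel_size).astype(int))

    for p in points:                         # occupied voxels first
        grid[to_voxel(p)] = SEEN_OCCUPIED

    for p in points:                         # then carve out the free space
        direction = np.asarray(scanner_pos, float) - np.asarray(p, float)
        steps = int(np.linalg.norm(direction) / (0.5 * voxel_size)) + 1
        for s in range(1, steps):
            v = to_voxel(np.asarray(p, float) + direction * (s / steps))
            if grid[v] == UNSEEN:            # never overwrite occupied voxels
                grid[v] = SEEN_EMPTY
```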
41
Final Modeling Stage
  • We use the known 2-D footprints of our obstacles
    to mark the ground-plane voxels either as occupied
    or as potential scanning locations.

42
Final Modeling Stage
  • For each unseen voxel that borders on an empty
    voxel we trace a ray back to all scanning
    locations.
  • If the ray is not occluded by other filled voxels
    and it satisfies the scanner's other constraints,
    that potential viewing location's counter is
    incremented.
  • The potential viewing location with the largest
    count is chosen.
  • A new scan is taken and the process repeats until
    there are no unseen voxels bordering on empty
    voxels.
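
A minimal sketch of this counting rule, using the same voxel labels as the previous sketch. The ray_is_clear and satisfies_constraints arguments are hypothetical helpers standing in for the occlusion test along the traced ray and for the scanner constraints discussed on the next slide.

```python
import numpy as np

UNSEEN, SEEN_EMPTY, SEEN_OCCUPIED = 0, 1, 2

def borders_empty(grid, v):
    """True if voxel v has a 6-connected neighbour marked seen-empty."""
    x, y, z = v
    for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        n = (x + dx, y + dy, z + dz)
        if all(0 <= n[i] < grid.shape[i] for i in range(3)) \
                and grid[n] == SEEN_EMPTY:
            return True
    return False

def next_best_view(grid, candidates, ray_is_clear, satisfies_constraints):
    """Count, for every potential scanning location, the unseen voxels that
    border on empty space and are visible from it; return the location
    with the largest count."""
    counts = {c: 0 for c in candidates}
    for v in map(tuple, np.argwhere(grid == UNSEEN)):
        if not borders_empty(grid, v):
            continue
        for c in candidates:
            if ray_is_clear(grid, c, v) and satisfies_constraints(c, v):
                counts[c] += 1
    return max(counts, key=counts.get)
```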

43
Final Modeling Stage
  • Additional Constraints
  • Range constraint: the scanner's minimum and
    maximum range are enforced; rays outside this
    range are rejected.
  • Overlap constraint: for each view we can also
    keep track of how many known voxels it can view
    and require a minimum overlap for registration
    purposes.
  • Traveling-distance constraint: views that are
    closer to the current position are weighted more
    heavily.
  • Grazing-angle constraint: this constraint is
    harder to implement since no surface information
    is stored.

44
Final Modeling Stage
45
Final Modeling Stage
46
Final Modeling Stage
47
Final Modeling Stage
Initial View
Next Best View
48
Testbeds: Columbia Campus
49
Testbeds: Governors Island
50
Road Map to the Thesis
  • A topological localization algorithm:
    implemented and tested in complicated outdoor
    environments (Blaer and Allen, 2002 and 2003).
  • A Voronoi-based path planner: implemented and
    tested (Allen et al., 2001).
  • A 2-D view planning algorithm for bootstrapping
    the construction of a complete model: tested on
    simulated and real world data. Additional
    constraints and testing are needed.
  • A voxel-based method for choosing next-best views:
    initial stages of the algorithm have been tested
    on simulated data.
  • Integrate these algorithms into the robot to
    build a complete system.