Title: IS 3974 Human control of multiple robots
1 IS 3974: Human control of multiple robots
2 USAR Challenge
3 Robin Murphy's robots at the World Trade Center
4 Background
- USARsim was developed as a research tool for an NSF project to study Robot, Agent, Person Teams in Urban Search and Rescue
  - Katia Sycara, CMU: multi-agent systems
  - Illah Nourbakhsh, CMU/NASA: field robotics
  - Mike Lewis, Pitt: human control of robotic teams
  - Paul Scerri: agent teamwork and team plans
5 We wanted it to
- Study teams of robots controlled by people
6 ITR Timeline (timeline chart, Oct 2002 - Oct 2006)
- Simulator development, followed by simulator validation
- Experiments: single-robot studies (attitude, camera control), then multi-robot team experiments
- Competitions: demo at the American Open (Pittsburgh); 3rd place at the American Open (New Orleans); American Open (Atlanta) and Robocup (Osaka)
7 Seminar Timeline (milestone chart, Jan - May 2005)
- Simulator validation
- Fix Virtual Robot rules
- May 7-10: American Open (physical robots)
- Add Machinetta to USARsim
- Add Machinetta to PER, Tarantula, Pioneer?
- Experiment with team plans for virtual and physical domains
- July 13-17: Robocup Osaka (physical and virtual)
- Develop interface HRI for teams
8 Seminar objectives
- Explore the area of multi-agent systems, with a focus on BDI approaches using plan libraries
- Integrate Machinetta proxies with USARsim to provide a testbed for studying MAS
- Prepare a robot team for the Virtual Rescue Robot league demo competition
- See what, if any, of this we can transfer to the physical robots
9 Strategy vs. Dropped Leads: why we need something between Robocup Rescue and the NIST arenas
- The devil is in the details: USAR missions fail because the operator can't figure out
  - Why the robot is stuck (all of us)
  - That the robot has flipped (Robin Murphy)
  - That the camera never pointed at the victim
  - That the camera is pointing at a victim but we didn't see it, etc.
- We can be strategically perfect and still fail
- There needs to be a level of planning and execution that mixes strategy, sensing, and control issues
10 (No transcript)
11 NIST's Existing Transportable Arenas: forming a continuum of challenges (from Jacoff et al. 2003)
- Yellow Region
- Simple to traverse, no agility requirements
- Planar (2-D) maze
- Isolates sensors with obstacles/targets
- Reconfigurable in real time to test mapping
- Orange Region
- More difficult to traverse, variable floorings
- Spatial (3-D) maze, stairs, ramp, holes
- Similarly reconfigurable
- Red Region
- Difficult to traverse, unstructured environment
- Simulated rubble piles, shifting floors
- Problematic junk (rebar, plastic bags, pipes)
12 Human Factors Challenges
- World through a straw (restricted FOV)
- Camera control for search/navigation
- Survey knowledge (mapping the environment) from restricted FOV and impeded movement
- Visual smearing from close surfaces
- Unfamiliar ground-level perspective
- Difficult distance judgments from a degraded 2D image
- Difficult orientation judgments from visual cues in a disorderly environment
- Difficult locomotion due to out-of-view negative obstacles
13 Multi-Robot Problems
- Common mapping and awareness
- Perceptual cooperation (you see what I'm stuck on)
- Exploiting heterogeneous capabilities
- Daisy-chain communication modeling
- Team planning in failure-prone environments
- Human interaction/control of teams
- Etc.
14 Simulation Desiderata
- Expense and availability of simulation hardware and software to the USAR robotics community
- Ease of programming to reflect targeted aspects of design
- Fidelity of simulation w.r.t. the aspects of design to be tested
15 Simulation Requirements
- Video feed, for teleoperation and visual search and identification
- Sensor simulation, for autonomous control and fused displays
- Simulated robot dynamics, for teleoperation and autonomous control
- Multiple-entity simulation, to allow interaction and cooperation among teams of robots
16 Why a Game Engine? (Lewis and Jacobson, CACM 2002)
- Hardware
  - Best available graphics now on Nvidia/ATI GPUs for commodity-priced PCs
- Software
  - Cheap
  - Takes advantage of current GPU features
  - Sophisticated IDEs for both behaviors (mods) and environments (levels)
  - Client-server architecture to support multiple robots
  - Ranging commands used by bots provide the hooks needed for sensor simulation
  - Sophisticated physics engines such as the Karma engine allow high-fidelity kinematic modeling
17 Why the Unreal Engine?
- Lewis, M. and Jacobson, J. (2002). Game engines in research. Communications of the ACM, 45(1).
- Good graphics, good physics, object orientation, good tools, good documentation
- Unreal is the de facto game engine used in research
  - Pitt PARC: multiscreen (CAVE-like) displays
  - John Laird, U. Michigan: AI test environment
  - J. Anderson and C. Lebiere: ACT-R characters
  - K. Sycara, J. Hodgins, G. Sukthankar: synthetic characters
  - Architectural modeling (Notre Dame), etc.
  - Mike Zyda, NPS: America's Army, etc.
  - ICT center at USC; ALTERNE project at the University of Teesside; and others (using our CAVE set-up)
  - (Alternatives include Crystal Space, Garage Games, etc.)
18 Alternatives
- Excellent 3D graphics drivers from the VR community, like the UCF tools
- Good physics engines like ODE (being used for the 3D soccer simulation)
- But hard to put them together:
  - With ODE you have to work with OpenGL in X Windows and lose the standard formats that let you use modeling tools like 3D Studio Max or Maya
  - With VR tools you have to program physical behaviors yourself
19 Robot control architecture (simulation)
(Diagram: the control software connects through Gamebots to the Unreal server; an attached Unreal spectator supplies the camera view.)
20 Karma Physics engine: rigid multi-body simulation
- The Vehicle class lets us characterize robot kinematics precisely
21 Sensor hierarchy
- The trace command gives ground truth for range information
- Sound is attenuated according to distance
- Human motion (pyroelectric) uses line of sight and magnitude of movement
22 Sensor Class
- HiddenSensor: a Boolean value indicating whether the sensor is visually shown in the simulator
- MaxRange: the maximum distance that can be detected
- Noise: the relative random noise amplitude. With noise, the reported value is data = data + random(noise) * data
- OutputCurve: the distortion curve, constructed from a series of points that describe the curve
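The parameters above can be sketched as a minimal noise model. This is an illustrative Python reconstruction, not USARsim's UnrealScript source; the field names simply mirror the slide, and the distortion curve is left as raw points.

```python
import random

# Illustrative sketch of the Sensor parameters described above.
# Assumption: "random(noise)" means a uniform draw in [-noise, +noise].
class Sensor:
    def __init__(self, max_range, noise, output_curve=None, hidden=True):
        self.hidden = hidden            # HiddenSensor: drawn in the simulator?
        self.max_range = max_range      # MaxRange: farthest detectable distance
        self.noise = noise              # Noise: relative random noise amplitude
        # OutputCurve: (true, reported) points describing the distortion curve
        self.output_curve = output_curve or []

    def read(self, true_distance):
        # Clip to the maximum detectable range
        d = min(true_distance, self.max_range)
        # Apply relative noise: data = data + random(noise) * data
        d += random.uniform(-self.noise, self.noise) * d
        return d
```

A reading beyond MaxRange saturates first, then picks up relative noise, so a 10 m target seen by a 5 m, 10%-noise sensor reports somewhere in 4.5-5.5 m.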
23 Sensor visualizations (courtesy of Player)
24 Can be controlled through
- Native Gamebots interface
- Pyro middleware (including a hack for Windows)
- Player (USARsim plays the role of Stage)
25 The Arenas
26 Data Collection at NIST Arenas, Gaithersburg, MD
27 Arena Simulations
- ProEngineer solid model converted to Unreal format
- Digital photographs used to create textures to be applied to the model
- Glass, mirrors, orange safety fencing, and other special effects added
- Rubble, debris, and victim models added to the simulation
- Robot characteristics adapted from the Karma vehicle class
28 Illumination levels in Lux
29 Yellow Arena
30 Yellow Arena Simulation
31 Fisheye view of Orange Arena (from Jacoff et al. 2003)
32 Simulation of Orange Arena
33 (No transcript)
34 Red Arena Simulation
35 Orange Arena: platform photo and simulation
36 Parts and materials to build your own
37 The Robots
38 ActivMedia Pioneer P2AT
- The P2AT has
  - Four wheels
  - Skid steering
  - Size 50cm x 49cm x 26cm
  - Wheel diameter 21.5cm
- Equipped with
  - PTZ camera
  - Front sonar ring
  - Rear sonar ring
39 Pioneer P2DX
40 iRobot ATRV-Jr
41 CMU experimental Personal Exploration Rover (PER)
42 CMU experimental Corky
43 First generation interface (runs with both Corky and the simulation)
44 Observations
- Both Corky and the simulation are very difficult to control, with the same problems:
  - Non-visible obstacles
  - Surface blinding
  - Disorientation
  - Camera control
  - Difficulty with stairs or rubble
45 Validation
- We are currently planning validation studies comparing real and simulated PER and Pioneer robots in the Orange Arena
  - Hardware: sensors, kinematics
  - Behavior: sensors x kinematics
  - Automation: scripted behaviors
  - HRI: human x automation
- Stefano Carpin has a student doing a validation for the laser rangefinder model
46 SimpleUI sample interface
47 Pyro Controls
48 Hardware Issues
- Requires a current (2000+) Pentium 4 PC
- For Linux, requires an Nvidia video card
- The server uses few resources
- Clients (attached spectators) are resource hogs
- Without modification the server handles 32 spectator clients, and unlimited robots without spectators
- To add a robot:
  - Create the robot in the simulation
  - Connect a socket to control the robot
  - Add a spectator to provide the camera view
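The first two steps above can be sketched against a Gamebots-style text protocol. The exact command names (INIT, DRIVE), brace syntax, and the robot class name below are assumptions for illustration, not a verified wire spec; the spectator camera is attached separately through an Unreal client.

```python
import socket

# Hedged sketch of adding and driving a robot over the Gamebots socket.
def init_msg(robot_class, x, y, z):
    # Create the robot in the simulation at the given arena location
    # (command name and class path are illustrative assumptions)
    return f"INIT {{ClassName {robot_class}}} {{Location {x},{y},{z}}}\r\n"

def drive_msg(left, right):
    # Skid-steer drive command: left/right wheel speeds
    return f"DRIVE {{Left {left}}} {{Right {right}}}\r\n"

def add_robot(host, port, robot_class, pos):
    # Connect a control socket, spawn the robot, and keep the socket
    # open for subsequent DRIVE commands.
    sock = socket.create_connection((host, port))
    sock.sendall(init_msg(robot_class, *pos).encode())
    return sock
```

One socket per robot matches the client-server design above: the server carries many such control connections cheaply, while each spectator view costs an Unreal client.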
49 Issues: USAR/USARsim Differences
- Size of venue
- Could add dynamic events such as collapses
- Hazards such as fire, water, etc.
- Radio dropout
- Smoke, etc.
- Not this year, but 2006?
50 Download simulator and docs at http://usl.sis.pitt.edu/ulab
51 Steve Burion's search for life (exchange student from EPFL)
52 Interface from the '04 American Open
53 Current Activities
- Simulator validation study
- Team control studies (using Machinetta)
- US Open and Robocup in Osaka
  - Physical league
    - Adding sensors (laser rangefinder?) and SLAM
    - Adding new platforms: Pioneer, flipping robot, Tarantula toy
  - Virtual league
    - Figuring out how to control up to 10 robots!
54 The Idea for USARsim in Robocup
- A high-fidelity simulation environment that provides
  - Detailed models of USAR environments
  - Detailed models of robots and sensors
- The user brings
  - Individual robot and team control code
  - User interface
55 USARsim Architecture
56 USARsim Architecture
57 Architecture with proxies
(Diagram: each control interface receives video feedback through an Unreal client running as an attached spectator; over the network, Gamebots exchanges Unreal data and control data with the Unreal engine, which hosts the map and the models: robot models, sensor models, victim models, etc.)
58 Coordinating Robots
- Apply the theory of teamwork
  - Proactive, cooperative, flexible
  - Agents have a detailed model of the rest of the team
  - Joint intentions, STEAM
  - As opposed to swarms, centralized, market-based, or fixed approaches, etc.
- A variety of algorithms are required to work together
  - Most known to be NP-complete (or worse)
  - For large teams, robustness is a high priority
  - E.g., planning, communication, role allocation, failure detection, ...
59 Reactive Team-Oriented Plans
- Robots jointly execute Team-Oriented Plans
  - Plans specify the activities required to achieve team goals
  - A human designer or offline planner designs the plans
- Execution proceeds according to STEAM (Tambe, 1997)
  - Implements the theory of joint intentions
- Plans are reactively, dynamically instantiated
  - There can be multiple instances of the same plan, with different parameters
- Plans specify the set of roles that need to be performed to achieve the goals
  - A role is performed by a single robot
  - Which robot should perform a role is not specified; allocation of roles to robots is performed in a subsequent phase
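The plan structure above can be sketched as a small data model: a plan template names a goal, preconditions, and required roles, and instantiation copies it with fresh parameters. The names here are illustrative, not Machinetta's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a Team-Oriented Plan: roles are named but
# unassigned; which robot fills each role is decided in a later phase.
@dataclass
class Role:
    name: str
    required_capabilities: set

@dataclass
class TeamPlan:
    goal: str
    preconditions: list                      # triggers for instantiation
    roles: list = field(default_factory=list)
    params: dict = field(default_factory=dict)

def instantiate(template, params):
    # Plans are reactively instantiated; several instances of the same
    # template may coexist, differing only in their parameters.
    return TeamPlan(goal=template.goal,
                    preconditions=list(template.preconditions),
                    roles=list(template.roles),
                    params=params)
```

Two instances of the same template share a goal and role set but carry different parameters, matching the "multiple instances of the same plan" point above.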
60 Team-Oriented Programs
61 Resolving Conflicts between Team Plans
- When a robot detects the preconditions of a plan, it instantiates that plan
  - May result in multiple instances of the same plan, or different plans for the same goal
  - The team member that instantiates the plan need not be the agent that executes any part of it
- The basic aim is to limit the number of plans created
  - A simple scheme resolves the remaining conflicts
- Three models, in increasing generality:
  1. The location of all robots is known
    - Fixed rule, e.g., the closest robot instantiates
  2. Robots know who detected the preconditions
    - Only those detecting create plans
  3. No knowledge assumed
    - Probabilistically create the plan, if the robot doesn't detect that another agent already has
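Model 3 above can be sketched as a one-line guard: with no knowledge of the rest of the team, each robot creates the plan only with some probability, keeping duplicate instances rare. The probability value is a tuning assumption, not from the slides.

```python
import random

# Sketch of probabilistic plan creation (model 3): instantiate only if
# the preconditions are detected, no existing instance has been seen,
# and a coin flip with probability p succeeds.
def maybe_instantiate(detected, already_seen_instance, p=0.3, rng=random):
    if not detected or already_seen_instance:
        return False
    return rng.random() < p
```

With n robots all detecting the same preconditions, the expected number of duplicate instances is n*p rather than n, at the cost of a (1-p)^n chance that no one instantiates at all; that trade-off is why p needs tuning per team size.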
62 Allocating Roles
- Instantiated plans result in a set of roles that need to be allocated to team members
- It may not be possible to allocate all roles
- Heterogeneous capabilities of team members:
  - Location
  - Mobility
  - Sensors
  - What other roles could it perform?
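A greedy sketch of capability-based allocation, assuming set-valued capabilities. This is a minimal illustration of the point above that some roles may go unallocated; a real allocator would also weigh location, mobility, and the other roles each robot could still perform.

```python
# Greedy role allocation: a robot can take a role only if it has every
# required capability; unmatched roles stay unallocated.
def allocate(roles, robots):
    # roles:  {role_name: set of required capabilities}
    # robots: {robot_name: set of capabilities}
    assignment, free = {}, dict(robots)
    for role, needed in roles.items():
        for name, caps in list(free.items()):
            if needed <= caps:            # subset test: capabilities suffice
                assignment[role] = name
                del free[name]            # one role per robot in this sketch
                break
    return assignment
```

Greedy matching like this is fast but not optimal; the slide's note that most such algorithms are NP-complete is why practical teams settle for approximations.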
63 Teamwork via Proxies
- Each team member has a proxy
- The proxy performs routine coordination activities
- Proxies encapsulating teamwork models are a successful concept
  - E.g., TEAMCORE, GRATE, and COLLAGEN
- Proxies implement Team-Oriented Programs (TOPs)
  - High-level description of team activity
  - Coordination details automatically handled by the proxies
64 Machinetta
- Reusable software proxy architecture
- Available in the public domain
- Used in several domains, including 2 DARPA projects
- Provides the support required for effective teamwork
- Machinetta proxy architecture (http://teamcore.usc.edu/doc/Machinetta/):
  - Communication: messages to and from other proxies
  - Coordination: reasons about required communication
  - State: the proxy's model of the world
  - Adjustable Autonomy: reasons about interactions with the RAP
  - RAP Interface: communication to and from the RAP
65 Presentation/reading Topics
- 1/20: Teamcore and Machinetta (Steve and/or Paul)
- 1/27: USARsim (Jijun)
- 2/3: BDI and other coordination schemes
- 2/10: Behavior-based architectures (subsumption, etc.)
- 2/17: Team-oriented programming and adjustable autonomy
66 Presentation/reading Topics
- 2/24: SLAM
- 3/3: Displays using synthetic views
- 3/10: (break)
- 3/17: Control of multiple robots (independent)
- 3/17: Control of multiple robots (cooperating)
- 3/24: Control of multiple robots (Robocup)
67 Groups
- Integrate USARsim and Machinetta
- Group SLAM for USARsim
- TOP development and test
- Interface and control strategies
- Rule development for the Robocup demo league (by February)