Sensing Planning for Robotic Sensor Networks

Transcript and Presenter's Notes

1
Sensing Planning for Robotic Sensor Networks
  • Volkan Isler
  • Rensselaer Polytechnic Institute

2
Outline
  • Robotic sensor networks operating in dynamic
    environments
  • Today's focus: representative problems
  • Sensor selection
  • Sensor placement
  • Pursuit-evasion games
  • Selected ongoing projects
  • Robotic Routers
  • Human-Robot Interaction

3
A generic tracking scenario
Where am I?
4
A simple camera model
(Figure: a world point X projects through the optical center onto the image plane at image point x.)
5
Localization with perfect sensing
6
A simple camera model with uncertainty
(Figure: with measurement uncertainty, the image point x only constrains the world point X to a region extending from the optical center.)
7
Target localization with uncertainty
8
The sensing model
  • Each sensor measurement corresponds to a convex,
    polygonal subset of the plane (i.e., the
    intersection of a finite number of half-planes)
  • The true location is contained in all the sets
  • Are there such sensors? Omni-directional cameras.
  • Estimation → intersection of the measurements;
    Uncertainty → area of the intersection (see the
    sketch below)
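A minimal sketch of this sensing model, assuming measurements arrive as convex polygons given as (x, y) vertex lists in counter-clockwise order: the estimate is the intersection of all measurement polygons (Sutherland-Hodgman clipping) and the uncertainty is its area (shoelace formula). The function names are illustrative, not from the talk.

def _clip_convex(subject, a, b):
    # Keep the part of convex polygon `subject` lying left of the directed edge a->b.
    def side(p):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
    def cross_point(p, q):
        # Intersection of segment p-q with the line through a and b.
        sp, sq = side(p), side(q)
        t = sp / (sp - sq)
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    result = []
    n = len(subject)
    for i in range(n):
        p, q = subject[i], subject[(i + 1) % n]
        if side(p) >= 0:
            result.append(p)
            if side(q) < 0:
                result.append(cross_point(p, q))
        elif side(q) >= 0:
            result.append(cross_point(p, q))
    return result

def intersect_measurements(polygons):
    # Estimate = intersection of all measurement polygons.
    est = polygons[0]
    for poly in polygons[1:]:
        for i in range(len(poly)):
            est = _clip_convex(est, poly[i], poly[(i + 1) % len(poly)])
            if not est:
                return []
    return est

def uncertainty(polygon):
    # Uncertainty = area of the estimate (shoelace formula).
    return 0.5 * abs(sum(x1 * y2 - x2 * y1
                         for (x1, y1), (x2, y2)
                         in zip(polygon, polygon[1:] + polygon[:1])))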

9
The sensing model
  • Note that more measurements never hurt in terms
    of uncertainty.
  • There are cases where all sensors contribute to
    the estimation.
  • Example: cameras on a circle, target at the
    center

10
The sensor selection problem
  • Ideally use all sensors. May not be feasible!
  • Given
  • A rough estimate of the robots location
  • locations of n sensors,
  • select k sensors so as to minimize the estimation
    area
  • Bicriteria Optimization Minimize cost (number
    of sensors), Maximize Utility (1/Uncertainty
    1/Area)
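As a reference point for the results below, a brute-force sketch of this selection problem: try every k-subset and keep the one with the smallest intersection area. It reuses intersect_measurements() and uncertainty() from the sensing-model sketch above and is exponential in k, so it is only a baseline, not the algorithm presented here.

from itertools import combinations

def select_sensors(measurements, k):
    # measurements: one convex polygon per sensor (as in the sketch above).
    best_subset, best_area = None, float("inf")
    for subset in combinations(range(len(measurements)), k):
        est = intersect_measurements([measurements[i] for i in subset])
        area = uncertainty(est) if est else 0.0
        if area < best_area:
            best_subset, best_area = subset, area
    return best_subset, best_area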

11
Why care?
(Figure: a bad selection vs. a good selection of sensors for the same target.)
12
The sixth sensor theorem!
  • Let S be the set of all sensors
  • For any S and any given target location, there
    exists a subset S' of S with |S'| ≤ 6 such that
  • Uncertainty(S') / Uncertainty(S) < 2

If you are happy with a factor 2 approximation,
you never need more than 6 active sensors!
13
Overview of the proof
  • The Minimum Enclosing Parallelogram
  • Properties

C. Schwarz, J. Teich, A. Vainshtein, E. Welzl,
and B. L. Evans, "Minimal enclosing parallelogram
with application," in SCG '95
14
Overview of the proof
  • Using these two properties, we can bound the area
    of the MEP.

15
Sensor Selection Algorithm
  • Let N be the total number of half-planes, N ≥ n
  • Intersect all n measurements: O(N log N)
  • Compute the MEP: O(N)

16
How to use this result
  • For n static sensors, we can partition the plane
    and compute a look-up table.
  • Can be computed offline.

Isler, Bajcsy. Sensor Selection for Bounded
Uncertainty Sensing Models. IEEE TASE, 2006
17
Recent Results
  • It turns out that 4 sensors are enough for a
    2-approximation (the proof has a similar flavor
    but uses minimum enclosing triangles)
  • Higher dimensions
  • 3D: a 9-approximation with 8 sensors
  • In d dimensions: a d(d-1)-approximation with
    d(d+1) sensors

Isler, Magdon-Ismail. Sensor Selection in
Arbitrary Dimensions. IEEE TASE, to appear
18
Placement of stereo cameras
  • In the previous problem, we took the placement as
    given
  • How about a good placement?
  • Let's start with stereo cameras
  • That is, only 2 cameras are chosen
  • Justification: the previous theorem does not
    utilize symmetry / cones experience
  • What's the advantage?

19
Uncertainty in stereo
  • Can be represented in closed form (see the note
    below)
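One commonly used closed form, stated here as an assumption consistent with the d1·d2/sin θ expression that appears in the lower-bound argument later in this deck:

U(p, c_1, c_2) \;\propto\; \frac{d_1\, d_2}{\sin\theta},
\qquad d_i = \|p - c_i\|, \quad \theta = \angle\, c_1\, p\, c_2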

20
Placement Problem
  • Given
  • a planar workspace W
  • an uncertainty threshold U
  • Place the minimum number of cameras such that
  • for p ∈ W, let c1(p) and c2(p) be the best choice
    of cameras to track p;
  • want: U(p, c1(p), c2(p)) ≤ U for all p ∈ W

21
Initial Model
  • Cameras can be placed anywhere on the plane
  • No occlusions in the workspace
  • We make no assumptions about the shape of the
    workspace
  • Application: fire watch towers over a (flat) forest.

22
Result
  • Let OPT be the optimal algorithm which achieves
    uncertainty U with a placement of k cameras.
  • We present an algorithm that
  • uses at most βk cameras, and
  • guarantees αU uncertainty
  • α and β are two constants. There is a trade-off
    between them.
  • I'll present the result for α = 6 and β = 3.

23
Placement Algorithm
  • Has two phases. Phase 1 (SelectSensors): choose
    a set of centers in the workspace (the radius R
    used below is defined in terms of the uncertainty
    threshold U).
24
SelectSensors illustration
(Figure: centers with disks of radius R and 2R around
them; the radius-R disks are disjoint.)
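A minimal sketch of Phase 1, assuming the workspace is given as a finite list of sample points and that R has already been derived from the uncertainty threshold U: greedily pick centers so that any two centers are more than 2R apart (hence the radius-R disks around them are disjoint, as used in the claim below) and every sample is within 2R of some center.

import math

def select_centers(workspace_points, R):
    centers = []
    for p in workspace_points:
        # p becomes a new center only if no existing center covers it.
        if all(math.dist(p, c) > 2 * R for c in centers):
            centers.append(p)
    return centers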
25
Claim
  • If the optimal algorithm, OPT, achieves U with k
    cameras, then
  • (the number of centers) ≤ k
  • Proof idea
  • For each center, draw a disk of radius R around
    it
  • Note that these disks are disjoint
  • OPT must have at least one sensor in each disk

26
Suppose not: some radius-R disk around a center
contains no OPT sensor. Then both cameras serving
that center are more than R away from it, so
OPT's uncertainty > R·R / sin θ ≥ R² ≥ U. A
contradiction!
27
Phase II: Place Cameras
(Figure: a disk of radius 2R around a center.)
For each center, we place 3 cameras on the
vertices of an equilateral triangle. The total
number of cameras is ≤ 3k. (A sketch follows below.)
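A minimal sketch of Phase 2; the triangle's circumradius (taken here as 2R) and its orientation are illustrative assumptions, since the exact placement is given on the "More precisely" slide image.

import math

def place_cameras(centers, R):
    cameras = []
    for (cx, cy) in centers:
        # Three cameras on the vertices of an equilateral triangle around the center.
        for k in range(3):
            ang = 2 * math.pi * k / 3
            cameras.append((cx + 2 * R * math.cos(ang),
                            cy + 2 * R * math.sin(ang)))
    return cameras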
28
More precisely
29
Error Bound
  • For every point inside a circle, we can find two
    cameras (out of the three we placed) such that
    the uncertainty is at most 6U

30
So far
  • If OPT achieves U uncertainty with k cameras
  • We can achieve
  • 6U uncertainty with 3k cameras, or
  • In general, it is possible to improve the
    uncertainty guarantee at the expense of using
    more cameras.

31
Recent results
  • Deal with occlusions
  • a stricter error metric:
  • d1, d2 ≤ D
  • α ≤ θ ≤ π - α
  • hard constraints on the distance and the angle
    instead of a threshold on their product
  • Our algorithm uses at most O(OPT log(OPT))
    sensors and guarantees
  • d1, d2 ≤ D
  • α/2 ≤ θ ≤ π - α/2
  • And accommodates constraints on the set of
    candidate locations

Tekdas and Isler, ICRA 2007
32
Pursuit-Evasion on Graphs
  • Players restricted to vertices of a graph
  • Can move from u to v iff (u,v) is an edge
  • The goal is to capture the evader, i.e., to be
    co-located at the same vertex
  • Visibility Models
  • e.g., a player located at v can see only N(v)
  • Motion Models
  • Players move simultaneously

33
The role of information
  • Global Visibility (Cops & Robbers Game)
  • Introduced in [Nowakowski & Winkler '83]
  • Need an unbounded number of pursuers
    [Aigner & Fromme '84]
  • Characterization for special cases:
  • 1-pursuer-win graphs are dismantlable graphs
    [Brightwell & Winkler '00]
  • No Visibility (Hunter & Rabbit Game)
  • One pursuer suffices
  • O(nm²) with random walks [Aleliunas et al. '79]
  • O(n log n) algorithm [Adler et al. '02]

34
Limited (Local) Visibility
  • Players can see only their neighborhoods
  • As mentioned earlier, one hunter (pursuer) is
    enough if the rabbit (evader) has no visibility
  • Our case

A rabbit with local visibility cannot be
captured by a single hunter
35
How many hunters to capture?
36
Two hunters always suffice
  • Theorem: On any graph, two hunters suffice for
    capturing a rabbit with local visibility in
    expected polynomial time.
  • We present a randomized strategy.

37
Randomized Strategies
  • The hunters
  • make decisions based on the outcome of coin
    tosses
  • their strategy works against any rabbit strategy
  • The rabbit
  • knows the hunters' strategy beforehand
  • does not know the outcome of the hunters' coin
    tosses

38
Strategy Overview
  • Three phases
  • Phase 0: Locate the rabbit
  • Phase 1: Chase the rabbit
  • Phase 2: Set a trap and attack the rabbit
  • Will show: the probability of capture tends to 1.

39
Phase 0: Locating the rabbit
  • Divide time into rounds of length n
  • Guess the vertex v from which the rabbit will be
    visible at the end of the round
  • Go to v and wait till the end of the round
  • Probability of success is ≥ 1/n per round
  • After n log n rounds,
  • probability of success is ≥ 1 - 1/n,
  • using (1 - x) ≤ exp(-x)
  • The hunters will locate the rabbit w.h.p. in no
    more than n² log n time-steps (see the bound
    below)
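Filling in the bound: with success probability at least 1/n per round,

\Pr[\text{rabbit not located}] \;\le\; \left(1 - \tfrac{1}{n}\right)^{n \log n}
\;\le\; e^{-\log n} \;=\; \tfrac{1}{n},

and since each round lasts n steps, Phase 0 takes at most n² log n time-steps.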

40
Chasing the rabbit in phase 1
The rabbit does not see H2.
H2 chases H1.
H1 chases the rabbit and occasionally attacks.
41
Trapping the rabbit in phase 2
  • If H1 chases the rabbit for n steps, the rabbit
    must revisit a vertex v.
  • H2 stops at v and attacks when the rabbit enters
    N(v) for the first time through u.
  • H2 can do this because the rabbit never sees H2.
  • Guess u, v and the revisit time.

42
Capture Time
  • At the end of the three phases:
  • probability of capture ≥ 1/n³
  • Total length of the phases:
  • O(n² log n) w.h.p.
  • Overall, the rabbit will be captured in O(n⁶)
    time (see the calculation below)
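A back-of-the-envelope version of the overall bound (not the exact analysis from the talk): each pass through the phases takes O(n² log n) steps w.h.p. and captures with probability at least 1/n³ (the three guesses of u, v and the revisit time), so the expected number of passes is O(n³) and the expected capture time is

O(n^3) \cdot O(n^2 \log n) \;=\; O(n^5 \log n) \;\subseteq\; O(n^6).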

43
The big picture
  • Key players in the evolution of information
    technology
  • Sensing (cameras, biosensors, …)
  • Actuation (mobile nodes, pan-tilt cameras, …)
  • Robotic sensor networks
  • Communication, sensing and actuation
  • Progress in these areas has been fairly
    independent

44
Robotic Sensor Networks
  • I am interested in the interplay between
    communication, actuation and sensing
  • Many challenges
  • Succinct problem formulations
  • Environment complexity, dynamic environments
  • Coordination among many entities → yields hard
    optimization problems
  • Interaction with humans, especially in
    health-care, elder-care, education
  • Let's see some examples

45
Robotic Routers
46
A third robot can relay messages and ensure
connectivity.
Idea: use robotic routers to keep mobile
clients connected to a base station
47
Robotic Routers
  • Keep mobile users connected to a base station
  • In some cases, static deployment is wasteful
  • Examples: farming, military applications
  • How should the routers move?

48
How to model the user?
  • Known trajectory: a robot whose trajectory is
    preprogrammed
  • Adversarial trajectory: no clue about the
    trajectory. Assume the worst case: the user tries
    to break the connection as quickly as possible
    (this becomes a pursuit-evasion game)

49
Does Mobility Help? It depends on the environment,
the connectivity model and the speed of the routers.
n/3 stationary nodes vs. 1 mobile router (which
is as fast as the target)
Visibility-based communication: two robots can
communicate if they can see each other
50
Examples
One robotic router
Two robotic routers
51
Results
  • Can compute optimal (i.e., max connection time)
    algorithms for the single-user case
  • Known trajectory → dynamic programming (a minimal
    sketch follows the citation below)
  • Adversarial trajectory → game-theoretic solution
    (modified dynamic programming)
  • Catch: running time is exponential in the number
    of robots

Isler and Tekdas, ICRA 2008
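A minimal dynamic-programming sketch for the known-trajectory, single-router case, under stated assumptions: discrete time, the router moves at most one graph edge per step, and connectivity is tested by a caller-supplied predicate (e.g., mutual visibility along the base-router-user chain). The interface is illustrative, not the formulation from the paper.

def max_connection_time(graph, user_traj, base, connected, start):
    # graph: dict mapping each vertex to its list of neighbors.
    # user_traj: the user's vertex at each time step (known in advance).
    # connected(base, router, user): True if the relay chain is intact.
    T = len(user_traj)
    # best[t][v] = number of further connected steps achievable if the router
    # is at vertex v at time t.
    best = [{v: 0 for v in graph} for _ in range(T + 1)]
    for t in range(T - 1, -1, -1):
        for v in graph:
            if not connected(base, v, user_traj[t]):
                best[t][v] = 0
                continue
            moves = [v] + list(graph[v])  # stay put or move to a neighbor
            best[t][v] = 1 + max(best[t + 1][u] for u in moves)
    return best[0][start]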
52
A Recent Human Robot Interaction Project
  • How to design human-friendly controllers?
  • Friendly → not causing stress
  • Incorporate stress measurements (from a Galvanic
    Skin Response Sensor) as feedback

53
(No Transcript)
54
A crossing task
  • Go Fast
  • Go Slowly

55
Control with GSR Feedback
  • A representative task:
  • track a person with a robot, in a way that does
    not cause distress
  • Applications: health-care monitoring, robotic
    shopping carts, mobile teleconferencing (?)
  • Essentially a reinforcement learning task (a
    minimal sketch follows the citation below)
  • Challenge: hard to obtain samples (human
    experiments)

Meisner, Isler, and Trinkle. Autonomous Robots,
2008
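A minimal sketch of how GSR feedback could enter a reinforcement-learning controller, assuming a tabular Q-learning formulation; the state/action spaces, read_gsr(), the environment step() function and the stress_weight penalty are illustrative assumptions, not the controller from the paper.

import random

def q_learning_step(Q, state, actions, step, read_gsr,
                    alpha=0.1, gamma=0.9, eps=0.1, stress_weight=1.0):
    # Epsilon-greedy action selection over the tracking actions.
    if random.random() < eps:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    next_state, tracking_reward = step(state, action)
    # Fold the measured stress into the reward as a penalty.
    reward = tracking_reward - stress_weight * read_gsr()
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    Q[(state, action)] = ((1 - alpha) * Q.get((state, action), 0.0)
                          + alpha * (reward + gamma * best_next))
    return next_state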
56
Second HRI Project
  • Identify underlying principles of interaction
  • What makes a robot interactive?
  • Prototype system

57
GOAL
58
Shadow puppets
  • Current work:
  • Parse the video into basic tokens (nod, shake,
    etc.)
  • Combine these into schemas/scripts
    (initiation, greeting, etc.) that exist in human
    interaction
  • Can show that understanding the underlying
    context can help in prediction
  • Work in progress: demonstrate its utility in
    interaction

59
Roadmap Ahead
  • Many challenging problems at the intersection of
    sensing, actuation and communication
  • Novel optimization problems → theoretical results
  • Application-specific challenges → proof-of-concept
    implementations and deployments
  • Interactions with humans becoming increasingly
    common and important

60
Thanks for your attention!
  • Group
  • Supported in part by NSF CCF-0634823 and NSF
    CNS-0707939.

Eric Meisner (HRI)
Onur Tekdas (Robotic Routers)
Nikhil Karnad (Pursuit-Evasion)
61
(No Transcript)
62
(No Transcript)
63
Recent results
  • Deal with occlusions
  • a stricter error metric:
  • d1, d2 ≤ D
  • α ≤ θ ≤ π - α
  • hard constraints on the distance and the angle
    instead of a threshold on their product
  • Our algorithm uses at most O(OPT log(OPT))
    sensors and guarantees
  • d1, d2 ≤ D
  • α/2 ≤ θ ≤ π - α/2
  • And accommodates constraints on the set of
    candidate locations

64
HRI Project 2
  • How do people assign human attributes to other
    (non-human) objects?

65
(No Transcript)
66
(No Transcript)
67
HRI Research Agenda
  • Learn how people interact with each other for a
    given context
  • Build a robot that explores the context and
    interacts with people

68
What about other sensing models?
  • Case I: convex but non-polygonal
  • In this case, we can approximate the uncertainty
    region with a polygon efficiently (see the sketch
    below).
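A minimal sketch for Case I, using a disk-shaped measurement as an illustrative example: circumscribe the convex region with an m-gon, so the true location is still guaranteed to lie inside the polygonal over-approximation.

import math

def disk_to_polygon(center, r, m=16):
    # A regular m-gon circumscribing the disk (apothem = r) slightly
    # over-approximates it, so containment of the true location is preserved.
    rr = r / math.cos(math.pi / m)  # circumradius of the circumscribing m-gon
    cx, cy = center
    return [(cx + rr * math.cos(2 * math.pi * k / m),
             cy + rr * math.sin(2 * math.pi * k / m)) for k in range(m)]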

69
What about other sensing models?
  • Case II: non-convex uncertainty

Sometimes there is an efficient approximation
with a convex shape, e.g., a range-and-bearing
measurement.
Sometimes not, e.g., range-only.
70
What about other sensing models?
  • In this case one solution is to group small
    numbers of sensors and treat each group as a
    single sensor.

71
An application of the planar result
  • How to estimate the location of a target on a
    known plane (e.g., the ground),
  • using cameras (whose parameters are also known).

72
An application
  • Experimental setup: Ruzena Bajcsy's lab at UC
    Berkeley; 48 calibrated cameras.

73
Location of a target on a known plane
74
Intersections with the plane
75
Estimation Errors
(Figure: estimation errors for the best and worst
selections; the best is as good as it gets.)
76
Location of a target on a known plane
The chosen ones
77
Adding more uncertainty
  • So far, we assumed that our estimate of the
    target's location is a point.
  • What if we have a region of uncertainty U?

78
Online SSP
  • Given an uncertainty region U, and a set of
    sensors,
  • we select a subset S.
  • An adversary picks: 1. the true location of the
    target (inside U), 2. sensors for the true
    location
  • Performance measure: competitive ratio

79
Can we beat such an adversary?
  • Meaning: a constant competitive ratio?

(Figure: an uncertainty region U and candidate
cameras A, B, C. What is a good pair of cameras?)
80
Bounding the competitive ratio
  • Define Z as the furthest U gets from a sensor,
    and d as the closest U gets to a sensor.
  • Bad news: the adversary can force an increase
    of O((Z/d)²) in our estimation area by changing
    the true location.
  • Moral: we must choose sensors before the
    uncertainty region U gets too big.

81
Randomized vs. Deterministic
There is no deterministic strategy which
guarantees that the rabbit will be captured
regardless of its strategy.
82
Characterization?
  • A single hunter is not always enough
  • What is the class of hunter-win graphs?
  • We will present an algorithmic characterization