Title: CatchUp: A Data Aggregation Scheme for VANETs
1. Catch-Up: A Data Aggregation Scheme for VANETs
- Bo Yu, Jiayu Gong, Cheng-Zhong Xu
- Dept. of ECE, Wayne State Univ.
- ACM VANET '08
2. Outline
- Introduction
- Related Work
- Motivation
- Aggregation Scheme
- Analysis
- Simulation
- Conclusion
3. Introduction
- Traffic Information Dissemination
- Each vehicle periodically detects the traffic conditions around it and then forwards the information to the vehicles following behind it
- Redundant Data, Limited Bandwidth
- Multiple redundant copies of the same traffic status are generated
- These copies consume a considerable amount of bandwidth
4. Data Aggregation
- A useful technique to reduce data redundancy and improve communication efficiency
- Two aspects
- Routing-related (our focus): how two reports can meet each other at the same node at the same time
- Data-related: coding, calculation, and compression of aggregatable data
[Figure: v1 carries r1 (30 mph) and v2 carries r2 (35 mph); v3 aggregates them into r3 = r1 ⊕ r2 with speed (30 + 35) / 2 = 32.5 mph]
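A minimal sketch of the data-related side of this example, not code from the paper: two speed reports for the same road section are merged into one averaged report. The `Report` class and `merge` function are hypothetical names used only for illustration.

```python
from dataclasses import dataclass

@dataclass
class Report:
    segment: str      # road section the report describes
    speed_mph: float  # observed average speed
    count: int = 1    # number of raw reports merged into this one

def merge(a: Report, b: Report) -> Report:
    """Aggregate two reports for the same road section by taking
    a count-weighted average of their speeds."""
    assert a.segment == b.segment
    total = a.count + b.count
    speed = (a.speed_mph * a.count + b.speed_mph * b.count) / total
    return Report(a.segment, speed, total)

r1 = Report("I-94:mile12", 30.0)
r2 = Report("I-94:mile12", 35.0)
r3 = merge(r1, r2)   # Report(segment='I-94:mile12', speed_mph=32.5, count=2)
```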
5. Related Work
- Structured Aggregation
- A routing structure, a forwarding tree, is maintained to ensure that reports can be forwarded to the same node at the same time
- Widely used in sensor networks, but infeasible in VANETs
6. Related Work (Cont.)
- Structureless Aggregation
- Randomized Waiting (see the sketch after this slide)
- Wait for a random period before forwarding to the next hop
- During the waiting period, more reports can be received and aggregated
- Periodical Waiting
- TrafficView, SOTIS
- Wait for a fixed period before forwarding to the next hop
- An open question: how long should a node wait to achieve better aggregation performance?
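To make the two baselines concrete, here is a minimal sketch (my own illustration, not code from Randomized Waiting, TrafficView, or SOTIS) of how the forwarding delay would be chosen under each scheme; `max_wait_s` and `fixed_wait_s` are assumed parameters.

```python
import random

def randomized_waiting_delay(max_wait_s: float = 5.0) -> float:
    # Randomized Waiting: each node waits a uniformly random period
    # before forwarding, hoping to overhear and absorb more reports.
    return random.uniform(0.0, max_wait_s)

def periodical_waiting_delay(fixed_wait_s: float = 2.0) -> float:
    # Periodical Waiting (TrafficView/SOTIS style): every node waits
    # the same fixed period before forwarding to the next hop.
    return fixed_wait_s
```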
7. Motivation
- Two Properties of VANETs
- Channel Eavesdropping
- Every node is able to overhear reports being transmitted on the channel and log them into its local database
- Traffic information is not delay-sensitive
- Even a delay of tens of seconds is still acceptable
8. Motivation (Cont.)
- Determine the waiting time based on the local observations of individual vehicles
- Challenge: outdated and incomplete knowledge
[Figure: reports r1 and r2 propagate across vehicles v1, v2, v3; r1 is ahead, so r2 should speed up and catch up with r1]
9. Distributed MDP Model
- s: world state
- o: observation
- b: internal state
- a: action
- SE: State Estimator
- π: Decision Maker (policy)
10. Distributed MDP Model (Cont.)
- Action a ∈ {WALK, RUN}
- The propagation speed of a report (how fast shall we propagate the report)
- Can be transformed into two different delays before forwarding to the next hop
- Observation o: eavesdropped reports
- An observation is a tuple <report, time_stamp, action (WALK/RUN)>
- Internal state b: the estimated position of a report
- b(r, p, t): the probability that report r is at position p at time t
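A minimal sketch of the bookkeeping this model implies, using hypothetical names: each overheard report is logged as an observation tuple, and the chosen action is mapped to a per-hop forwarding delay. The hop distance and the WALK/RUN speeds below are assumptions (the speeds follow the parameter values used in the Results slides).

```python
import time
from dataclasses import dataclass, field

WALK, RUN = "WALK", "RUN"
HOP_DISTANCE_M = 200.0                      # assumed average hop length
SPEED_M_PER_S = {WALK: 100.0, RUN: 1000.0}  # report propagation speeds

@dataclass
class Observation:
    report_id: str
    time_stamp: float
    action: str  # WALK or RUN

def forwarding_delay(action: str) -> float:
    """Translate a propagation-speed action into the delay a node
    waits before forwarding the report to the next hop."""
    return HOP_DISTANCE_M / SPEED_M_PER_S[action]

@dataclass
class LocalDatabase:
    observations: list = field(default_factory=list)

    def eavesdrop(self, report_id: str, action: str) -> None:
        # Every overheard report is logged, even when this node
        # is not the intended next hop.
        self.observations.append(Observation(report_id, time.time(), action))
```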
11. Expected Future Reward
- Objective
- To find a policy which maximizes the expected future reward
- Policy π
- A sequence of actions to be performed for a given report in the future
- Expected Future Reward
- V(π) = Σ_{t≥0} γ^t · w_t
- γ: a future discount factor
- w_t: the expected reward at time t (the saved communication overhead due to aggregation of reports)
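A worked illustration of this discounted sum, with made-up reward values; `expected_future_reward` is an illustrative helper, not the paper's code.

```python
def expected_future_reward(rewards, gamma=0.9):
    """Discounted sum V = sum_t gamma^t * w_t, where w_t is the
    expected saved communication overhead at step t."""
    return sum((gamma ** t) * w for t, w in enumerate(rewards))

# Example: a policy expected to save 0, 1, and 3 transmissions
# over the next three steps.
print(expected_future_reward([0.0, 1.0, 3.0]))  # 0 + 0.9*1 + 0.81*3 = 3.33
```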
12. Expected Future Reward (Cont.)
- Virtual Report r0
- To encourage some reports to speed up, but others to slow down
- Is assumed to always be following the current report
- Can be configured according to the average frequency of the event source
- Internal States
- (r0, r1, r2, ...)
- Total Expected Reward
13. Decision Tree
- To find the optimal policy π = (a0, a1, a2, ...)
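One way to read the decision tree is as an enumeration of WALK/RUN sequences up to a small horizon, keeping the sequence with the highest discounted reward. The sketch below is my own illustration under that reading; the toy reward model stands in for the paper's state-estimator-driven reward and is not its actual definition.

```python
from itertools import product

ACTIONS = ("WALK", "RUN")

def best_policy(reward_fn, horizon=3, gamma=0.9):
    """Enumerate every action sequence up to `horizon` (the leaves of
    the decision tree) and return the one maximizing the discounted reward."""
    def value(seq):
        return sum((gamma ** t) * reward_fn(seq[: t + 1]) for t in range(len(seq)))
    return max(product(ACTIONS, repeat=horizon), key=value)

# Toy reward: running at the first step is rewarded (catching up with
# the report ahead), walking afterwards avoids wasted transmissions.
def toy_reward(prefix):
    t = len(prefix) - 1
    return 1.0 if (prefix[-1] == "RUN") == (t == 0) else 0.0

print(best_policy(toy_reward))  # ('RUN', 'WALK', 'WALK')
```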
14. Other Issues
[Figure: vehicles v1 to v4 exchange reports r1 and r2 and an aggregated report r1 ⊕ r2; one transmission of r1 is marked X (lost)]
15. Other Issues (Cont.)
- How to judge whether a report is contained in another aggregated report
- Is r1 ⊆ r3, where r3 = r1 ⊕ r2?
- Bloom Filter
- A space-efficient probabilistic data structure used to test whether an element is a member of a set
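A minimal Bloom-filter sketch (my own, not the paper's implementation) for testing whether a raw report ID is likely already covered by an aggregated report; the filter size and hashing scheme are assumptions.

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits: int = 256, num_hashes: int = 3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # bit array packed into one integer

    def _positions(self, item: str):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # False positives are possible; false negatives are not.
        return all(self.bits & (1 << pos) for pos in self._positions(item))

# r3 carries a Bloom filter of the raw report IDs it aggregates.
r3_filter = BloomFilter()
r3_filter.add("r1")
r3_filter.add("r2")
print(r3_filter.might_contain("r1"))  # True: r1 is (very likely) already in r3
print(r3_filter.might_contain("r4"))  # almost certainly False: r4 still needs to propagate
```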
16. Property
- All reports from a given road section and a given time period can be aggregated into an overview report
- The convergence time has an upper bound
- The convergence distance has an upper bound
17. Simulation
- Based on NS2 and GrooveNet
- Compared to Randomized Waiting
18. Results
- CATCHUP(100, 1000): walking speed of 100 m/s, running speed of 1000 m/s
- CATCHUP(200, 2000): walking speed of 200 m/s, running speed of 2000 m/s
- For CATCHUP, the aggregation operations mainly occur within the first 5 km
- For Randomized Waiting, the aggregation operations are distributed over the entire propagation distance
19. Results (Cont.)
- Before running, the two schemes are scaled to the same total delay
- For CATCHUP, the delay mainly resides within the first 4 km
- CATCHUP trades increased delay for reduced communication overhead
- For Randomized Waiting, the delay grows linearly with distance
20. Conclusion
- We studied the adaptive control of forwarding delay for data aggregation in VANETs
- Aggregation is a tradeoff between delay and communication overhead
- We make the delay more controllable so that a report has a better chance of being aggregated with other reports
21. Thanks!
- Bo Yu, Jiayu Gong, Cheng-Zhong Xu
- Dept. of ECE, Wayne State Univ.
- ACM VANET '08