Title: Techniques for Building Long-Lived Wireless Sensor Networks
1 Techniques for Building Long-Lived Wireless Sensor Networks
- Jeremy Elson and Deborah Estrin
- UCLA Computer Science Department and USC/Information Sciences Institute
- Collaborative work with R. Govindan, J. Heidemann, and SCADDS of other grad students
2 What might make systems long-lived?
- Consider energy the scarce system resource
- Minimize communication (esp. over long distances)
- Computation costs much less, so do in-network processing: aggregation, summarization
- Adaptivity at fine and coarse granularity
- Maximize lifetime of the system, not of individual nodes
- Exploit redundancy; design for low duty-cycle operation
- Exploit non-uniformities when you have them
- Tiered architecture
- New metrics
3 What might make systems long-lived?
- Robustness to dynamic conditions: make the system self-configuring and self-reconfiguring
- Avoid manual configuration
- Empirical adaptation (measure and act)
- Localized algorithms prevent single points of failure and help to isolate the scope of faults
- Also crucial for scaling purposes!
4 The Rest of the Talk
- Some of our initial building blocks for creating long-lived systems
- Directed diffusion - a new data dissemination paradigm
- Adaptive fidelity
- Use of small, randomized identifiers
- Tiered architecture
- Time synchronization
5 Directed Diffusion: A Paradigm for Data Dissemination
- Key features
- name data, not nodes
- interactions are localized
- data can be aggregated or processed within the network
- network empirically adapts to best distribution path, the correct duty cycle, etc.
- The exchange proceeds in three steps (a code sketch follows below):
1. Low data rate (exploratory data along all gradients)
2. Reinforcement (of the best-performing path)
3. High data rate (along the reinforced path)
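The three steps above can be made concrete with a small sketch. This is a minimal, illustrative model only: the Node class, its fields, and the example interest are assumptions made here for clarity, not the talk's actual implementation.

```python
class Node:
    """Toy node for sketching interest flooding, gradients, and reinforcement."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []      # adjacent Node objects
        self.gradients = {}      # interest -> {neighbor: data rate}
        self.data_from = {}      # interest -> neighbor that first delivered data

    def receive_interest(self, interest, sender, low_rate=1):
        # Step 1: interests (named data, not node addresses) flood at a low
        # exploratory rate; each node keeps a gradient toward the sender.
        grads = self.gradients.setdefault(interest, {})
        if sender not in grads:
            grads[sender] = low_rate
            for n in self.neighbors:
                if n is not sender:
                    n.receive_interest(interest, self, low_rate)

    def receive_data(self, interest, sender):
        # Exploratory data flows back along gradients; remember which
        # neighbor delivered it first so that path can be reinforced.
        self.data_from.setdefault(interest, sender)

    def reinforce(self, interest, high_rate=10):
        # Steps 2-3: raise the rate requested from the neighbor that
        # delivered data first, and repeat hop by hop toward the source,
        # so high-rate data flows only along the reinforced path.
        upstream = self.data_from.get(interest)
        if upstream is not None:
            self.gradients.setdefault(interest, {})[upstream] = high_rate
            upstream.reinforce(interest, high_rate)


# Tiny usage example on a three-node chain: source -- relay -- sink.
source, relay, sink = Node("source"), Node("relay"), Node("sink")
source.neighbors, relay.neighbors, sink.neighbors = [relay], [source, sink], [relay]
sink.receive_interest("temperature", sender=sink)    # sink injects its own interest
relay.receive_data("temperature", sender=source)     # exploratory data arrives
sink.receive_data("temperature", sender=relay)
sink.reinforce("temperature")                        # pull high-rate data along that path
```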
6 Diffusion Key Results
- Directed diffusion can provide significantly longer network lifetimes than existing schemes
- Keys to achieving this
- In-network aggregation
- Empirical adaptation to path
- Localized algorithms and adaptive fidelity
- There exist simple, localized algorithms that can adapt their duty cycle and thereby increase overall network lifetime
7 Adaptivity I: Robustness in Data Diffusion
A primary goal of data diffusion is robustness through empirical adaptation: measuring and reacting to the environment.
Because of this adaptation, mean latency (shown here) for data diffusion degrades only mildly even with 10-20 node failures.
(Figure: mean latency curves for no failures, 10 node failures, and 20 node failures)
8 Adaptivity II: Adaptive Fidelity
- extend system lifetime while maintaining accuracy
- approach (see the sketch below)
- estimate the node density needed for the desired quality
- automatically adapt to variations in current density due to uneven deployment or node failure
- assumes dense initial deployment or additional node deployment
(Figure: redundant nodes in a dense deployment put to sleep)
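A minimal sketch of the density-driven duty cycling described above, assuming a simple probabilistic sleep rule; REQUIRED_ACTIVE and the probability formula are illustrative assumptions, not the project's actual algorithm.

```python
import random

REQUIRED_ACTIVE = 4   # assumed neighbors needed for the desired sensing/routing quality

def duty_cycle_decision(live_neighbors):
    """Return True if this node should stay awake this epoch.

    With `live_neighbors` nodes heard recently, each node stays awake with
    probability REQUIRED_ACTIVE / (live_neighbors + 1), so on average roughly
    REQUIRED_ACTIVE nodes per neighborhood remain active. As nodes fail and
    live_neighbors drops, the probability rises automatically, trading
    lifetime for maintained fidelity.
    """
    p_awake = min(1.0, REQUIRED_ACTIVE / (live_neighbors + 1))
    return random.random() < p_awake

# Example: a dense neighborhood (12 live neighbors) sleeps most of the time,
# while a sparse one (3 live neighbors) stays awake almost every epoch.
for n in (12, 3):
    awake = sum(duty_cycle_decision(n) for _ in range(1000))
    print(f"{n} neighbors -> awake in {awake / 10:.0f}% of epochs")
```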
9 Adaptive Fidelity Status
- applications
- maintain consistent latency or bandwidth in multihop communication
- maintain consistent sensor vigilance
- status
- probabilistic neighborhood estimation for ad hoc routing
- 30-55% longer lifetime with 2-6 sec higher initial delay
- currently underway: location-aware neighborhood estimation
10 Small, Random Identifiers
- Sensor nets have many uses for unique identifiers (packet fragmentation, reinforcement, compression codebooks...)
- It's critical to maximize the usefulness of every bit transmitted; each reduces net lifetime (Pottie)
- Low data rates and high dynamics leave no room to amortize large (guaranteed unique) ids or a claim/collide protocol
- So use small, random, ephemeral transaction ids?
- Locality is key: random ids are much smaller than guaranteed unique ids if the total net size is large and transaction density is small
- ID collisions lead to occasional losses; persistent losses are avoided because the identifiers are constantly changing
- Marginal cost of occasional losses is small compared to losses from dynamics, wireless conditions, collisions
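A back-of-the-envelope check (not from the talk) of why small random ids suffice when transaction density is low: the birthday-style probability that any two concurrent transactions in a neighborhood pick the same k-bit identifier.

```python
def collision_probability(n_transactions, id_bits):
    """P(at least two concurrent transactions pick the same random ID)."""
    space = 2 ** id_bits
    p_no_collision = 1.0
    for i in range(n_transactions):
        p_no_collision *= (space - i) / space
    return 1.0 - p_no_collision

# With only a handful of transactions active in any one neighborhood (5 is an
# assumed figure), an 8-bit id already makes collisions rare, whereas a
# globally unique node id would need far more bits in a large network.
for bits in (4, 8, 16):
    print(bits, "bits:", round(collision_probability(5, bits), 4))
```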
11 Address-Free Fragmentation
AFF allows us to optimize the bits used for identifiers:
Fewer bits: fewer wasted bits per data bit, but a high collision rate
vs.
More bits: less waste due to ID collisions, but many bits wasted on headers
(Data size: 16 bits)
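The trade-off can be illustrated numerically. The 16-bit data size comes from the slide; the concurrency level, the loss model (a fragment is wasted whenever its id collides), and the useful_fraction metric are assumptions made here for illustration.

```python
DATA_BITS = 16          # payload bits per fragment (from the slide)
CONCURRENT = 5          # assumed concurrent transactions in a neighborhood

def collision_probability(n, id_bits):
    """Birthday-style probability of an id collision among n transactions."""
    space = 2 ** id_bits
    p_clear = 1.0
    for i in range(n):
        p_clear *= (space - i) / space
    return 1.0 - p_clear

def useful_fraction(id_bits):
    """Expected useful data bits per transmitted bit for a given id size."""
    header_efficiency = DATA_BITS / (DATA_BITS + id_bits)   # small ids waste fewer header bits
    delivery = 1.0 - collision_probability(CONCURRENT, id_bits)  # large ids lose fewer fragments
    return header_efficiency * delivery

best = max(range(1, 33), key=useful_fraction)
print("best ID size under these assumptions:", best, "bits")
```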
12 Exploit Non-Uniformities I: Tiered Architecture
- Consider a memory hierarchy: registers, cache, main memory, swap space on disk
- Due to locality, it provides the illusion of a flat memory that has the speed of registers but the size and price of disk space
- Similar goal in sensor nets: we want a spectrum of hardware within a network, with the illusion of
- the CPU/memory, range, and scaling properties of large nodes
- the price, numbers, power consumption, and proximity to physical phenomena of the smallest
13 Exploit Non-Uniformities I: Tiered Architecture
- We are implementing a sensor net hierarchy: PC-104s, tags, motes, ephemeral one-shot sensors
- Save energy by
- Running the lower power and more numerous nodes at higher duty cycles than larger ones
- Having low-power pre-processors activate higher power nodes or components (Sensoria approach; sketched below)
- Components within a node can be tiered too
- Our tags are a stack of loosely coupled boards
- Interrupts activate high-energy assets only on demand
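A minimal sketch of the tiered wake-up idea, where a cheap always-on check gates an expensive one; the threshold and the low_power_tier / high_power_tier functions are hypothetical names used here for illustration.

```python
WAKE_THRESHOLD = 0.8    # assumed detection score needed to wake the larger node

def low_power_tier(sample):
    """Cheap, always-on check (e.g., a threshold detector on a small node)."""
    return sample > WAKE_THRESHOLD

def high_power_tier(sample):
    """Expensive processing on the larger node; normally asleep."""
    return f"classified event with signal level {sample:.2f}"

def on_sample(sample):
    # The high-energy asset is activated on demand, by interrupt,
    # instead of running at a continuous duty cycle.
    if low_power_tier(sample):
        return high_power_tier(sample)
    return None   # stay asleep, spend almost no energy

print(on_sample(0.3))   # -> None (high-power tier never woken)
print(on_sample(0.95))  # -> classification result
```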
14 Exploit Non-Uniformities II: Time Synchronization
- Time sync is critical at many layers; some affect energy use/system lifetime
- TDMA guard bands
- Data aggregation and caching
- Localization
- But time sync needs are non-uniform
- Precision
- Lifetime
- Scope and availability
- Cost and form factor
- No single method is optimal on all axes
15 Exploit Non-Uniformities II: Time Synchronization
- Use multiple modes
- Post-facto synchronization pulse (sketched below)
- NTP
- GPS, WWVB
- Relative time chaining
- Combinations can (?) be necessary and sufficient to minimize resource waste
- Don't spend energy to get better sync than the app needs
- Work in progress
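A minimal sketch of the post-facto synchronization idea listed above: nodes timestamp an event with unsynchronized local clocks, and a sync pulse broadcast afterwards cancels the unknown offsets. The timestamps and clock offsets below are made-up illustrative numbers.

```python
def normalize(event_local_time, pulse_local_time):
    """Express an event time relative to the shared sync pulse.

    Because the pulse is received (nearly) simultaneously everywhere,
    subtracting each node's local pulse time removes its unknown clock
    offset; no energy is spent keeping clocks synchronized beforehand.
    """
    return event_local_time - pulse_local_time

# Two nodes observe the same event; their clocks differ by 1000 ticks.
node_a = {"event": 5123.0, "pulse": 5200.0}   # clock offset +1000 ticks
node_b = {"event": 4123.5, "pulse": 4200.0}   # reference clock

print(normalize(node_a["event"], node_a["pulse"]))   # -77.0
print(normalize(node_b["event"], node_b["pulse"]))   # -76.5, agrees to within jitter
```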
16 Conclusions
- Many promising building blocks exist, but
- Long-lived often means highly vertically integrated and application-specific
- Traditional layering is often not possible
- The challenge is creating reusable components common across systems
- Create general-purpose tools for building networks, not general-purpose networks