1
Flexible Power Scheduling for wireless sensor
networks
  • Barbara Hohlt
  • Eric Brewer
  • UC Berkeley
  • CENTS Retreat
  • January 2005

2
Flexible Power Scheduling
  • Flexible Power Scheduling
  • Reduces radio power consumption
  • Supports fluctuating demand (multiple queries)
  • Adaptive and decentralized schedules
  • Improved power savings
  • 4.3X over TinyDB duty cycling
  • 2X over GDI low-power listening
  • High Yield > Robustness
  • Reduced contention
  • Increased end-to-end fairness and throughput
  • Optimized per hop latency

3
Outline
  • Introduction
  • Communication Scheduling Trends
  • FPS Overview
  • Implementation and Highlights
  • Micro Benchmarks
  • Application Evaluation

4
Wireless sensor networks
  • Lifetime constrained by limited energy stores
  • Communication can dominate the energy cost
  • Turning the radio off during idle times reduces
    power consumption
  • Flexible Power Scheduling
  • Adaptively schedules nodes to save radio power
  • Decentralized
  • Multihop sense-to-gateway applications

5
Other Approaches
  Approach                     Protocol Layer
  TinyDB Duty Cycling          Application
  S-MAC Scheduled Listening    MAC
  Flexible Power Scheduling    Network
6
TinyDB Duty Cycling
(Diagram: each epoch begins with a short waking period)
  • All nodes sleep and wake at same time every epoch
  • Fixed per deployment
  • Supports a tree topology

7
S-MAC Scheduled Listening
(Diagram: each frame is split into a listen period, carrying the SYNC, RTS, and CTS exchanges, followed by a sleep period used to sleep or send data)
  • Each node maintains the listen schedules of its
    neighbors
  • Data transmitted during normal sleep time,
    otherwise radios turned off
  • Fixed listen/sleep periods per deployment
  • Supports general communication

8
Flexible Power Scheduling
(Diagram: a node's local schedule over successive cycles)
  • Each node has a local schedule
  • During idle time slots the radio is turned off
  • Schedules adapt continuously over time
  • Supports a tree topology

9
Review
  • S-MAC - MAC layer
  • Duty cycles the radio
  • Schedules messages
  • FPS - Network layer
  • Duty cycles the radio
  • Schedules traffic flows
  • Adaptive
  • TinyDB - Application layer
  • Duty cycles entire node (MCU, radio, sensors)
  • Schedules entire network

10
FPS OVERVIEW
11
Assumptions
  • Sense-to-gateway applications
  • Multihop network
  • Majority of traffic is periodic
  • Nodes are sleeping most of the time
  • Available bandwidth >> traffic demand
  • Routing component

12
FPS Two-Level Architecture
  • Coarse-grain scheduling
  • At the network layer
  • Planned radio on-off times
  • Fine-grain CSMA MAC underneath
  • Reduces contention and increases end-to-end
    fairness
  • Distributes traffic
  • Decouples correlated traffic
  • Reserve bandwidth from source to sink
  • Does not require perfect schedules or precise
    time synchronization

13
Scheduling flows
  • Schedule entire flows (not packets)
  • Make reservations based on traffic demand
  • Bandwidth is reserved from source to sink
  • Reservations remain in effect indefinitely and
    can adapt over time

14
The power schedule
  • Time is divided into cycles
  • Each cycle is divided into slots
  • Each node maintains a local power schedule of
    what operations it performs over a cycle
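A local power schedule of this kind can be pictured as a small data structure. The sketch below is an assumed representation (the slot names and 10-slot cycle are illustrative, not taken from the paper), showing how idle slots translate directly into radio-off time.

```python
from enum import Enum

class Slot(Enum):
    TRANSMIT = "T"   # forward a message toward the gateway
    RECEIVE = "R"    # receive from a child
    ADVERTISE = "A"  # offer spare bandwidth (see the joining protocol)
    IDLE = "I"       # radio off

# A hypothetical 10-slot cycle for one node: the radio is on only
# in the non-idle slots, so the duty cycle is active / total.
schedule = [Slot.IDLE] * 10
schedule[2] = Slot.RECEIVE
schedule[5] = Slot.TRANSMIT
schedule[7] = Slot.ADVERTISE

def duty_cycle(sched):
    active = sum(1 for s in sched if s is not Slot.IDLE)
    return active / len(sched)

print(duty_cycle(schedule))  # 0.3 -- radio on 30% of the cycle
```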

15
Adaptive Scheduling
(Diagram: each node's local state is a data structure tracking supply and demand)
  • Demand represents how many messages a node seeks
    to forward each cycle
  • Supply is reserved bandwidth
  • The network keeps some preallocated bandwidth in
    reserve
  • Changes percolate up the network tree

16
Supply and Demand
(Diagram: supply and demand tracked over a cycle)
  1. If supply < demand:
  2.   Request reservation
  3.   If ACK → increment supply
  4. If supply > demand:
  5.   Offer reservation
  6.   If REQ → increment demand

  Cycle   Supply   Demand
    1       0        1
    2       1        1
    3       1        2
For the purposes of this example, one unit of
demand counts as one message per cycle.
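The supply/demand rules and the trace above can be sketched in a few lines of Python. This is a minimal illustration, not the TinyOS implementation; the function name `step` and its flag arguments are hypothetical stand-ins for the ACK/REQ messages of the protocol.

```python
def step(supply, demand, parent_granted=False, child_joined=False):
    """One adaptation cycle (sketch of the rules on slide 16).

    A child's REQ raises our demand; whenever supply < demand we
    request a reservation from our parent, and an ACK raises supply.
    This is how demand changes percolate up the network tree.
    """
    if child_joined:                           # REQ received from a child
        demand += 1
    if supply < demand and parent_granted:     # our own REQ was ACKed
        supply += 1
    return supply, demand

# Reproducing the slide's trace, starting at cycle 1: supply 0, demand 1
s, d = 0, 1
s, d = step(s, d, parent_granted=True)   # cycle 2: supply 1, demand 1
s, d = step(s, d, child_joined=True)     # cycle 3: supply 1, demand 2
print(s, d)  # 1 2
```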
17
Reduced Latency
Sliding Reservation Window
Using only local information, the next Receive
slot is always within w slots of the next Transmit
slot, putting an upper bound on the per-hop
latency of the network.
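One way to read the sliding-window claim: if a node always places its Receive slot in the w slots preceding its next Transmit slot, a received message waits at most w slots before being forwarded. The sketch below illustrates that placement; the function name and parameters are hypothetical, chosen only for this example.

```python
def pick_receive_slot(tx_slot, free_slots, w, cycle_len):
    """Pick a Receive slot within the w slots before our next
    Transmit slot (sketch). A message received in that slot is
    forwarded at most w slots later, bounding per-hop latency.
    """
    # Candidate slots, nearest to the Transmit slot first,
    # wrapping around the cycle boundary.
    window = [(tx_slot - k) % cycle_len for k in range(1, w + 1)]
    for slot in window:
        if slot in free_slots:
            return slot
    return None  # no free slot in the window; defer the reservation

# Transmit in slot 8 of a 10-slot cycle, window w = 3:
rx = pick_receive_slot(tx_slot=8, free_slots={2, 5, 6}, w=3, cycle_len=10)
print(rx)  # 6 -- two slots before the Transmit slot, within w
```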
18
Receiver Initiated Scheduling
FPS Joining Protocol
(Diagram: the receiver broadcasts an ADV during one of its Tx slots; a listening sender replies with a REQ, and the receiver answers with a CONF, after which the slot becomes Rx for the receiver and Tx for the sender)
  • Periodically advertise available bandwidth
  • When a node wants to join it listens for
    advertisements and sends a request
  • Thereafter it can increase/decrease its demand
    during this scheduled time slot
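The three-message ADV/REQ/CONF handshake can be sketched as plain message-passing. This is an illustrative model under assumed message shapes (simple dicts), not the actual TinyOS packet formats.

```python
# Receiver-initiated joining (sketch): the parent advertises a
# spare slot, a joining child requests it, the parent confirms
# and records the slot as Receive in its local schedule.

def parent_advertise(spare_slot):
    return {"type": "ADV", "slot": spare_slot}

def child_request(adv):
    # The joining node listens for an ADV and asks for that slot.
    return {"type": "REQ", "slot": adv["slot"]}

def parent_confirm(req, schedule):
    schedule[req["slot"]] = "R"   # parent will Receive in this slot
    return {"type": "CONF", "slot": req["slot"]}

parent_sched = {}
adv = parent_advertise(spare_slot=4)
req = child_request(adv)
conf = parent_confirm(req, parent_sched)
print(conf["type"], parent_sched)  # CONF {4: 'R'}
```

Thereafter the child can raise or lower its demand during this same scheduled slot, without re-running the handshake.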

19
Properties of supply/demand
  • All topology changes cast as demand
  • Joining
  • Failure
  • Lossy link
  • Multiple queries
  • Mobility
  • 3 classes of node
  • Router and application
  • Router only
  • Application only
  • Load balancing

20
IMPLEMENTATION
21
Implementation
  • HW
  • Mica
  • Mica2Dot
  • Mica2
  • SW
  • Slackers
  • TinyDB/FPS
  • GDI/FPS

22
Architecture
  • Schedules traffic flows
  • Coordinates with routing and application layers

23
Micro Benchmarks Mica
  • Power Consumption
  • Fairness and Yield
  • Contention

24
Slackers: early experiment on Mica, 5X savings
(Chart: current in mA over time in seconds; average 1.4 mA)
25
End-to-end Fairness and Yield
            Avg    Std dev   Max/Min
  FPS       96.4   1.13      1.03
  Naïve     24.7   6.19      2.48
26
Contention is Reduced
27
Application Evaluation
  • TinyDB/FPS vs. TinyDB duty cycling
  • 4.3X power savings
  • Multiple queries
  • Partial flows
  • Query dissemination
  • Aggregation
  • Previous talk
  • GDI/FPS vs. GDI low-power listening (LPL)
  • 2X power savings
  • 20% increase in yield

28
GDI/FPS (Mica2)
(Chart: current in mA over time in seconds; CPU on 4.28 mA, CPU off 0.71 mA)
29
Direct comparison
FPS results are from in-lab experiments; LPL
results are reported from a real deployment. The
bars indicate varying sample rates of 30, 60, 300,
and 1200 seconds.
30
Direct comparison
FPS results are from in-lab experiments; LPL
results are reported from a real deployment. The
bars indicate varying sample rates of 30, 60, 300,
and 1200 seconds.
31
Summary
  • Flexible Power Scheduling
  • Two-level architecture
  • Schedules flows (not packets)
  • Adaptive and decentralized schedules
  • High Yield
  • Reduced contention
  • Increased end-to-end fairness and throughput
  • Reduced end-to-end latency
  • Supports multiple queries (fluctuating demand)
  • Improved power savings
  • 4.3X over TinyDB duty cycling
  • 2X over GDI low-power listening

32
Thank You
Barbara Hohlt  hohltb@cs.berkeley.edu
33
END
34
Radio Current
  RFM TR1000 (916 MHz)          Max     Units
    Tx (0.75 mW output power)   12      mA
    Rx                          4.8     mA
  Chipcon CC1000 (433 MHz)      Typ     Units
    Tx (1 mW output power)      10.4    mA
    Rx                          7.4     mA
Sources: RFM TR1000 and Chipcon CC1000 data sheets
35
Network Adaptation
(Diagram: a multihop route from a source node to the gateway, showing the network response as Node 3 increases and then decreases its demand by 2)
36
Low-Power Listening
Each node wakes up periodically to sample the
channel for traffic and goes right back to sleep
if there is nothing to be received. The channel
is polled 10 times per second.
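The polling behavior described above can be sketched as follows. This is a toy model, not the real radio driver; the 10 Hz rate is from the slide, while the function and sample values are illustrative.

```python
POLL_HZ = 10  # the channel is sampled 10 times per second (slide text)

def lpl_poll(channel_busy):
    """One low-power-listening wakeup (sketch): sample the channel,
    stay awake to receive only if traffic is detected, otherwise
    go right back to sleep."""
    return "receive" if channel_busy else "sleep"

# Over one second: the radio wakes POLL_HZ times, sleeps through
# the nine idle samples, and stays up for the one that finds traffic.
samples = [False] * 9 + [True]
actions = [lpl_poll(busy) for busy in samples]
print(actions.count("sleep"), actions.count("receive"))  # 9 1
```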
37
Receiver Initiated Minority Listens
Nodes that want to send are in the minority and
listen for invitations:
  • Receiving nodes send invitations
  • Sending nodes listen for invitations
38
TinyOS Network Stack
(Diagram: FPS in the TinyOS network stack, handling incoming and outgoing messages)