Routing (transcript of a PowerPoint presentation; https://cse.buffalo.edu)

1
Routing
  • Murat Demirbas
  • SUNY Buffalo

2
Routing patterns in WSN
  • Model: static large-scale WSN
  • Convergecast: nodes forward their data to the
    basestation over multihops; scenario: monitoring
    application
  • Broadcast: basestation pushes data to all nodes
    in the WSN; scenario: reprogramming
  • Data driven: nodes subscribe for data of interest
    to them; scenario: operator queries the nearby
    nodes for some data (similar to querying)

3
Outline
  • Convergecast
    • Routing tree
    • Grid routing
    • Reliable bursty broadcast
  • Broadcast
    • Flood, Flood-Gossip-Flood
    • Trickle
    • Polygonal broadcast, Fire-cracker
  • Data driven
    • Directed diffusion
    • Rumor routing

4
Routing tree
  • Most commonly used approach is to induce a
    spanning tree over the network
  • The root is the base-station
  • Each node forwards data to its parent
  • In-network aggregation possible at intermediate
    nodes
  • Initial construction of the tree is problematic
  • Broadcast storm (recall complex behavior at
    scale)
  • Link status changes non-deterministically
  • Snooping on nearby traffic to choose high-quality
    neighbors pays off
  • Taming the Underlying Challenges of Reliable
    Multihop Routing
  • Trees are problematic since a change somewhere in
    the tree might lead to escalating changes in the
    rest (or a deformed structure)
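The forward-to-parent idea with in-network aggregation can be sketched as a toy model (the function name and the dict-based tree encoding are ours, not from the deck):

```python
from collections import defaultdict

def convergecast_sum(parent, readings):
    """Convergecast with in-network aggregation (sum).
    parent: dict node -> parent id (root maps to None).
    Each node combines its children's values with its own reading and
    forwards a single aggregate to its parent; the value computed at the
    root is what the base-station receives."""
    children = defaultdict(list)
    root = None
    for node, p in parent.items():
        if p is None:
            root = node
        else:
            children[p].append(node)

    def aggregate(node):
        return readings[node] + sum(aggregate(c) for c in children[node])

    return aggregate(root)
```

With aggregation, each link carries one combined value instead of one packet per descendant.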

7
Grid Routing Protocol
  • The protocol is simple
  • it requires each mote to send only one
    three-byte msg every T seconds
  • This protocol is reliable
  • it can overcome random msg loss and mote failure
  • Routing on a grid is stateless
  • the perturbed region upon failure of nodes is
    bounded by their local neighborhood

8
The Logical Grid
  • The motes are named as if they form an M × N
    logical grid
  • Each mote is named by a pair (i, j) where
  • i ∈ 0..M-1 and j ∈ 0..N-1
  • The network root is mote (0,0)
  • Physical connectivity between motes is a superset
    of their connectivity in the logical grid

[Figure: a 3 × 2 logical grid of motes, (0,0) through
(2,1), with the network root (0,0) at the lower left]
9
Neighbors
  • Each mote (i, j) has
  • two low-neighbors (i-H, j) and (i, j-H)
  • two high-neighbors (i+H, j) and (i, j+H)
  • H is a positive integer called the tree hop
  • If a mote (i, j) receives a msg from any mote
    other than its low- and high-neighbors, (i, j)
    discards the msg

[Figure: mote (i, j) with its low-neighbors (i-H, j),
(i, j-H) and its high-neighbors (i+H, j), (i, j+H)]
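The neighbor rule translates directly into code (a minimal sketch; the function names and the grid-bounds check are ours):

```python
def neighbors(i, j, H, M, N):
    """Low- and high-neighbors of mote (i, j) on the M x N logical grid
    with tree hop H; positions falling outside the grid are dropped."""
    low = [(i - H, j), (i, j - H)]
    high = [(i + H, j), (i, j + H)]
    in_grid = lambda p: 0 <= p[0] < M and 0 <= p[1] < N
    return [p for p in low if in_grid(p)], [p for p in high if in_grid(p)]

def accept_msg(sender, i, j, H, M, N):
    """Mote (i, j) discards msgs from any mote that is neither a low-
    nor a high-neighbor."""
    low, high = neighbors(i, j, H, M, N)
    return sender in low or sender in high
```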
10
Communication Pattern
  • Each mote (i, j) can send msgs whose ultimate
    destination is mote (0, 0)
  • The motes need to maintain an incoming spanning
    tree whose root is (0, 0); each mote maintains a
    pointer to its parent
  • When a mote (i, j) has a msg, it forwards the msg
    to its parent. This continues until the msg
    reaches mote (0, 0).

[Figure: example routing tree on the grid, H = 2]
11
Choosing the Parent
  • Usually, each mote (i, j) chooses one of its
    low-neighbors (i-H, j) or (i, j-H) to be its
    parent
  • If both its low-neighbors fail, then (i, j)
    chooses one of its high-neighbors (i+H, j) or
    (i, j+H) to be its parent. This is called an
    inversion
  • Example: there is one inversion at mote (2, 2)
    because the two low-neighbors of (2, 2) have
    failed.

[Figure: H = 2; the two low-neighbors of (2, 2) have
failed, causing an inversion]
12
Inversion Count
  • Each mote (i, j) maintains
  • the id (x, y) of its parent, and
  • the value c of its inversion count: the number
    of inversions that occur along the tree path
    from (i, j) to (0, 0)
  • Inversion count c has an upper bound cmax
  • Example:

[Figure: example grid with H = 2; several motes have
failed, and each live mote is labeled with its parent
id and inversion count, e.g. (3,2), 1 after an
inversion]
13
Protocol Message
  • If a mote (i, j) has a parent, then every T
    seconds it sends a msg with three fields
  • connected(i, j, c)
  • where c is the inversion count of mote (i,j)
  • Otherwise, mote (i, j) does nothing.
  • Every 3 seconds, mote (0, 0) sends a msg with
    three fields
  • connected(0, 0, 0)

14
Acquiring a Parent
  • Initially, every mote (i, j) has no parent.
  • When mote (i, j) has no parent and receives
    connected(x, y, e), (i, j) chooses (x, y) as its
    parent
  • if (x, y) is its low-neighbor, or
  • if (x, y) is its high-neighbor and e < cmax
  • When mote (i, j) receives a connected(x, y, e)
    and chooses (x, y) to be its parent, (i, j)
    computes its inversion count c as
  • if (x, y) is low-neighbor, c = e
  • if (x, y) is high-neighbor, c = e + 1

15
Keeping the Parent
  • If mote (i, j) has a parent (x, y) and
  • receives any connected(x, y, e)
  • then (i, j) updates its inversion count c as
  • if (x, y) is low-neighbor, c = e
  • if (x, y) is high-neighbor and e < cmax,
    c = e + 1
  • if (x, y) is high-neighbor and e = cmax, then
    (i, j) loses its parent

16
Losing the Parent
  • There are two scenarios that cause mote (i, j) to
    lose its parent (x, y)
  • (i, j) receives a connected(x, y, cmax) msg and
    (x, y) happens to be a high-neighbor of (i, j)
  • (i, j) does not receive any connected(x, y, e)
    msg for kT seconds

17
Replacing the Parent
  • If mote (i, j) has a parent (x, y), and receives
    a connected(u, v, f) msg where (u, v) is a
    neighbor of (i, j), and (i, j) detects that by
    adopting (u, v) as a parent and using f to
    compute its inversion count c, the value of c
    would be reduced,
  • then (i, j) adopts (u, v) as its parent and
    recomputes its inversion count

18
Allowing Long Links
  • Add the following rule to the previous rules for
    acquiring and replacing a parent
  • If any mote (i,j) ever receives a message
    connected(0,0,0), then mote (i,j) makes mote
    (0,0) its parent
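The rules of slides 14-18 can be collected into a single message handler per mote (a sketch under our own data layout; the cmax value, class shape, and method names are assumptions):

```python
CMAX = 3  # upper bound on the inversion count (cmax); value is ours

class Mote:
    def __init__(self, i, j, H):
        self.i, self.j, self.H = i, j, H
        self.parent = None
        self.c = None  # inversion count, undefined while parentless

    def is_low(self, x, y):
        return (x, y) in [(self.i - self.H, self.j), (self.i, self.j - self.H)]

    def is_high(self, x, y):
        return (x, y) in [(self.i + self.H, self.j), (self.i, self.j + self.H)]

    def on_connected(self, x, y, e):
        """Handle a connected(x, y, e) msg."""
        if (x, y) == (0, 0):                       # long-link rule (slide 18)
            self.parent, self.c = (0, 0), 0
            return
        if not (self.is_low(x, y) or self.is_high(x, y)):
            return                                 # discard (slide 9)
        cand = e if self.is_low(x, y) else e + 1   # count if (x, y) is parent
        if self.parent is None:                    # acquiring (slide 14)
            if self.is_low(x, y) or e < CMAX:
                self.parent, self.c = (x, y), cand
        elif (x, y) == self.parent:                # keeping (slide 15)
            if self.is_high(x, y) and e >= CMAX:
                self.parent, self.c = None, None   # loses its parent
            else:
                self.c = cand
        elif cand < self.c:                        # replacing (slide 17)
            self.parent, self.c = (x, y), cand
```

(The second way of losing a parent, a kT-second silence, would sit in a timer callback and is omitted here.)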

20
Application context
  • A Line in the Sand (Lites)
  • field sensor network experiment for real-time
    target detection, classification, and tracking
  • A target can be detected by tens of nodes
  • Traffic burst
  • Bursty convergecast
  • Deliver traffic bursts to a base station nearby

21
Problem statement
  • Only 33.7% of packets are delivered with the
    default TinyOS messaging stack
  • Unable to support precise event classification
  • Objectives
  • Close to 100% reliability
  • Close to optimal event goodput (real-time)
  • Experimental study for high fidelity

22
Network setup
  • Network
  • 49 MICA2s in a 7 × 7 grid
  • 5 feet separation
  • Power level 9 (for 2-hop reliable communication
    range)
  • Logical Grid Routing (LGR)
  • It uses reliable links
  • It spreads traffic uniformly

23
Traffic trace from Lites
  • Packets generated in a 7 × 7 subgrid, when a
    vehicle passes across the middle of the Lites
    network
  • Optimal event goodput
  • 6.66 packets/second

24
Retransmission based packet recovery
  • At each hop, retransmit a packet if the
    corresponding ACK is not received after a
    constant time
  • Synchronous explicit ack (SEA)
  • Explicit ACK immediately after packet reception
  • Shorter retransmission timer
  • Stop-and-wait implicit ack (SWIA)
  • Forwarded packet as an ACK
  • Longer retransmission timer

25
SEA
  • Retransmission does not help much, and may even
    decrease reliability and goodput
  • Similar observations when adjusting contention
    window of B-MAC and using S-MAC
  • Retransmission-incurred contention

Metrics             RT = 0   RT = 1   RT = 2
Reliability (%)     51.05    54.74    54.63
Delay (sec)         0.21     0.25     0.26
Goodput (pkt/sec)   4.01     4.05     3.63
26
SWIA
  • Again, retransmission does not help
  • Compared with SEA, longer delay and lower
    goodput/reliability
  • longer retransmission timer and blocking flow
    control
  • More ACK losses, and thus more unnecessary
    retransmissions

Metrics             RT = 0   RT = 1   RT = 2
Reliability (%)     43.09    31.76    46.5
Delay (sec)         0.35     8.81     18.77
Goodput (pkt/sec)   3.48     2.58     1.41
27
Protocol RBC
  • Differentiated contention control
  • Reduce channel contention caused by packet
    retransmissions
  • Window-less block ACK
  • Non-blocking flow control
  • Reduce ack loss
  • Fine-grained tuning of retransmission timers

28
Window-less block ACK
  • Non-blocking window-less queue management
  • Unlike sliding-window based block ACK, in-order
    packet delivery is not considered
  • Packets are timestamped
  • For block ACK, sender and receiver maintain the
    order in which packets have been transmitted
  • the order is identified without using a
    sliding window, thus there is no upper bound on
    the number of un-ACKed packet transmissions
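A much-simplified sender side of this idea (our own simplification, not RBC's actual queue structure): the sender logs transmission order, and a block ACK listing what arrived lets it infer losses with no window bound on packets in flight:

```python
from collections import deque

class Sender:
    def __init__(self):
        self.sent_order = deque()   # packet ids, in transmission order
        self.retransmit = deque()   # ids inferred lost

    def transmit(self, pid):
        self.sent_order.append(pid)

    def on_block_ack(self, acked):
        """acked: packet ids the receiver reports, in the order received.
        Anything sent before the last ACKed packet but not named in the
        ACK must have been lost, so it is queued for retransmission."""
        last = acked[-1]
        while self.sent_order:
            pid = self.sent_order.popleft()
            if pid == last:
                break
            if pid not in acked:
                self.retransmit.append(pid)
```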

29
Sender queue management
[Figure: sender queue organization; M = max. number
of retransmissions]
30
Differentiated contention control
  • Schedule channel access across nodes
  • Higher priority in channel access is given to
  • nodes having fresher packets
  • nodes having more queued packets

31
Implementation of contention control
  • The rank of a node j: ⟨M - k, VQk, ID(j)⟩,
    where
  • M: maximum number of retransmissions per hop
  • VQk: the highest-ranked non-empty virtual queue
    at j
  • ID(j): the ID of node j
  • A node with a larger rank value has higher
    priority
  • Neighboring nodes exchange their ranks
  • Lower ranked nodes leave the floor to higher
    ranked ones
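Since Python tuples compare lexicographically, the rank and the deferral rule can be sketched directly (the value of M and our reading of the middle component as the queue's length are assumptions):

```python
M = 3  # max number of retransmissions per hop (illustrative value)

def rank(k, vq_len, node_id):
    """Rank <M - k, VQk, ID(j)>: k is the index of the highest-ranked
    non-empty virtual queue (fresher packets = smaller k), and we take
    the middle component to be that queue's length. Tuple comparison is
    lexicographic, matching the stated priority order."""
    return (M - k, vq_len, node_id)

def should_defer(own_rank, neighbor_ranks):
    """Lower-ranked nodes leave the floor to higher-ranked neighbors."""
    return any(r > own_rank for r in neighbor_ranks)
```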

32
Fine tuning retransmission timer
  • Timeout value: a tradeoff between
  • delay in necessary retransmissions
  • probability of unnecessary retransmissions
  • In RBC
  • Dynamically estimate ACK delay
  • Conservatively choose timeout value; also
    reset timers upon packet and ACK loss
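One plausible shape for such a timer (entirely our sketch, in the spirit of the slide; RBC's actual estimator differs in detail) is an exponentially weighted estimate of ACK delay with a conservative multiplier:

```python
class AckTimer:
    """EWMA estimate of ACK delay plus a conservative timeout.
    alpha, margin, and the initial estimate are illustrative values."""
    def __init__(self, alpha=0.125, margin=2.0, initial=1.0):
        self.est = initial      # smoothed ACK-delay estimate (seconds)
        self.alpha = alpha
        self.margin = margin

    def observe(self, delay):
        # Fold a measured ACK delay into the smoothed estimate.
        self.est = (1 - self.alpha) * self.est + self.alpha * delay

    def timeout(self):
        # Conservative: wait a multiple of the estimated delay.
        return self.margin * self.est
```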

33
Event-wise
Metrics             RT = 0   RT = 1   RT = 2
Reliability (%)     56.21    83.16    95.26
Delay (sec)         0.21     1.18     1.72
Goodput (pkt/sec)   4.28     5.72     6.37
  • Retransmission helps improve reliability and
    goodput
  • close to optimal goodput (6.37 vs. 6.66)
  • Compared with SWIA, delay is significantly
    reduced
  • 1.72 vs. 18.77 seconds

34
Distribution of packet generation and reception
  • RBC
  • Packet reception smoothes out and almost matches
    packet generation
  • SEA
  • Many packets are lost despite quick packet
    reception
  • SWIA
  • Significant delay and packet loss

35
Field deployment (http://www.cse.ohio-state.edu/exscal)
  • A Line in the Sand (Lites)
  • 100 MICA2s
  • 10 × 20 m² field
  • Sensors: magnetometer, micro impulse radar (MIR)
  • ExScal
  • 1,000 XSMs, 200 Stargates
  • 288 × 1260 m² field
  • Sensors: passive infrared radar (PIR), acoustic
    sensor, magnetometer

37
Flooding
  • Forward the message upon hearing it for the
    first time
  • Leads to broadcast storm and loss of messages
  • Obvious optimizations are possible
  • The node sets a timer upon receiving the message
    for the first time
  • The timer might be based on RSSI
  • If, before the timer expires, the node hears the
    message broadcast T times, then the node decides
    not to broadcast
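The timer-based suppression can be sketched as follows (the threshold T, the RSSI bias, and all constants are our assumptions):

```python
import random

def should_rebroadcast(overheard_count, T):
    """Called when the suppression timer fires: skip the rebroadcast if
    the message was already overheard T or more times."""
    return overheard_count < T

def suppression_delay(rssi, base=0.010, rng=random):
    """RSSI-biased timer: a node hearing the sender strongly (likely
    nearby, so its rebroadcast adds little coverage) waits longer,
    letting farther nodes rebroadcast first; random jitter avoids
    synchronized timers. rssi here is a nonnegative signal measure."""
    return base * (1.0 + rssi / 100.0) + rng.uniform(0.0, base)
```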

39
Flooding, gossiping, flooding, …
  • Flood a message upon first hearing it
  • Gossip periodically (less frequently) to
    ensure that there are no missed messages
  • Upon detecting a missed message, disseminate by
    flooding again
  • Best-effort flooding (fast) followed by
    guaranteed-coverage gossiping (slow) followed by
    best-effort flooding
  • The algorithm takes care of delivery to loosely
    connected sections of the WSN

Livadas and Lynch, 2003
41
Trickle
  • See Phil Levis's talk.

43
Polygonal broadcasts
  • Imaginary polygonal tilings for supporting
    communication
  • E.g., 1-bit broadcast scheme for hexagonal tiling

Dolev, Herman, Lahiani, Brief announcement:
polygonal broadcast, secret maturity and the
firing sensors, PODC 2004
45
Fire-cracker protocol
  • Firecracker uses a combination of routing and
    broadcasts to rapidly deliver a piece of data to
    every node in a network
  • To start dissemination, the data source sends
    data to distant points in the network
  • Once the data reaches its destinations,
    broadcast-based dissemination begins along the
    paths
  • By using an initial routing phase, Firecracker
    can disseminate at a faster rate than scalable
    broadcasts while sending fewer packets
  • The selection of points to route to has a large
    effect on performance.

47
Directed Diffusion
  • Protocol initiated by the destination (through a
    query)
  • Data has attributes; the sink broadcasts interests
  • Nodes diffuse the interest towards producers via
    a sequence of local interactions
  • Nodes receiving the broadcast set up a gradient
    (leading towards the sink)
  • Intermediate nodes opportunistically fuse
    interests, aggregate, correlate, or cache data
  • Reinforcement and negative reinforcement are used
    to converge to an efficient distribution

48
Directed diffusion
Intanagonwiwat, Govindan and Estrin, Directed
diffusion: a scalable and robust communication
paradigm for sensor networks, 6th Conf. on
Mobile Computing and Networking (MobiCom), 2000
49-61
Directed Diffusion in operation
[Figure sequence: the sink floods an interest
(directional flooding); receiving nodes set up
gradients toward the sink; the sink reinforces one
path; data then flows from the source to the sink
along the reinforced path; when a link fails, an
alternate gradient is reinforced, giving robustness]
62
Design considerations
63
Data Naming
  • Expressing an Interest
  • Using attribute-value pairs
  • E.g.,

Type = Wheeled vehicle    // detect vehicle location
Interval = 20 ms          // send events every 20 ms
Duration = 10 s           // send for the next 10 s
Field = [x1, y1, x2, y2]  // from sensors in this area
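In code, such an interest could be carried as attribute-value pairs and matched against a node's own state (all names, the rectangle values, and the matching rule are illustrative assumptions):

```python
interest = {
    "type": "wheeled vehicle",    # detect vehicle location
    "interval_ms": 20,            # send events every 20 ms
    "duration_s": 10,             # send for the next 10 s
    "field": (0, 0, 200, 400),    # (x1, y1, x2, y2) sensing rectangle
}

def matches(interest, node_pos, detectable_types):
    """Does this node sit inside the interest's field and can it detect
    the requested event type?"""
    x1, y1, x2, y2 = interest["field"]
    in_field = x1 <= node_pos[0] <= x2 and y1 <= node_pos[1] <= y2
    return in_field and interest["type"] in detectable_types
```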
65
Rumor routing
  • Deliver packets to events
  • query/configure/command
  • No global coordinate system
  • Algorithm
  • Event sends out agents which leave trails for
    routing info
  • Agents do random walk
  • If an agent crosses a path to another event, a
    path is established
  • Agents also optimize paths if they find shorter
    ones

Braginsky and Estrin, WSNA 2002
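The agent walk can be sketched on a graph model (entirely our model; the TTL, the adjacency-dict input, and the trail layout are assumptions):

```python
import random

def send_agent(adj, event_node, ttl, rng=random):
    """Random-walk an agent from event_node for ttl hops over the graph
    adj (node -> list of neighbors). Each node visited for the first
    time records a next hop back toward the event, leaving a routing
    trail that a later query can follow."""
    trail = {event_node: event_node}
    cur = event_node
    for _ in range(ttl):
        nxt = rng.choice(adj[cur])
        if nxt not in trail:
            trail[nxt] = cur    # routing info: step back toward the event
        cur = nxt
    return trail
```

A query agent doing its own random walk establishes a full path as soon as it lands on any stamped node.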