1
Flexible Network Striping
  • Asfandyar Qureshi

RQE 21st August 2006
2
Medical Emergencies
  • Emergency
  • 911 call
  • Ambulance arrives
  • EMTs' initial examination
  • Transport to hospital
  • Doctors begin treatment

Due to the EMTs' lack of medical training,
doctors do not believe they can depend on EMTs
for much beyond speedy transport.
3
Mobile Telemedicine
  • Remote diagnosis and treatment
  • Provide advanced remote diagnostics capabilities
  • Allow doctors to examine patients in-transit
  • What we want to send
  • Unidirectional VIDEO (600 kbit/sec)
  • Bidirectional AUDIO (30 kbit/sec)
  • Low-rate physiological data (EKG, heart rate,
    etc.)

4
System Deployment
  • Plan to deploy the system in order to conduct
    multiple medical studies
  • Initial deployment: Spring '07
  • Project partners
  • Boston: Massachusetts General Hospital
  • Orlando: Orange County, Florida's Emergency
    Medical Services
  • Economic constraints
  • Limited/progressive deployment must be easy
  • Initial medical study: two ambulances
  • Cannot amortize the cost over many ambulances
  • Use existing wireless infrastructure

5
Talk Structure
  • Motivation
  • Wireless Wide Area Networks (WWANs)
  • Performance overview
  • Horde network striping middleware
  • Network striping
  • Horde API
  • Horde internals
  • Channel managers and transmission slots
  • Packet scheduling

6
WWAN Performance Overview
  • CDMA EV-DO interfaces
  • US providers: Verizon and Sprint
  • Low throughput
  • Download < 400 kbit/sec
  • Upload < 150 kbit/sec
  • High and variable packet RTTs
  • 500 ms ± 200 ms
  • Realistic WWAN performance data is hard to come
    by
  • Lots of hype/disinformation

7
WWAN Experiments
  • We spent some time running WWAN experiments in
    Orlando and Boston
  • Drove around in a car
  • Simultaneously used multiple interfaces
  • Measured UDP throughput and RTTs
  • Used GPS to derive coverage maps
  • Also logged 802.11 APs
  • Not a rigorous analysis
  • Nonetheless, provided a great deal of insight
    into WWAN behaviour

8
Throughput
  • AGGREGATE: mean 306 kbps, stdev 64 kbps
  • SPRINT: mean 111 kbps, stdev 36 kbps
  • VERIZON-1: mean 109 kbps, stdev 21 kbps
  • VERIZON-2: mean 91 kbps, stdev 32 kbps
9
High and variable RTTs
(Figure: RTTs of stationary pings, Boston)
10
Network Striping
  • Stripe Application Data across Multiple Network
    Channels
  • Simultaneously use many networks
  • Wireless networks can be dissimilar and unstable

(Diagram: hosts A and B, with M data streams on each side striped across N network channels.)
11
Striping Related Work
  • Wired networks
  • e.g., n-ISDN lines as a virtual T1 line
  • SIGCOMM '96: A Reliable and Scalable Striping
    Protocol, Adiseshu, Parulkar, and Varghese.
  • Mostly concerned with getting TCP to run well.
  • Wireless networks
  • Globecom '99: Adaptive Inverse Multiplexing for
    Wide-Area Wireless Networks, Snoeren.
  • MobiSys 2004: MAR, A Commuter Router
    Infrastructure for the Mobile Internet,
    Rodriguez, Chakravorty, Chesterfield, Pratt, and
    Banerjee.
  • Multi-path video streaming
  • 2001: Unbalanced Multiple Description Video
    Communication Using Path Diversity,
    Apostolopoulos and Wee.

12
Challenges
  • Network
  • Network channels can be quite dissimilar
  • Channel QoS varies significantly in time
  • Number of channels varies
  • Application
  • Bandwidth limited
  • Different types of streams with dissimilar
    network QoS sensitivities
  • Want applications to be independent of the number
    and types of channels

13
Our approach: Horde
  • Network striping middleware
  • Exposes striping operation to applications
  • Apps can abstractly modulate striping policy
  • Flexible striping mechanism
  • Per-channel congestion control
  • Does not support unmodified legacy applications
  • Horde is targeted at developers of new mobile
    applications
  • Legacy support is relatively easy to provide
  • Virtual TUN interfaces
  • TESLA (Salz et al., USENIX '03)

14
Forward Compatibility
  • Applications do not have a short shelf-life
  • Depend on Horde's abstract API
  • The modular design of the middleware allows our
    applications to be forward compatible
  • Networks are likely to evolve.
  • Sprint moving to WiMax
  • Verizon moving to EV-DO rev A
  • Our middleware is designed to simultaneously
    handle many different types of networks and its
    functionality can be extended easily.

15
Horde API: Sessions
  • Each host runs a local Horde daemon
  • Exposes an RPC-like interface to applications
  • Before any data is sent, peering applications
    must negotiate a named session.
  • Multiple sessions (unique names) are allowed
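Below is a minimal Python sketch of this session-negotiation step. The HordeDaemon class and its open_session method are hypothetical stand-ins for the daemon's RPC-like interface, not Horde's actual API.

    # Hypothetical stand-in for the local Horde daemon's RPC-like interface.
    class HordeDaemon:
        def __init__(self):
            self.sessions = {}

        def open_session(self, name, peer):
            # Sessions are keyed by a unique name; negotiation with the
            # peer host's daemon is elided in this stub.
            if name in self.sessions:
                raise ValueError("session %r already exists" % name)
            self.sessions[name] = {"peer": peer, "streams": {}}
            return self.sessions[name]

    daemon = HordeDaemon()
    session = daemon.open_session("telemedicine", peer="hospital-host")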

(Diagram: Tavarua application instances ApplicationA and ApplicationB, on Host A and Host B, each talk to their local Horde daemon (HordeA, HordeB); the daemons are connected by N network channels.)
16
Horde API: Streams and ADUs
  • Within a session, one or more streams can be
    negotiated.
  • Streams are (relaxed) bi-directional FIFOs of
    Application Data Units (ADUs).
  • Maximum reordering ≤ (maximum number of channels)
  • ADUs can be fragmented, partially lost, etc.
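An illustrative sketch of a stream as a relaxed FIFO of ADUs, with reordering bounded by the number of channels. The Stream and ADU types here are assumptions, not Horde's real classes.

    from collections import deque
    from dataclasses import dataclass, field

    @dataclass
    class ADU:
        seq: int
        payload: bytes          # may be fragmented across several packets

    @dataclass
    class Stream:
        name: str
        max_reorder: int        # bounded by the number of active channels
        adus: deque = field(default_factory=deque)

        def send(self, payload: bytes):
            self.adus.append(ADU(seq=len(self.adus), payload=payload))

    video = Stream(name="video", max_reorder=3)   # e.g. three WWAN channels
    video.send(b"frame-0")
    video.send(b"frame-1")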

(Diagram: as above, with 'video', 'audio', and 'data' streams carried between ApplicationA and ApplicationB through the Horde daemons.)
17
API: Data Send Timeline
  • Host A: send data
  • App requests that Horde send an ADU
  • Data is scheduled (ADU fragments → packets)
  • One or more ADU fragments per packet
  • Host B: receive data
  • Data is unpacked (packets → ADU fragments)
  • App notified about received fragments
  • ACKs sent for received packets
  • Host A: receive ACKs
  • ACKs mapped to ADU fragment info
  • Losses detected from ACK stream
  • App notified about ACK'd or lost ADU fragments
    (callbacks sketched below)
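The notification points in this timeline suggest a small callback interface on the sending and receiving applications. A hedged Python sketch follows; the callback names and signatures are illustrative, not Horde's actual API.

    # Host B: called for each ADU fragment unpacked from a received packet.
    def on_fragment_received(stream, adu_seq, offset, data):
        ...

    # Host A: called when an ACK is mapped back to an ADU fragment.
    def on_fragment_acked(stream, adu_seq, offset, length):
        ...

    # Host A: called when the ACK stream implies a fragment was lost.
    def on_fragment_lost(stream, adu_seq, offset, length):
        ...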

18
Horde API: QoS Objectives
  • ADUs can be tagged with Quality-of-service
    objectives
  • Tags are hints to the packet scheduler
  • An abstract way to modulate the packet scheduling
    policy
  • Example tags
  • ADU priority
  • Correlation group
  • Minimize the joint loss probability P(X lost ∧ Y
    lost), if the two ADUs X and Y are in the same
    correlation group.
  • Elements in the same correlation group are less
    likely to experience correlated failures.
  • E.g., the President and Vice President would be in
    the same group (see the sketch below).
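A small illustrative example of such tags. Only the tag meanings (priority, correlation group) come from the slide; the dictionary layout is an assumption. ADUs that must not be lost together share a correlation group, so the scheduler can spread them over different channels to reduce P(X lost ∧ Y lost).

    # Same correlation group: the scheduler tries to send these over
    # different channels so they are unlikely to be lost together.
    president      = {"data": b"...", "tags": {"priority": 9, "corr_group": "leadership"}}
    vice_president = {"data": b"...", "tags": {"priority": 9, "corr_group": "leadership"}}

    # An unrelated low-priority ADU with no correlation constraint.
    telemetry = {"data": b"...", "tags": {"priority": 2, "corr_group": None}}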

19
Horde Internals
(Architecture diagram, top to bottom: applications talk over IPC to the Horde daemon's API, exchanging ADUs within a session; the packet scheduler (iMux) multiplexes ADU fragments into packets under control signals; the network channel management layer passes packets to the O/S network services.)
20
Channel Managers
(Diagram: the packet scheduler sits on top of the network management layer, which holds one channel manager M1, M2, ..., MN per interface, above the O/S network services.)
  • A single channel manager instance for each active
    network interface
  • Pool manager creates and destroys the Mi
  • The Mi implement congestion control
  • Multiple implementations of the channel manager
    interface
  • Pool manager chooses one based on per-interface
    settings

21
Channel Managers
  • Primary services provided by MX
  • Throttle packet sends on interface X.
  • Generate ACK and LOSS notifications for packets
    sent on X.
  • Each MX runs a congestion control session
  • Congestion control logic belongs below the
    striping layer
  • Multiple independent congestion domains need
    multiple independent sessions.
  • Channel-specific optimizations possible
  • Pick congestion control algorithm based on the
    channel type (e.g., 802.11, EV-DO, WiMax)
  • Implementations: AIMD, CBR, AIMD_EVDO, DCCP, ...
    (interface sketched below)
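A sketch of what such a channel-manager interface might look like in Python. The class and method names are assumptions, and the congestion-control object is an abstract placeholder for an algorithm such as AIMD or CBR.

    class ChannelManager:
        # One manager per active network interface (MX in the slides).
        def __init__(self, interface, cc):
            self.interface = interface   # e.g. an EV-DO or 802.11 device
            self.cc = cc                 # e.g. AIMD, CBR, AIMD_EVDO, DCCP

        def can_send(self):
            # Throttle: only allow a send when congestion control permits.
            return self.cc.window_open()

        def on_ack(self, packet_id):
            self.cc.on_ack(packet_id)    # grow window / adjust rate

        def on_loss(self, packet_id):
            self.cc.on_loss(packet_id)   # back off on this channel only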

22
Transmission Slots
  • TxSlot objects are the interface between the
    scheduler and the network layer
  • For the scheduler, each channel manager is a
    source of TxSlots.
  • Each TxSlot provides a packet TX capability.
  • When a slot becomes available, the scheduler maps
    data to that slot.
  • A TxSlot provides expected QoS for that TX
  • E.g., expected RTT and expected loss probability.
  • The scheduler can use the expected QoS fields to
    determine the best data for that slot.
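A sketch of a TxSlot record under these assumptions; the field names and values are illustrative.

    from dataclasses import dataclass

    @dataclass
    class TxSlot:
        channel: str             # which channel manager generated the slot
        capacity: int            # bytes available for ADU fragments
        expected_rtt_ms: float   # expected round-trip time for this TX
        expected_loss: float     # expected loss probability for this TX

    slot = TxSlot(channel="verizon-1", capacity=1400,
                  expected_rtt_ms=480.0, expected_loss=0.02)

The scheduler can compare these expected-QoS fields against an ADU's objectives (latency threshold, loss threshold) when deciding what to map into the slot.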

23
TxSlot Life Cycle
(Diagram: a channel manager Mk calls generate_tx_slot() to allocate a TxSlot Ti; the packet scheduler's schedule_tx_slot() maps ADU data onto the slot, packing one or more ADU fragments together; consume_tx_slot() deallocates the slot and the packed packet is transmitted.)
24
Horde Scheduler
schedule_tx_slot(slot):
    streams ← set of streams with unsent data
    while (slot not full) and (streams not empty):
        stream ← highest priority stream in streams
        adu    ← ADU at the head of stream
        f      ← largest fragment of adu that will fit in slot
        if test_constraints(slot, f) is okay:
            read f from adu into slot
        if no more unsent data in adu:
            pop adu from stream
        if stream is empty:
            remove stream from streams
    if slot has data:
        consume_tx_slot(slot)
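A runnable Python rendering of the loop above, under simplifying assumptions: streams are plain (priority, queue-of-ADU-bytes) records, a slot is just a byte budget, and the test_constraints check is reduced to "does the fragment fit".

    from collections import deque

    def schedule_tx_slot(slot_capacity, streams):
        # streams: list of dicts {'priority': int, 'adus': deque of bytearray}
        payload = bytearray()
        active = [s for s in streams if s['adus']]
        while len(payload) < slot_capacity and active:
            stream = max(active, key=lambda s: s['priority'])
            adu = stream['adus'][0]
            room = slot_capacity - len(payload)
            frag, rest = adu[:room], adu[room:]    # largest fragment that fits
            payload += frag                        # richer constraints elided
            stream['adus'][0] = rest
            if not stream['adus'][0]:              # ADU fully sent
                stream['adus'].popleft()
            if not stream['adus']:                 # stream drained
                active.remove(stream)
        return bytes(payload)                      # consume_tx_slot stand-in

    video = {'priority': 2, 'adus': deque([bytearray(b'V' * 900)])}
    audio = {'priority': 1, 'adus': deque([bytearray(b'A' * 200)])}
    print(len(schedule_tx_slot(1400, [video, audio])))   # -> 1100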
25
Currently Supported Objectives
  • Optimization
  • Static ADU priority
  • Constraints
  • Stream FIFO ordering
  • ADU loss probability threshold
  • ADU latency threshold
  • Stream latency variance threshold
  • ADU correlation groups
  • Other
  • Resilient ADU sends

26
Conclusion
  • Mobile telemedicine deployment
  • Horde is a big part of a system that is being
    deployed soon.
  • Will allow doctors to try things they wouldn't
    have been able to try for another 5-10 years.
  • Support the development of demanding mobile
    applications
  • Transparently exploit all available wireless
    resources.
  • Powerful abstractions (internal and API)
  • Objectives, transmission slots, etc.
  • Allowed us to modularly experiment with different
    congestion control schemes and scheduling
    strategies.