1
MediaNet: User-defined Adaptive Scheduling for Streaming Data
  • Michael Hicks (University of Maryland)
  • Adithya Nagarajan (University of Maryland)
  • Robbert van Renesse (Cornell University)

2
Motivation
  • Multi-user streaming data applications
  • Sources
  • Sensor reports
  • Movies
  • Live video or audio
  • Weather reports
  • Stock quotes
  • Support Quality of Service (QoS)
  • Application-specific metrics
  • Fair and efficient sharing of resources
  • Situations
  • Military
  • Disaster
  • Home network

3
MediaNet
  • Adaptive
  • Schedules flows using available resources
  • Adapts to loads not under its control
  • User-directed
  • Adaptations are directed by users, based on
    relative preferences and user priority
  • Comprehensive
  • Global view of the network, accounting for
    overall network utilization and per-user utility

4
MediaNet Architecture
[Architecture diagram: video sources publish a video description, location, and resource info; video players subscribe and send feedback; users supply their desired stream adaptation preferences to the global scheduling service, which produces schedules for the network.]
5
Streaming Computations
  • Continuous Media Network (CMN)
  • Directed acyclic graph of operations
  • frame droppers, transcoders, compressors and
    decompressors, filters, aggregators, etc.
  • User specification
  • One or more CMNs, each with associated utility
    value
  • Goal: maximize users' utility while utilizing the
    network efficiently

6
Operations
[Diagram: an operation (Op) with input frames frm1 ... frmn, annotated with an interval and a frame size.]
  • Other attributes
  • Fixed location?
  • Transitions only?

7
Example User Specification
[Diagram: three alternative CMNs for one user, each running from the source host pcS to the destination host pcD. Utility 1.0: the full stream (Vid, Prio, User). Utility 0.3: the same chain with a Drop B operation that discards B frames. Utility 0.1: a Drop PB operation that discards both P and B frames. A data-structure sketch follows below.]
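As a concrete (if simplified) illustration of the specification above, here is a small Python sketch; the class and field names are invented for this example and are not MediaNet's actual specification language.

  # Sketch of a user specification: each alternative pairs a utility value
  # with a CMN, here a simple chain of operations. Names are illustrative.

  class Operation:
      def __init__(self, name, fixed_node=None, transitions_only=False):
          self.name = name                    # e.g. "Vid", "DropB", "Prio", "User"
          self.fixed_node = fixed_node        # node the op is pinned to, if any
          self.transitions_only = transitions_only
          self.successors = []                # outgoing edges of the DAG

  def chain(*steps):
      """Build a linear CMN from (name, fixed_node) pairs; return the source op."""
      ops = [Operation(name, fixed_node=node) for name, node in steps]
      for a, b in zip(ops, ops[1:]):
          a.successors.append(b)
      return ops[0]

  # The example above: three alternatives from the source host pcS to the
  # destination host pcD, at utilities 1.0, 0.3, and 0.1.
  user_spec = [
      (1.0, chain(("Vid", "pcS"), ("Prio", None), ("User", "pcD"))),
      (0.3, chain(("Vid", "pcS"), ("DropB", None), ("Prio", None), ("User", "pcD"))),
      (0.1, chain(("Vid", "pcS"), ("DropPB", None), ("Prio", None), ("User", "pcD"))),
  ]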
8
Global Scheduling Service
  • Schedules each user specification on the actual
    network
  • Locates each operation on a node
  • Inserts send and receive operations between
    nodes; these can have varying transport
    attributes (sketched below)
  • Scheduling choices based on current network
    resources (i.e. operates on-line)
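To illustrate the send/receive insertion step, here is a rough Python sketch; the function and data layout are assumptions for this example, not MediaNet's API, and transport attributes are reduced to a single field.

  # Sketch: once each operation has been assigned a node, splice explicit
  # send/receive operations into every CMN edge whose endpoints sit on
  # different nodes. Names are illustrative only.

  def insert_transport_ops(edges, placement, transport="tcp"):
      """edges: list of (src_op, dst_op); placement: op -> node name.
      Returns a new edge list with (send, recv) pairs spliced in."""
      new_edges = []
      for src, dst in edges:
          if placement[src] == placement[dst]:
              new_edges.append((src, dst))          # same node: direct edge
          else:
              send = ("send", placement[src], transport)
              recv = ("recv", placement[dst], transport)
              new_edges += [(src, send), (send, recv), (recv, dst)]
      return new_edges

  # Example: a Vid -> Prio -> User chain split across two hosts.
  placement = {"Vid": "pc1", "Prio": "pc1", "User": "pc4"}
  print(insert_transport_ops([("Vid", "Prio"), ("Prio", "User")], placement))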

9
Global Scheduling Algorithm
  • Simulated annealing technique (see the sketch below)
  • Maximize minimum utility for all users, plus
  • Optimize individual user utilities above minimum
  • Use best-cost configuration at these utilities
  • Cost function for network configurations
  • Relates the resource cost of a configuration to the
    total resources available (CPU, bandwidth)
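A generic simulated-annealing skeleton of the kind the slide names might look like the following; the cost function, neighbor move, and cooling schedule are placeholders rather than MediaNet's actual choices.

  import math
  import random

  # Generic simulated-annealing skeleton for picking a low-cost network
  # configuration at a given utility level. cost(cfg) is assumed to relate
  # the configuration's CPU/bandwidth demands to the resources available;
  # neighbor(cfg) perturbs a configuration (e.g. moves one operation to a
  # different node). Both are placeholders here.

  def anneal(initial_cfg, cost, neighbor, temp=1.0, cooling=0.95, steps=1000):
      current, best = initial_cfg, initial_cfg
      for _ in range(steps):
          candidate = neighbor(current)
          delta = cost(candidate) - cost(current)
          # Accept improvements always; accept regressions with a probability
          # that shrinks as the temperature drops.
          if delta < 0 or random.random() < math.exp(-delta / temp):
              current = candidate
          if cost(current) < cost(best):
              best = current
          temp *= cooling
      return best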

10
Creating Configurations
  • Gather all user specs at utility u, combine them,
    and partition into distinct trees
  • For each tree
  • Calculate network shortest paths (actually, the
    most bandwidth-plentiful paths; see the sketch below)
  • Find best placement of operations to nodes
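The "most bandwidth-plentiful path" step could be implemented as a widest-path variant of Dijkstra's algorithm, as in the Python sketch below; this particular algorithm and the toy topology are assumptions, since the slides only name the goal.

  import heapq

  def widest_path(graph, src, dst):
      """graph: {node: {neighbor: available_bandwidth}}.
      Returns (bottleneck, path) maximizing the minimum edge bandwidth
      along the path, i.e. the most bandwidth-plentiful route."""
      best = {src: float("inf")}
      prev = {}
      heap = [(-float("inf"), src)]
      while heap:
          neg_width, node = heapq.heappop(heap)
          width = -neg_width
          if node == dst:
              path = [dst]
              while path[-1] != src:
                  path.append(prev[path[-1]])
              return width, path[::-1]
          for nbr, bw in graph.get(node, {}).items():
              w = min(width, bw)
              if w > best.get(nbr, 0):
                  best[nbr] = w
                  prev[nbr] = node
                  heapq.heappush(heap, (-w, nbr))
      return 0, []

  # Toy topology in the spirit of the example slides (labels are bandwidths).
  net = {"pc1": {"pc3": 300}, "pc3": {"pc4": 300, "pc6": 150}, "pc6": {"pc8": 300}}
  print(widest_path(net, "pc1", "pc8"))   # (150, ['pc1', 'pc3', 'pc6', 'pc8'])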

11
Example: Creating a Configuration
[Diagram: five user specifications, all at utility 1.0. Each of User1-User5 requests Vid1 or Vid2 through a Prio operation; the video sources sit on pc1 and pc2, and the users sit on pc4, pc5, pc7, and pc8.]
5 user specifications, utility 1.0
12
Example: Creating a Configuration (continued)
[Diagram: the five specifications are combined and partitioned into distinct trees, one rooted at each video source, spanning the users on pc4, pc5, pc7, and pc8.]
Combining the specs
13
Example: Scheduling
[Diagram: the physical topology pc1-pc8, with each link labeled by its available bandwidth (300 or 150).]
1. Initial network conditions
14
Example Scheduling
pc1
pc5
pc7
v1
u1
u3
300
300
300
150
pc3
pc4
p
p
300
150
300
pc2
pc6
pc8
300
u5
2. Scheduling the 1st tree
15
Example: Scheduling (continued)
[Diagram: edge weights are adjusted by subtracting the bandwidth consumed by the first tree from the links it uses (300 becomes 150, 150 becomes 0).]
3. Adjusting edge weights
16
Example: Scheduling (continued)
[Diagram: the second tree (video v2, serving users u2 and u4) is scheduled over the remaining bandwidth; the loop over trees is sketched below.]
4. Scheduling the 2nd tree
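Reading the four steps as a loop, here is a rough sketch of scheduling trees one after another over residual bandwidth; the routing function and bandwidth bookkeeping are simplified assumptions.

  # Sketch of the walkthrough above: schedule one tree at a time, then
  # reduce the remaining bandwidth on the links it uses before the next
  # tree is routed. Routing itself is abstracted as route_tree().

  def schedule_trees(trees, residual_bw, route_tree):
      """trees: list of (tree, demand); residual_bw: {(a, b): bandwidth}.
      route_tree(tree, residual_bw) returns the list of links the tree uses."""
      schedules = []
      for tree, demand in trees:
          links = route_tree(tree, residual_bw)          # e.g. widest paths
          if any(residual_bw.get(l, 0) < demand for l in links):
              schedules.append((tree, None))             # doesn't fit at this utility
              continue
          for l in links:
              residual_bw[l] -= demand                   # step 3: adjust edge weights
          schedules.append((tree, links))
      return schedules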
17
Prototype Implementation
  • Global scheduling service
  • implemented on a single node
  • eventually hierarchical
  • Local, per-node schedulers
  • monitor and report available bandwidth
    (eventually CPU and memory usage)
  • implement local CMNs
  • Global scheduler reconfigures schedules on-line

18
Local Scheduling
  • Implement the CMN given by the GS
  • Must correctly reconfigure on-line
  • Report monitoring info back to GS
  • Implemented in Cyclone
  • Type-safe, C-like language
  • One component per operation, dynamically
    reconfigurable (an example operation is sketched below)
  • Currently uses TCP for send/receive
  • Other transports possible/useful
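For a feel of what a per-node operation component does, here is a Python sketch of a frame-dropping operation in the spirit of the Drop B op from the example specification; the real components are Cyclone code managed by the local scheduler, so everything here is illustrative.

  # Sketch of a frame-dropping operation: forward every frame except those
  # whose type is in the drop set. Only mirrors the idea of a CMN component.

  def make_frame_dropper(drop_types):
      def op(frame):
          """frame: dict with at least a 'type' key ('I', 'P', or 'B').
          Returns the frame to forward downstream, or None to drop it."""
          return None if frame["type"] in drop_types else frame
      return op

  drop_b = make_frame_dropper({"B"})
  stream = [{"type": t, "seq": i} for i, t in enumerate("IBBPBBPBB")]
  forwarded = [f for f in stream if drop_b(f) is not None]
  print([f["type"] for f in forwarded])   # -> ['I', 'P', 'P']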

19
Monitoring
  • Monitor available bandwidth
  • Keep track of TCP throughput, attempted vs.
    actual bandwidth
  • Report to global scheduler when
  • attempted > actual bandwidth (i.e. at peak)
  • after a regular timeout
  • If the estimate is too pessimistic
  • creep the bandwidth estimate up additively to
    optimistically attempt higher-utility configs
    (see the sketch below)
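A sketch of the monitoring policy above, assuming the local scheduler periodically samples attempted versus delivered TCP throughput; the class, the creep increment, and the report period are invented for illustration.

  # Sketch of the per-link bandwidth monitor: report when the attempted
  # send rate exceeds what TCP actually delivered (i.e. the link is at its
  # peak) or when a periodic timeout fires; otherwise creep the estimate
  # upward additively so higher-utility configurations get retried.

  CREEP = 10               # illustrative additive increment (KB/s)
  REPORT_PERIOD = 5.0      # seconds between unconditional reports

  class BandwidthMonitor:
      def __init__(self, initial_estimate):
          self.estimate = initial_estimate
          self.since_report = 0.0

      def sample(self, attempted, actual, dt):
          self.since_report += dt
          if attempted > actual:
              # At the link's peak: the delivered rate is the real capacity.
              self.estimate = actual
              return self.report()
          # Possibly too pessimistic: optimistically raise the estimate.
          self.estimate = max(self.estimate, actual) + CREEP
          if self.since_report >= REPORT_PERIOD:
              return self.report()
          return None

      def report(self):
          self.since_report = 0.0
          return {"available_bandwidth": self.estimate}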

20
On-line Reconfiguration
  • New configuration is applied in parallel with the
    current configuration
  • Old operations are flushed along the dataflow
    path
  • New operations are enabled when all old ones are
    flushed on a particular node (sketched below)
  • Challenges
  • Rapid reconfiguration
  • Low disruption to stream
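The flush-then-enable handoff could be sketched like this; the flush-marker mechanism and class names are assumptions used to make the idea concrete, since the slides only describe the behavior.

  # Sketch of on-line reconfiguration at one node: the new operation chain is
  # installed alongside the old one, old operations keep handling frames until
  # a flush marker arrives on every old input, and only then is the new chain
  # enabled. The marker is illustrative, not MediaNet's actual protocol.

  FLUSH = object()   # sentinel sent down each old dataflow path

  class ReconfigurableNode:
      def __init__(self, old_chain, new_chain, num_old_inputs):
          self.old_chain, self.new_chain = old_chain, new_chain
          self.pending_flushes = num_old_inputs
          self.switched = False

      def handle(self, frame):
          if self.switched:
              return self.new_chain(frame)
          if frame is FLUSH:
              self.pending_flushes -= 1
              if self.pending_flushes == 0:
                  self.switched = True    # all old inputs drained: enable new ops
              return None
          return self.old_chain(frame)    # old configuration still active

  node = ReconfigurableNode(old_chain=lambda f: ("old", f),
                            new_chain=lambda f: ("new", f),
                            num_old_inputs=1)
  print(node.handle("frame1"))   # ('old', 'frame1')
  print(node.handle(FLUSH))      # None: switch point
  print(node.handle("frame2"))   # ('new', 'frame2')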

21
Experiments
  • Conducted on eight 850 MHz Pentium IIIs with 512 MB
    RAM and 100 Mb/s Ethernet, running RedHat Linux 7.1,
    arranged in a bowtie topology

MPEG clip bandwidth requirements
Frame rate   IPB        IP        I
30 fps       144 KB/s   88 KB/s   28 KB/s
22
Single-flow Performance Comparison
[Graphs comparing four approaches: no adaptation, priority-based frame dropping, local proactive frame dropping, and MediaNet.]
23
Multi-User Performance
[Graph: multi-user performance results.]
24
Multi-User Performance
[Graph: multi-user performance results (continued).]
25
Multi-User Performance
[Graph: multi-user performance with a B frame dropping operation.]
26
Multi-User Performance
[Graph: multi-user performance with a B frame dropping operation (continued).]
27
Related Work
  • Layered Multicast
  • In-network stream processors
  • MEGA, Active Nets
  • Flow planning systems
  • Ninja, Darwin, CANS, End-to-End Paths, Conductor,
    PATHS, Choi et al.
  • Reservation-based QoS
  • OMEGA

28
Future Work
  • Hierarchy of global schedulers
  • Better scalability
  • Scaling user utilities
  • Based on user priority and resource usage
  • Enforces fairness
  • Better on-line monitoring
  • More experiments
  • Real wireless
  • Simulation for larger scenarios

29
Conclusions
  • Application-specific QoS via user specs
  • High network utilization and per-user utility via
    global scheduling
  • Share resources between flows in a multicast-like
    manner, but generalized to CMNs
  • Utilize multiple, redundant paths
  • Intelligently place operations to reduce network
    utilization
  • Adapt to resource availability on-line

30
For More Information
  • Papers and software at
  • http://www.cs.umd.edu/projects/medianet