A THEORETICAL SCHEDULING TOOLBOX - Adam Wierman
Transcript and Presenter's Notes (57 slides)
1
A THEORETICAL SCHEDULING TOOLBOX
Adam Wierman
2
SCHEDULING IS EVERYWHERE
disks, routers, databases, web servers
3
SCHEDULING HAS DRAMATIC IMPACT
[Figure: mean response time vs. load (0 to 1) in an M/GI/1 queue; SRPT (optimal) keeps mean response time low even at high load. AND IT'S FREE!]
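The size of this effect can be reproduced with a minimal event-driven simulation. The sketch below is not from the talk: the function name is hypothetical, and it assumes Poisson arrivals and exponential job sizes with mean 1 (so load equals the arrival rate).

```python
import heapq
import random

def mean_response_time(policy, lam=0.8, n_jobs=30000, seed=7):
    """Event-driven M/M/1 simulation (mean job size 1, arrival rate lam)
    under FCFS or preemptive SRPT. Returns mean response time."""
    rng = random.Random(seed)

    def key_of(arrival, remaining):
        # Priority key: SRPT serves the job with least remaining work
        # (preemptively); FCFS serves jobs in arrival order.
        return remaining if policy == "SRPT" else arrival

    clock = 0.0
    next_arrival = rng.expovariate(lam)
    arrivals_left = n_jobs
    queue = []                       # heap of (key, arrival_time, remaining)
    total_resp, done = 0.0, 0

    def admit(t):
        nonlocal arrivals_left, next_arrival
        size = rng.expovariate(1.0)
        heapq.heappush(queue, (key_of(t, size), t, size))
        arrivals_left -= 1
        next_arrival = t + rng.expovariate(lam) if arrivals_left else float("inf")

    while done < n_jobs:
        if not queue:                # server idle: jump to the next arrival
            clock = next_arrival
            admit(clock)
            continue
        key, arr, rem = heapq.heappop(queue)
        if next_arrival < clock + rem:
            # An arrival interrupts service: account for work done,
            # requeue this job, and admit the newcomer.
            rem -= next_arrival - clock
            clock = next_arrival
            heapq.heappush(queue, (key_of(arr, rem), arr, rem))
            admit(clock)
        else:                        # job finishes uninterrupted
            clock += rem
            total_resp += clock - arr
            done += 1
    return total_resp / n_jobs
```

At load 0.8, M/M/1 theory puts FCFS mean response time at 1/(1-ρ) = 5, while SRPT comes in well below it, which is the gap the slide illustrates.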
4
MANY APPLICATIONS → MANY METRICS
Scheduling bandwidth at web servers
  • small response times
  • fairness to flows
  • predictable service

6
MANY METRICS → MANY POLICIES
7
WHICH POLICY?
  • Metrics of interest
  • Metric 1
  • Metric 2
  • Metric 3

8
PRACTITIONERS NEED
simple heuristics and mechanisms to apply in building application-specific policies
good performance for a wide range of metrics
9
A NEW APPROACH
  • Group policies based on

10
A NEW APPROACH
  • Group policies based on

[Diagram: the space of scheduling policies (SRPT, RS, LRPT, DPS, FSP, PSJF, PS, SJF, LCFS, PLJF, FCFS, LJF, PLCFS, LAS) grouped as remaining-size based, age based, preemptive size based, non-preemptive, and time sharing, and by bias towards small vs. large jobs.]
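The grouping can be made concrete: most of these policies are just different priority rules over a job's state. The table below is an illustrative sketch, not from the talk; time-sharing policies (PS, DPS, FSP) split the server among jobs and do not reduce to a single-job rule, so they are omitted.

```python
# Each policy maps a job's state (arrival time, original size,
# remaining size, age = attained service) to a priority key.
# The job with the smallest key is served.
POLICIES = {
    "FCFS": lambda arr, size, rem, age: arr,    # non-preemptive, arrival order
    "LCFS": lambda arr, size, rem, age: -arr,   # last come, first served
    "SJF":  lambda arr, size, rem, age: size,   # size based
    "PSJF": lambda arr, size, rem, age: size,   # preemptive size based
    "SRPT": lambda arr, size, rem, age: rem,    # remaining-size based
    "LRPT": lambda arr, size, rem, age: -rem,   # bias towards large jobs
    "LAS":  lambda arr, size, rem, age: age,    # age based (least attained service)
}

def next_job(jobs, policy):
    """Return the job the policy serves next, from a list of
    (arrival, size, remaining, age) tuples."""
    return min(jobs, key=lambda j: POLICIES[policy](*j))
```

For example, given a long job that arrived early and a short newcomer, SRPT and LAS pick the newcomer while FCFS sticks with the early arrival.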
13
A NEW APPROACH
  • Group policies based on
  • Define new metrics

EFFICIENCY METRICS: measure overall system performance
FAIRNESS METRICS: compare the relative performance of different types of jobs (largely undefined)
ROBUSTNESS METRICS: measure performance in the face of exceptional inputs and situations (largely undefined)
15
A NEW APPROACH
  • Group policies based on
  • Define new metrics
  • Classify groups on metrics

16
I PROPOSE A TOOLBOX OF CLASSIFICATIONS
  • Metrics of interest
  • Metric 1
  • Metric 2
  • Metric 3

Simple guidelines for building a policy that
performs well on Metrics 1,2,3.
17
I PROPOSE A TOOLBOX OF CLASSIFICATIONS
  • CLASS PROPERTIES: Any type T policy will be unfair.
  • IMPOSSIBILITY RESULTS: No type T policy can be both fair and efficient.

18
OUTLINE
1. Introduction
2. Efficiency
3. Fairness
4. Robustness
5. Practical Generalizations
6. Real-world Case Studies
19
EFFICIENCY METRICS
measure the overall system performance
  • mean response time
  • variance of response time
  • tail of response times
  • weighted response time

20
SIMPLE HEURISTIC
  • Definition: A work-conserving policy P is SMART if
  • (i) a job of remaining size greater than x can never have
    priority over a job of original size x, and
  • (ii) a job being run at the server can only be preempted by
    new arrivals.

bias towards small jobs
Sigmetrics 2005a
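Condition (i) is a static property of a priority rule and can be property-tested; condition (ii) concerns preemption dynamics and is not captured by a static check. A sketch with a hypothetical helper, not from the talk:

```python
import random

def violates_smart(priority, trials=20000, seed=0):
    """Randomly search for a pair of jobs violating the SMART bias
    condition: job A with remaining size greater than x must never
    have priority over job B of original size x.
    priority(original_size, remaining_size) -> number, smaller wins.
    Returns a violating pair, or None if none was found."""
    rng = random.Random(seed)
    for _ in range(trials):
        size_b = rng.uniform(0.1, 10.0)            # B's original size x
        rem_b = rng.uniform(0.0, size_b)           # B's remaining work <= x
        rem_a = size_b + rng.uniform(1e-6, 10.0)   # A's remaining work > x
        size_a = rem_a + rng.uniform(0.0, 5.0)     # A's original size >= remaining
        if priority(size_a, rem_a) < priority(size_b, rem_b):
            return (size_a, rem_a), (size_b, rem_b)
    return None
```

SRPT (key = remaining size) and PSJF (key = original size) pass the check, while a rule biased towards large jobs such as LRPT (key = negative remaining size) fails immediately.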
21
THEOREM
In an M/GI/1 system, any SMART policy P is near-optimal for mean response time: it stays within a constant factor of SRPT.
Sigmetrics 2005a
22
[Figure: mean response time vs. load in an M/GI/1 queue, comparing FCFS against SMART policies.]
Sigmetrics 2005a
23
OTHER EFFICIENCY METRICS
Are SMART policies near optimal for
(i) variance of response time,
(ii) the tail of the response time distribution,
(iii) expected slowdown?
(Known for SRPT: Queija, Borst, Boxma, Zwart, and others.)
Proposed work
24
Proposed work
26
FAIRNESS METRICS
compare the relative performance for different
types of jobs
27
[Timeline: notions of fairness (temporal, sizal, stream-based), with key developments in the 1980s, 2001, 2003, and 2005.]
28
[Image slides: everyday queues as examples: iTunes, the box office, the supermarket.]
SIZAL FAIRNESS
jobs of different sizes should receive 'proportional' performance
32
WHAT IS FAIR?
[Figure: delay vs. job size: everyone waits the same amount.]
33
SIZAL FAIRNESS
A policy P is s-fair if E[S(x)]^P ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
Metric: E[S(x)]^P = E[T(x)]^P / x. 1/x is the correct factor for normalization because, for all P, E[T(x)]^P = Ω(x).
  • Criterion: 1/(1-ρ)
  • E[S(x)]^PS = 1/(1-ρ)
  • min_P max_x E[S(x)]^P = 1/(1-ρ) for unbounded distributions
  • differentiates between distinct functional behaviors
Perf. Eval. 2002; Sigmetrics 2003
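For M/GI/1 FCFS the metric is explicit: the Pollaczek-Khinchine formula gives E[T(x)]^FCFS = x + λE[X²]/(2(1-ρ)), so E[S(x)] grows without bound as x → 0, while PS meets the criterion with equality for every x. A numeric sketch, not from the talk (hypothetical function name, exponential job sizes with mean 1 so ρ = λ):

```python
def fcfs_mean_slowdown(x, lam, ex2):
    """E[S(x)] under M/GI/1 FCFS with mean job size 1 (so rho = lam):
    E[T(x)] = x + lam*E[X^2]/(2*(1 - rho))  (Pollaczek-Khinchine),
    hence E[S(x)] = E[T(x)]/x = 1 + mean_wait/x."""
    rho = lam
    mean_wait = lam * ex2 / (2 * (1 - rho))
    return 1 + mean_wait / x

lam, ex2 = 0.8, 2.0            # exponential(1) sizes: E[X^2] = 2, rho = 0.8
criterion = 1 / (1 - lam)      # the s-fairness bound 1/(1 - rho) = 5

# FCFS exceeds the bound for small jobs and satisfies it for large
# ones, so FCFS is s-unfair; PS achieves exactly 1/(1 - rho) for all x.
small = fcfs_mean_slowdown(0.1, lam, ex2)    # far above the criterion
large = fcfs_mean_slowdown(10.0, lam, ex2)   # below the criterion
```

At these parameters the mean wait is 4, so E[S(x)] = 1 + 4/x crosses the criterion 5 exactly at x = 1: every job smaller than the mean is treated s-unfairly under FCFS.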
34
SIZAL FAIRNESS
A policy P is s-fair if E[S(x)]^P ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
Sigmetrics 2003
35
[Diagram: policies classified as Always s-Fair, Sometimes s-Fair, or Always s-Unfair; the SMART class, RS, and FSP are placed in the classification.]
Sigmetrics 2003
36
KEY PROOF IDEA
Theorem: Any preemptive, size-based policy P is Always s-Unfair.
Case 1: Some finite size y receives the lowest priority. Then the lowest-priority job is treated unfairly.
Case 2: No finite size receives the lowest priority. Then there is no lowest-priority job, so look at the limiting behavior as job size grows.
[Figure: E[S(x)] vs. x under PSJF; the curve rises above 1/(1-ρ). This hump appears under many policies.]
Sigmetrics 2003
39
SIZAL FAIRNESS
A policy P is s-fair if E[T(x)]^P / x ≤ 1/(1-ρ) for all x. Otherwise, P is s-unfair.
40
VARIANCE
What is the right metric for comparing variability across job sizes?
Candidate: a normalized variance Var[T(x)]^P / g(x). What should g(x) be?
41
A policy P is predictable if Var[T(x)]^P / x ≤ λE[X²]/(1-ρ)³ for all x. Otherwise, P is unpredictable.
Metric: Var[T(x)]^P / x. 1/x is the correct factor for normalization because Var[T(x)]^P = Θ(x) for common preemptive policies and Var[T(x)]^P = O(x) for all policies.
  • Criterion: λE[X²]/(1-ρ)³
  • differentiates between distinct functional behaviors
  • we conjecture that min_P max_x Var[T(x)]^P / x is λE[X²]/(1-ρ)³
Sigmetrics 2005b
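As a concrete check of the criterion: under M/GI/1 FCFS, T(x) = W + x, so Var[T(x)] is the waiting-time variance, which by the standard M/GI/1 moment formulas is (λE[X²]/(2(1-ρ)))² + λE[X³]/(3(1-ρ)), a constant in x. The normalized variance Var[T(x)]/x therefore blows up for small x. A sketch with a hypothetical function name, not from the talk:

```python
def fcfs_normalized_variance(x, lam, ex2, ex3):
    """Var[T(x)]/x under M/GI/1 FCFS with mean job size 1 (rho = lam).
    T(x) = W + x, so Var[T(x)] = Var(W), constant in x, with
    Var(W) = (lam*E[X^2]/(2*(1-rho)))**2 + lam*E[X^3]/(3*(1-rho))."""
    rho = lam
    var_wait = (lam * ex2 / (2 * (1 - rho))) ** 2 + lam * ex3 / (3 * (1 - rho))
    return var_wait / x

lam, ex2, ex3 = 0.8, 2.0, 6.0            # exponential(1): E[X^2]=2, E[X^3]=6
criterion = lam * ex2 / (1 - lam) ** 3   # lam*E[X^2]/(1-rho)^3 = 200

# Var(W) = 16 + 8 = 24 here, so Var[T(x)]/x exceeds the criterion
# exactly when x < 24/200 = 0.12: FCFS is unpredictable for small jobs.
```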
42
[Diagram: policies classified as Always Predictable, Sometimes Predictable, or Always Unpredictable.]
Sigmetrics 2005b
43
HIGHER MOMENTS
What is the right metric for comparing higher
moments across job sizes?
Perf. Eval. 2002; Sigmetrics 2005b
46
M/GI/1 PREEMPT-RESUME
Systems tend to have a limited number of priority
classes
Current work
47
M/GI/1 PREEMPT-RESUME
Many real systems depend on multiple servers.
QUESTA 2005; Perf. Eval. 2005
48
M/GI/1 PREEMPT-RESUME
Poisson arrivals can be unrealistic
Correlations between arrivals and
completions (open model vs. closed model)
Bursts of arrivals (batch arrivals)
Proposed
Under Submission
50
WEB SERVERS
need to schedule bandwidth to requests for files
  • Suggested policies: PS, GPS variants, SRPT, SRPT-hybrids, FSP, and many others
  • (Harchol-Balter, Schroeder, Rawat, Kshemkalyani, and many others)
51
ROUTERS
need to schedule service to flows
  • Suggested policies: FCFS, PS, GPS variants, LAS, LAS-hybrids, and many others
  • (Biersack, Rai, Urvoy-Keller, Bonald, Proutiere, and many others)
[Diagram: incoming packets → classifier → input queues → transmit queue.]
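LAS at a router reduces to simple bookkeeping: track attained service per flow and always transmit from the backlogged flow that has received the least, so new (and hence short) flows get priority. A minimal sketch, not from the talk (hypothetical class; single transmit queue, packet timing ignored):

```python
class LASScheduler:
    """Least Attained Service: transmit from the backlogged flow with
    the least service received so far; ties broken by flow id."""

    def __init__(self):
        self.attained = {}   # flow id -> bytes served so far
        self.queues = {}     # flow id -> FIFO list of packet sizes

    def enqueue(self, flow, size):
        """Accept a packet of `size` bytes for `flow`."""
        self.queues.setdefault(flow, []).append(size)
        self.attained.setdefault(flow, 0)

    def dequeue(self):
        """Transmit one packet from the least-served flow.
        Returns (flow, size), or None if nothing is queued."""
        backlogged = [f for f, q in self.queues.items() if q]
        if not backlogged:
            return None
        f = min(backlogged, key=lambda g: (self.attained[g], g))
        size = self.queues[f].pop(0)
        self.attained[f] += size
        return f, size
```

For example, if flow "a" has two 100-byte packets queued and flow "b" then arrives with a 50-byte packet, the transmit order is a, b, a: after "a" sends its first packet its attained service (100) exceeds the newcomer's (0), so "b" jumps ahead.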
52
WEB SERVERS and ROUTERS
Identify key metrics
Determine appropriate heuristics
Compare with current approaches
53
OUTLINE
2
Efficiency
Practical Generalizations
5
Fairness
Introduction
3
1
Real-world Case Studies
6
Robustness
4
54
A NEW APPROACH
  • Group policies based on
  • Define new metrics
  • Classify groups on metrics

55
Determine appropriate heuristics
Identify key metrics
56
TIMELINE
  • To this point
  • Spring/Summer 2005
  • Fall 2005/Winter 2006
  • Spring/Summer 2006
57
A THEORETICAL SCHEDULING TOOLBOX
Adam Wierman
  • Wierman, Harchol-Balter. Insensitive bounds on
    SMART scheduling. Sigmetrics 2005.
  • Harchol-Balter, Sigman, Wierman. Understanding
    the slowdown of large jobs in an M/GI/1 system.
    Perf. Eval. 2002.
  • Wierman, Harchol-Balter. Classifying scheduling
    policies with respect to unfairness in an
    M/GI/1. Sigmetrics 2003.
  • Wierman, Harchol-Balter. Classifying scheduling
    policies with respect to higher moments of
    response time. Sigmetrics 2005.
  • Harchol-Balter, Osogami, Scheller-Wolf, Wierman.
    Analysis of M/PH/k queues with m priority
    classes. QUESTA (to appear).
  • Wierman, Osogami, Harchol-Balter, Scheller-Wolf.
    How many servers are best in a dual priority
    FCFS system. Submitted to Perf. Eval.
  • Schroeder, Wierman, Harchol-Balter. Closed versus
    open system models: Understanding their impact
    on performance evaluation and system design.
    Under submission.

http://www.cs.cmu.edu/acw/thesis/