Title: Scheduling
Recommended Reading:
- Bacon, J.: Concurrent Systems (6)
- Stallings, W.: Operating Systems (9, 10)
- Silberschatz, A.: Operating System Concepts (5)
- Tanenbaum, A.: Modern Operating Systems (2, 12)
- Wettstein, H.: Systemarchitektur (9)
Overview:
- Motivation
- Scheduling Problems
- Design Parameters
- Single-Processor Scheduling
- Multiprocessor Scheduling
- Real-Time Scheduling
Motivation

In daily life we know time schedules, train schedules, etc., i.e. scheduling has to do with planning events (e.g. a Beatles reunion).

General scheduling problem: How to map executable units onto executing units such that overall system objectives or specific constraints are met, e.g. critical tasks/threads should meet their deadlines, i.e. they must execute in time, neither too early nor too late.

Remark: We focus only on the resource CPU.
Examples of Scheduling Problems

Scheduling occurs at different levels:
- Complete applications on a single- or multiprocessor system, e.g. which application may run when and for how long
- Threads of an application or a system task on a massively parallel machine, e.g. which thread of the active task has to run on which CPU
- Sequence of operations of a thread on a pipelined or superscalar CPU, e.g. ...

Remark: Each executable unit may need a different scheduling policy.
Concrete Scheduling Problems
- In a multiprogramming system several threads may be ready. Which of these ready threads should run next?
- In a single-user system you want to watch and listen to a Beatles (Gerd's favorite band) MPEG video from MTV or Viva via the Internet. How can network software, decoding, and output to screen and audio be handled well, even though in the background you initiated a compile of a non-trivial concurrent Pascal application?
- Flying to the moon in a comfortable Enterprise in the year 2030 may require some concurrent activities in your board controller, i.e. calculating a new course to avoid a crash with an asteroid might be more urgent than preparing and controlling the next meal.
Classical Scheduling Problems

How should we place these six threads onto the three CPUs? Is there an optimal schedule? As long as there is no objective we want to meet, we can produce neither an optimal nor a good schedule!
Classical Scheduling Problems

[Figure: Gantt chart placing the six threads on CPU1, CPU2, and CPU3 over time, starting at time 0]

This is quite a good schedule if we want to minimize the time the CPUs have to deal with these six threads. The policy behind this schedule is LPT (longest processing time first).
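The LPT rule mentioned above can be sketched in a few lines: sort the bursts in decreasing order and always give the next burst to the currently least-loaded CPU. The six burst durations below are made up for illustration, since the slide's figure values are not given in the text.

```python
import heapq

def lpt_schedule(burst_times, num_cpus):
    """Assign each job (longest first) to the currently least-loaded CPU.
    Returns the per-CPU assignment and the resulting makespan."""
    # heap of (current_load, cpu_index)
    heap = [(0, cpu) for cpu in range(num_cpus)]
    heapq.heapify(heap)
    assignment = {cpu: [] for cpu in range(num_cpus)}
    for burst in sorted(burst_times, reverse=True):
        load, cpu = heapq.heappop(heap)
        assignment[cpu].append(burst)
        heapq.heappush(heap, (load + burst, cpu))
    loads = [sum(bursts) for bursts in assignment.values()]
    return assignment, max(loads)

bursts = [7, 5, 4, 4, 3, 2]   # six threads (made-up durations)
plan, makespan = lpt_schedule(bursts, 3)
print(plan, makespan)         # makespan 9, which is optimal here (25/3 rounded up)
```

LPT is a greedy heuristic: it is not always optimal, but it is guaranteed to be within a factor of 4/3 of the optimal makespan on identical processors.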
Design Parameters concerning Scheduling
- Single-/multiprocessor system
- Homogeneous or heterogeneous multiprocessor system, e.g. same speed and/or same instruction set or not
- Static or dynamic set of executable units (i.e. threads, tasks, etc.)
- On-line or off-line scheduling
- Known or unknown execution times
- With or without preemption
- With or without precedence relations
- Whether communication costs must be taken into account
- With or without priorities
- With or without deadlines
- What the objective is (response time, turnaround time, throughput, etc.)
Objectives influencing Scheduling
- Length of the schedule (makespan), maximal turnaround time
- Maximal response time
- Mean turnaround time
- Mean response time
- Minimal number of involved CPUs
- Throughput
- CPU utilization
- Maximal lateness or tardiness

Remark: Not all of these objectives can be combined easily in just one policy.
Single-CPU Scheduling

Schedule the usage of the single processor among all existing threads in a local system. Main objectives are:
- High processor utilization
- High throughput: number of threads completed per unit time
- Low turnaround time: time elapsed from the submission of a request to its termination
- Low response time: time elapsed from the submission of a request to the beginning of the response
Classification of CPU Scheduling
- Long-term: which thread to admit
- Medium-term: which thread to swap in or out (activate or deactivate)
- Short-term: which ready thread to execute next
Queuing Diagram for Scheduling

[Figure: queuing diagram showing admitted threads flowing through the ready and blocked queues to the CPU until they terminate]
Long-Term Scheduling

Determines which tasks (either application tasks or system tasks) are admitted to the system; controls the degree of multiprogramming.

Consequences: if more tasks are admitted,
- it is less likely that all tasks will be blocked awaiting some event, so CPU usage improves (at least sometimes);
- each task gets a smaller fraction of the CPU, so response times grow longer.

The long-term scheduler may attempt to keep a mix of CPU-bound and I/O-bound tasks.
Medium-Term Scheduling

Swapping decisions based on the need to manage multiprogramming. Done by the memory-management software (discussed intensively in chapter 8) or by some specialized regulating module; see resident-set allocation and load control.
Short-Term or CPU Scheduling
- Determines which thread is going to be executed next
- Is the main subject of this chapter
- The short-term scheduler is also known as the dispatcher
- Is invoked on an event that may lead to choosing another thread for execution:
  - clock interrupts
  - I/O interrupts
  - operating system calls and traps
  - signals ...
Criteria for Short-Term Scheduling
- User-oriented
  - Response time: elapsed time from the submission of a request to the beginning of the first response
  - Turnaround time: elapsed time from the submission of a task to its completion
- System-oriented
  - Processor utilization
  - Fairness
  - Throughput: number of tasks completed per unit of time
Priorities

Often implemented by having multiple ready queues to represent each level of priority. The scheduler will always prefer threads of higher priority; thus low-priority threads may suffer starvation. To avoid starvation, a thread may dynamically change its priority based on its age or execution history.
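The aging idea above (priority grows with waiting time) can be sketched as a tiny selection function. The boost rate and the thread tuples are made up for illustration; higher effective priority wins here.

```python
def pick_with_aging(ready, clock, boost_per_tick=0.1):
    """ready: list of (name, base_priority, enqueue_time); higher value wins.
    Waiting threads slowly gain priority, so nobody starves forever."""
    def effective(thread):
        name, base, enqueued = thread
        return base + boost_per_tick * (clock - enqueued)
    return max(ready, key=effective)[0]

# A low-priority thread that has waited long enough beats a fresh,
# moderately high-priority one.
ready = [("old_low", 1, 0), ("new_high", 5, 95)]
print(pick_with_aging(ready, clock=100))   # -> "old_low" (1 + 10.0 beats 5 + 0.5)
```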
Characterization of Scheduling Policies
- The selection function determines which thread in the ready queue will be assigned next
- The decision mode specifies the instants in time at which the selection function is executed
  - Non-preemptive: once a thread is in the running state, it will continue until it terminates, yields, or blocks itself for I/O
  - Preemptive: the currently running thread may be interrupted and moved to the ready queue by the OS. Allows better service, since no thread can monopolize the processor for very long
The CPU-/I/O-Cycle
- Threads require alternate use of CPU and I/O in a repetitive fashion
- Each cycle consists of a CPU burst followed by a (usually longer) I/O burst
- A thread terminates on a CPU burst
- CPU-bound threads may have longer CPU bursts than I/O-bound threads
Histogram of CPU Bursts

[Figure: histogram of CPU-burst durations, frequency (20-160) vs. burst duration (8-40 ms), showing many short CPU bursts]

Remark: The above histogram is taken from Silberschatz and is based on old measurements, so don't look at the absolute values of time.
Example for various scheduling policies

  Thread | Arrival Time | Service Time
  -------+--------------+-------------
    1    |      0       |      3
    2    |      2       |      6
    3    |      4       |      4
    4    |      6       |      5
    5    |      8       |      2

Service time = total processor time needed in one (CPU-I/O) cycle. Tasks with a long service time are CPU-bound and are referred to as long jobs (see thread 2 above).
FCFS: First Come First Served policy

Selection function: the thread that has been waiting the longest time in the ready queue (hence, FCFS). Decision mode: non-preemptive, i.e. a thread keeps on running until it cooperates (e.g. yields), blocks itself (e.g. initiating an I/O), or terminates.

Remark: In general that's the way most things are scheduled in daily life, too. It's quite fair and proven by evolution.
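FCFS on the example table from the earlier slide (threads 1-5, arrivals 0, 2, 4, 6, 8, service times 3, 6, 4, 5, 2) can be computed directly, since each job simply runs to completion in arrival order:

```python
def fcfs(jobs):
    """jobs: list of (name, arrival, service), assumed sorted by arrival.
    Returns {name: (finish_time, turnaround_time)}."""
    clock = 0
    result = {}
    for name, arrival, service in jobs:
        clock = max(clock, arrival)   # CPU may be idle until the job arrives
        clock += service              # run the whole burst without preemption
        result[name] = (clock, clock - arrival)
    return result

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(fcfs(jobs))   # thread 5 finishes at 20 with turnaround 12
```

Note how the short thread 5 inherits a long turnaround (12 time units for a 2-unit burst) because it must wait behind every earlier arrival; this is exactly the drawback discussed on the next slide.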
Drawbacks of FCFS

A thread not performing any I/O monopolizes the CPU; FCFS implicitly favors CPU-bound threads. I/O-bound threads may have to wait until CPU-bound threads terminate. They may have to wait even when their I/Os have completed (-> poor device utilization). We could have kept the I/O devices a bit busier by giving higher priority to I/O-bound threads.
LCFS
- Selection function: the thread that has been waiting the shortest time in the ready queue (hence, LCFS)
- Decision mode: non-preemptive, i.e. a thread keeps on running until it cooperates (e.g. yields), blocks itself (e.g. initiating an I/O), or terminates

Remark: Without preemption it is rarely used. With preemption you may favor short tasks, which can finish before the next task is created.
Round Robin

Selection function: same as FCFS. Decision mode: (time-)preemptive. A non-cooperative thread is allowed to run until its time slice TS ends (TS is typically in [10, 100] ms; the exact values are system dependent). Then a timer interrupt occurs and the running thread is put back onto the ready queue.
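Round Robin can be sketched as a small simulation over the arrival/service table from the earlier example slide. One subtlety worth showing in code: threads that arrive during a time slice enter the ready queue ahead of the thread that was just preempted.

```python
from collections import deque

def round_robin(jobs, ts):
    """jobs: list of (name, arrival, service). Simulates Round Robin with
    time slice ts; returns the finish time of every thread."""
    pending = deque(sorted(jobs, key=lambda j: j[1]))
    ready = deque()
    remaining = {name: service for name, _, service in jobs}
    finish = {}
    clock = 0
    while pending or ready:
        # admit all jobs that have arrived by now
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if not ready:
            clock = pending[0][1]     # CPU idles until the next arrival
            continue
        name = ready.popleft()
        run = min(ts, remaining[name])
        clock += run
        remaining[name] -= run
        # arrivals during this slice queue up before the preempted thread
        while pending and pending[0][1] <= clock:
            ready.append(pending.popleft()[0])
        if remaining[name] > 0:
            ready.append(name)        # time slice used up: back to the tail
        else:
            finish[name] = clock
    return finish

jobs = [(1, 0, 3), (2, 2, 6), (3, 4, 4), (4, 6, 5), (5, 8, 2)]
print(round_robin(jobs, ts=1))   # {1: 4, 5: 15, 3: 17, 2: 18, 4: 20}
```

With TS = 1 the short thread 5 now finishes at time 15 instead of FCFS's 20, at the cost of more context switches.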
Time-Slice Quanta for Round Robin

TS should be substantially larger than the time required to handle the clock interrupt and perform the dispatching (but not too large, otherwise Round Robin degenerates to FCFS).

TS should be larger than the execution time of a typical interaction (but not much more, to avoid penalizing I/O-bound threads).
Drawbacks of Round Robin
- Still favors CPU-bound threads
- An I/O-bound thread doesn't use up its complete TS; it blocks waiting for an I/O
- A CPU-bound thread executing its complete TS is put back into the ready queue. Thus it can overtake an I/O-bound thread still waiting for the end of its last I/O
- Haldar's solution: virtual round robin
  - When an I/O has completed, the blocked thread is moved to an auxiliary queue, which gets preference over the main ready queue
  - Such a thread being dispatched from the auxiliary queue runs no longer than the basic time quantum minus the time it was running in the previous TS, i.e. it gets only the unused remainder of this TS
Queuing Model of Virtual Round Robin

[Figure: queuing diagram with the main ready queue and the preferred auxiliary queue feeding the CPU]

Discuss this solution carefully!
Priority Scheduling
- Selection function: the ready thread with the highest priority
- Decision mode: non-preemptive, i.e. a thread keeps on running until it cooperates (e.g. yields), blocks itself (e.g. initiating an I/O), or terminates

Drawbacks: danger of starvation and priority inversion

Remark: Priority-based scheduling is often done with preemption and with dynamic priorities.
Shortest Job (Task/Thread) Next

Selection function: the thread with the shortest expected CPU burst time. Decision mode: non-preemptive. I/O-bound threads will be picked first. We need to estimate the required processing time (CPU burst time) for each thread.
Estimating the required CPU burst
- Let T[i] be the execution time for the i-th instance of this thread, i.e. the actual duration of the i-th CPU burst of this thread
- Let S[i] be the predicted value for the i-th CPU burst of this thread. The simplest choice is

    S[n+1] = (1/n) * sum(i = 1 to n) T[i]

- To avoid recalculating the entire sum we can rewrite this as

    S[n+1] = (1/n) * T[n] + ((n-1)/n) * S[n]

- But this convex combination gives equal weight to each instance
Estimating the required CPU burst
- Recent instances are more likely to reflect future behavior
- A common technique for that is exponential averaging:

    S[n+1] = a * T[n] + (1-a) * S[n],   0 < a < 1

- More weight is put on recent instances whenever a > 1/n
- By expanding this equation, we see that the weights of past instances decrease exponentially:

    S[n+1] = a*T[n] + (1-a)*a*T[n-1] + ... + (1-a)^i * a * T[n-i] + ... + (1-a)^n * S[1]

- The predicted value of the first instance, S[1], is not calculated; it is usually set to 0 to give priority to new threads
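The recurrence S[n+1] = a*T[n] + (1-a)*S[n] is a one-liner in code. The burst values below are illustrative only; they show how the prediction chases a sudden change in burst length.

```python
def exponential_average(bursts, a=0.5, s1=0.0):
    """Return the successive predictions S[1], S[2], ..., S[n+1]
    for a sequence of measured CPU bursts T[1..n]."""
    predictions = [s1]        # S[1] = 0 favors new threads
    s = s1
    for t in bursts:          # t is the measured burst T[n]
        s = a * t + (1 - a) * s
        predictions.append(s)
    return predictions

# A thread whose bursts jump from ~5 to ~13: the estimate follows quickly.
print(exponential_average([6, 4, 6, 4, 13, 13, 13], a=0.8))
```

With a = 0.8, about 80% of each new measurement flows into the next prediction, so the estimate converges toward 13 within a couple of bursts, far faster than the simple arithmetic mean would.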
Exponentially Decreasing Averaging Coefficients

[Figure: weight (1-a)^i * a of a past instance as a function of its age i, for different values of a]
Use of Exponential Averaging

Here S[1] = 0 to give high priority to new threads. Exponential averaging tracks changes in a thread's behavior much faster than simple averaging.
Drawbacks of the SJN Policy

Possibility of starvation for longer threads as long as there is a steady supply of shorter threads. The lack of preemption is not suited to a time-sharing environment: a CPU-bound thread gets lower preference (as it should), but such a thread doing no I/O at all can monopolize the CPU if it is the first one to enter the system. SJN implicitly incorporates priorities: shorter jobs are given preference. The next (preemptive) algorithm penalizes longer jobs directly.
Shortest Remaining Job Next

[Figure: Gantt chart with threads T1-T4; T1 starts with remaining burst time 13, T2 arrives with remaining burst time 2, and T3 and T4 unblock with remaining burst times 1 and 2, each preempting the current thread]

Selection function: the thread with the shortest expected remaining CPU burst time. Decision mode: preemptive; any new or unblocked thread with a shorter remaining CPU burst time will preempt the currently running thread, i.e. we do not have to wait during a long CPU burst of a CPU-bound thread.
Highest Response Ratio Next

Response ratio r = (waiting time + processing time) / processing time

Selection function: the thread with the highest response ratio. Decision mode: non-preemptive.

Comment: Shorter jobs are favored; however, longer jobs do not have to wait forever, because their response ratio increases the longer they wait.
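The response-ratio formula above makes the anti-starvation property easy to demonstrate. The job tuples below are made up for illustration:

```python
def hrrn_pick(ready, clock):
    """ready: list of (name, arrival, service); pick the job with the
    highest response ratio r = (waiting + service) / service."""
    def ratio(job):
        name, arrival, service = job
        waiting = clock - arrival
        return (waiting + service) / service
    return max(ready, key=ratio)[0]

# Early on, a fresh short job beats a long job that has barely waited ...
print(hrrn_pick([("short", 8, 1), ("long", 0, 10)], clock=10))   # -> "short"
# ... but after enough waiting the long job's ratio wins, so it cannot starve.
print(hrrn_pick([("short", 39, 1), ("long", 0, 10)], clock=40))  # -> "long"
```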
Multilevel Feedback Policy

Preemptive scheduling with dynamic priorities. There are several ready queues with decreasing priorities:

    P(RQ0) > P(RQ1) > ... > P(RQn)

New threads are placed in RQ0. When they use up their time quantum, they are placed in RQ1; if they use it up again, they are placed in RQ2, and so on until they reach RQn. I/O-bound threads will stay in the higher-priority queues; CPU-bound jobs will drift downward. The dispatcher chooses a thread for execution from RQi only if RQi-1 to RQ0 are empty. Hence long CPU-bound threads may starve.
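The demotion mechanism described above can be sketched with a few queues. This is a minimal structural sketch, not a full simulator: it shows only admission to RQ0, picking from the highest-priority non-empty queue, and demotion on quantum expiry.

```python
from collections import deque

class MultilevelFeedback:
    def __init__(self, levels):
        self.queues = [deque() for _ in range(levels)]

    def admit(self, thread):
        self.queues[0].append(thread)          # new threads enter RQ0

    def pick(self):
        """Return (level, thread) from the highest-priority non-empty queue."""
        for level, q in enumerate(self.queues):
            if q:
                return level, q.popleft()
        return None

    def quantum_expired(self, level, thread):
        """Demote a thread that consumed its whole quantum (RQn is the floor)."""
        next_level = min(level + 1, len(self.queues) - 1)
        self.queues[next_level].append(thread)

mlfq = MultilevelFeedback(levels=3)
mlfq.admit("A"); mlfq.admit("B")
level, t = mlfq.pick()           # -> (0, "A")
mlfq.quantum_expired(level, t)   # A used its quantum and drops to RQ1
print(mlfq.pick())               # -> (0, "B"): RQ0 is still preferred
```

An I/O-bound thread would block before its quantum expires and so re-enter its current queue rather than being demoted; that path is omitted here for brevity.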
Multilevel Feedback Queues

[Figure: queues RQ0 ... RQn feeding the CPU; a thread may terminate from any queue]

FCFS is used in each queue except for the lowest-priority queue, where Round Robin is used.
Time Quantum for each Multilevel Feedback Queue

With a fixed quantum time, the turnaround time of longer threads can stretch out alarmingly. To compensate, we can increase the time quantum according to the depth of the queue; for example, a time quantum of 2^(i-1) for RQi. Longer threads may still suffer starvation. Possible fix: promote a thread to a higher priority after some time.
Comparison of different Scheduling Policies
- Which scheduling policy is the best one?
- The answer may depend on
  - system workload (extremely variable)
  - hardware support for the dispatcher
  - relative weighting of performance criteria (response time, CPU utilization, throughput, ...)
  - the evaluation method used (each has its limitations ...)
- Hence the answer depends on too many factors to give a concluding and satisfying answer
Fair-Share Scheduling

In a multi-user system, each user can run several tasks concurrently, each one consisting of some threads. Users may belong to user groups, and each user group should have its fair share of the CPU. This is the basic philosophy of fair-share scheduling.

Example: if there are 4 equally important departments (groups) and one department has more threads than the others, the degradation of response time or turnaround time should be more pronounced for that department.
The Fair-Share Scheduler

Has been implemented on some Unix OSes. Processes (tasks) are divided into groups; group k has a fraction W[k] of the CPU. The priority P[j,i] of process j (belonging to group k) at time interval i is given by

    P[j,i] = B[j] + CPU[j,i-1]/2 + GCPU[k,i-1]/(4*W[k])

A high value means a low priority; the process with the highest priority is executed next.

    B[j]       base priority of process j
    CPU[j,i]   exponentially weighted average of processor usage by process j in time interval i
    GCPU[k,i]  exponentially weighted average of processor usage by group k in time interval i
The Fair-Share Scheduler

The exponentially weighted averages use a = 1/2:

    CPU[j,i]  = U[j,i-1]/2  + CPU[j,i-1]/2
    GCPU[k,i] = GU[k,i-1]/2 + GCPU[k,i-1]/2

where U[j,i] is the processor usage by process j in interval i and GU[k,i] is the processor usage by group k in interval i.

Recall that P[j,i] = B[j] + CPU[j,i-1]/2 + GCPU[k,i-1]/(4*W[k]). The priority worsens as the process and its group use the processor; the larger the weight W[k], the less the group usage lowers the priority.
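Plugging illustrative numbers into the priority formula above shows the group effect (all values below are made up; lower result = better priority, as stated on the previous slide):

```python
def fss_priority(base, cpu_avg, group_cpu_avg, group_weight):
    """Fair-share priority P[j,i] = B[j] + CPU[j,i-1]/2 + GCPU[k,i-1]/(4*W[k]).
    A higher value means a lower scheduling priority."""
    return base + cpu_avg / 2 + group_cpu_avg / (4 * group_weight)

# Two processes with the same base priority and the same personal CPU usage,
# but process 1's group recently used the CPU three times as much.
p1 = fss_priority(base=60, cpu_avg=30, group_cpu_avg=60, group_weight=0.5)
p2 = fss_priority(base=60, cpu_avg=30, group_cpu_avg=20, group_weight=0.5)
print(p1, p2)   # 105.0 85.0 -> process 2 is preferred
```

Even though both processes behaved identically, the one from the busier group is penalized, which is exactly the fair-share effect described on the motivation slide.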
Concluding Remarks on Scheduling Policies

OSes supporting interactive tasks schedule with preemption. Commercial systems often use a combination of
- a time-slice mechanism (i.e. preemption by time) and
- priorities (classifying different task classes).
Often the priorities are a combination of
- a static part (classifying the task type) and
- a dynamic part mirroring the behavior of the task and/or the load of the system.
Further Scheduling Algorithms?

There are many publications on this battlefield. There is even some theory of scheduling (since 1966). Some gurus argue "you can do everything with priorities"; others say "don't use priorities at all".

Remark: Discuss on your own the benefits and drawbacks of priorities concerning appropriate performance measures!
Multiprocessor and Real-Time Scheduling

Recommended Reading:
- Bacon, J.: Concurrent Systems (6)
- Silberschatz, A.: Operating System Concepts (5)
- Stallings, W.: Operating Systems (10)
- Tanenbaum, A.: Modern Operating Systems (2, 12)
- Wettstein, H.: Systemarchitektur (9)
Overview:
- Characteristics of SMPs
- Motivation
- Additional Scheduling Parameters
- Multiprocessor Scheduling
- Real-Time Scheduling
- Examples
Another Classification of Multiprocessors
- Loosely coupled multiprocessing: each processor has its own memory and I/O channels, e.g. a cluster of workstations
- Functionally specialized processors: such as an I/O processor controlled by a master processor
- Tightly coupled multiprocessing: processors share main memory and are controlled by one operating system
Variants of tightly-coupled Multiprocessors
- Asymmetric multiprocessing
  - Master/slave relation
  - The master handles scheduling, interrupts, etc.
  - The slaves are dedicated to application tasks
  - Main drawback: master fails -> system fails
- Symmetric multiprocessing (SMP)
  - Any processor can handle any task/thread
  - A thread may be executed on different processors during its execution
  - Interrupts may be delivered to any processor
Symmetric Multiprocessor

[Figure: processors 1 ... p, each with a private L1 and L2 cache, connected via the system bus to main memory and I/O controllers (disk, printer)]
Additional Scheduling Requirements
- Scheduling interrupts
- Scheduling threads
- Scheduling tasks
Scheduling Interrupts
- Interrupts can be handled on any processor
- I/O interrupts should be handled on the processor that initiated the corresponding I/O activity
  - the thread having initiated the I/O may be bound to that processor
- Interrupts should be handled on a processor already handling another interrupt
  - we can save one user/kernel mode switch
  - but we may postpone interrupt handling due to interrupt convoys
- Interrupts should be handled on a processor running a low-priority activity (i.e. the idle thread)
Scheduling Threads and Tasks
- Single-threaded tasks
  - scheduling single-threaded tasks sharing code or data onto the same processor may reduce cache-loading time
  - anonymous scheduling on any processor may reduce turnaround times (Stallings calls this mechanism load sharing)
- Threads as members of tasks
  - scheduling all threads of one task on the same processor may save cache-loading times, but also eliminates concurrent execution completely
  - scheduling threads of one task on as many processors as possible supports concurrency, but may lengthen cache-loading times
  - scheduling the threads of one task at the same time (gang scheduling) may profit from parallel execution
Additional CPU-Scheduling Parameters

Suppose you have to schedule the following multi-threaded application on an empty, tightly-coupled 4-processor multiprogramming system.

[Figure: precedence graph with threads T0 and T1 on top, T2 and T3 in the middle, and T4 and T5 at the bottom]

1. Number of processors to be involved
2. Precedence relation
3. Communication costs
Additional CPU-Scheduling Parameters

[Figure: the thread set T0-T5 and the four CPUs (CPU 0 - CPU 3), with different placement possibilities]

1. Scheduling parameter: number of processors to be involved

Discuss the pros and cons of each of the above possibilities.
Additional CPU-Scheduling Parameters

[Figure: Gantt chart of T0-T5 on CPU 0 only, time axis 0-15; theoretical schedule length 17]

1. Scheduling parameter: number of processors to be involved: 1 processor (suppose CPU 0)

Pro: Identical to a solution on a single-processor system; unused processors may be reserved for other applications.
Con: You do not use the parallelism offered by the hardware, thus your turnaround time is high.
Additional CPU-Scheduling Parameters

[Figure: Gantt chart of T0-T5 on CPU 0 and CPU 1, time axis 0-15; theoretical schedule length 9]

1. Scheduling parameter: number of processors to be involved: some processors (suppose CPU 0 and CPU 1)

Pro: Theoretically smaller maximal turnaround time due to the parallel execution of the Tj.
Con: Due to critical sections within these Tj, the individual turnaround times may be larger.
Additional CPU-Scheduling Parameters

[Figure: Gantt chart of T0-T5 on CPU 0 and CPU 1 with busy-waiting phases, time axis 0-15; practical schedule length 12]

1. Scheduling parameter: number of processors to be involved: some processors (suppose CPU 0 and CPU 1)

Can you imagine further constraints leading to a longer schedule?
Additional CPU-Scheduling Parameters

[Figure: Gantt chart of T0-T5 on all four CPUs, time axis 0-15; theoretical schedule length 5]

1. Scheduling parameter: number of processors to be involved: all processors

Pro: Theoretically shortest maximal turnaround time due to the parallel execution of the Tj.
Con: Due to critical sections within these Tj, the individual turnaround times may be larger.
Additional CPU-Scheduling Parameters

[Figure: precedence graph over T0-T5 and the four CPUs; the arrows indicate the precedence relation]

2. Scheduling parameter: precedence constraints, i.e. a certain Ti has to be finished before Tj may start to execute.
Additional CPU-Scheduling Parameters

[Figure: Gantt chart of T0-T5 respecting the precedence relation, time axis 0-15; theoretical schedule length 13]

2. Scheduling parameter: precedence constraints, i.e. a certain Ti has to be finished before Tj may start to execute. The arrows indicate the precedence relation.
Additional CPU-Scheduling Parameters

[Figure: the thread set T0-T5 with communication between the threads]

3. Scheduling parameter: communication costs between threads. Communication between threads on different processors has to be done via main memory; communication between threads on the same processor could be done via caches or registers.

Conclusion: What you might gain via concurrency you could lose due to communication.
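A greedy list scheduler honoring a precedence relation, in the spirit of the T0-T5 example above, can be sketched as follows. The durations and the precedence edges are made up for illustration, since the slides give only the graph shape, not concrete numbers.

```python
def precedence_schedule(durations, preds, num_cpus):
    """Greedy list schedule: repeatedly start the ready task with the
    earliest possible start time on the earliest-free CPU.
    Returns the finish time of every task."""
    finish = {}
    cpu_free = [0] * num_cpus
    remaining = sorted(durations)          # deterministic task order
    while remaining:
        # tasks whose predecessors have all finished
        ready = [t for t in remaining
                 if all(p in finish for p in preds.get(t, []))]
        def start_time(t):
            dep = max((finish[p] for p in preds.get(t, [])), default=0)
            return max(dep, min(cpu_free))
        task = min(ready, key=start_time)
        s = start_time(task)
        cpu = cpu_free.index(min(cpu_free))
        cpu_free[cpu] = s + durations[task]   # CPU idles until s if needed
        finish[task] = s + durations[task]
        remaining.remove(task)
    return finish

durations = {"T0": 3, "T1": 3, "T2": 2, "T3": 2, "T4": 1, "T5": 1}
preds = {"T2": ["T0"], "T3": ["T1"], "T4": ["T2"], "T5": ["T3"]}
fin = precedence_schedule(durations, preds, num_cpus=2)
print(fin)   # both chains finish at time 6 on two CPUs
```

Note that this sketch ignores communication costs; per the conclusion above, placing a chain such as T0 -> T2 -> T4 on a single CPU may beat a placement that looks better on paper once memory traffic is counted.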
Synchronization Granularity

[Figure: spectrum of synchronization granularity, from fine-grained over medium- and coarse-grained to independent parallelism]
Independent Parallelism

Separate, independent tasks are running; there is no explicit synchronization between tasks. An example is time sharing: a user does some word processing, etc. Average response time and turnaround time improve.

Result: Scheduling the above independent tasks on all processors using some load-balancing scheme may be a good idea.
(Very) Coarse-Grained Parallelism

Little synchronization or communication among tasks is needed. The speedup may exceed what would be expected from simply adding the number of processors, due to synergies in disk buffers and the sharing of code.
Medium-Grained Parallelism

Parallel processing or multithreading within a single application: the application is a collection of threads, and the threads usually interact frequently.

Result: Due to these frequent interactions between the threads of an application, you have to be sure that involving more processors really improves the execution of the application.
Scheduling of Multithreaded Tasks

Anonymous scheduling: any thread executes separately from the rest of the task. An application can be a set of threads that cooperate and execute concurrently in the same address space. Threads running on separate processors may yield a dramatic gain in performance.
Scheduling Threads
- Dedicated scheduling
  - threads of the same application are assigned to a specific processor
- Dynamic scheduling
  - threads may be assigned to any processor during execution
  - the number of threads can be altered during the course of execution
Scheduling Threads
- Load is distributed evenly across the processors
- Assures that no processor is idle
- No centralized scheduler is required
- You can use global queues
  - However, a central queue needs mutual exclusion
  - The central queue may become a bottleneck when more than one processor looks for work at the same time
  - Preempted threads are unlikely to resume execution on the same processor, thus cache use is less efficient
  - If all threads are in the global queue, all threads of one task will hardly gain access to the processors at the same time
Gang Scheduling
- Simultaneous scheduling of the threads that make up a task
- Useful for applications where performance severely degrades when any part of the application is not running
- Threads often need to synchronize with each other
Dedicated Processor Assignment
- When an application task is scheduled, all its threads are assigned to one processor
- If dispatching takes place within the task, you can avoid some switching overhead
- However, some processors may be idle whilst others are overcrowded
Mapping of dedicated and anonymous threads

[Figure: dedicated threads bound to CPU 1, CPU 2, and CPU 3; anonymous threads may be assigned to any available processor]

Problem: Find an efficient data structure for the ready queue!
One common randomly ordered ready queue

[Figure: a single ready queue, randomly mixing the anonymous threads TA,1-TA,3 with the dedicated threads T1,1-T1,5 (for CPU 1), T2,1-T2,4 (for CPU 2), and T3,1-T3,3 (for CPU 3)]

Policy: Assign the first fitting thread in the ready queue.

Drawbacks:
1. You do not assign the head of the ready queue, thus there is some additional lookup overhead.
2. You may assign an anonymous thread to CPUx even though there is a dedicated thread Tx for CPUx; thus one of the other CPUs may go idle next!
One common randomly ordered ready queue

[Figure: the same single mixed ready queue as on the previous slide]

Policy: Assign the best fitting thread in the ready queue.

Drawback: You may have to look up the complete ready queue.
One anonymous and m (= 3) dedicated ready queues

[Figure: one dedicated ready queue per CPU (T1,1-T1,5 for CPU 1, T2,1-T2,4 for CPU 2, T3,1-T3,3 for CPU 3) plus one anonymous queue holding TA,1-TA,3]

Policy: Prefer dedicated threads. First look up the appropriate dedicated queue, then look up the anonymous queue.

Drawback: High-priority anonymous threads may suffer from low-priority dedicated threads.
One anonymous and m dedicated ready queues

[Figure: the same queue structure as on the previous slide]

Policy: Strictly prefer threads with higher priority. Compare the head of the appropriate dedicated queue with the head of the anonymous queue and pick the one with the higher priority.
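The last policy above (compare the heads of the dedicated and anonymous queues by priority) maps naturally onto per-CPU priority heaps plus one shared heap. This is a sketch under the assumption that a lower number means a higher priority; the thread names mirror the figure's notation.

```python
import heapq

class ReadyQueues:
    def __init__(self, num_cpus):
        # one priority heap per CPU plus one anonymous heap; entries are
        # (priority, sequence, thread), where a lower value wins
        self.dedicated = [[] for _ in range(num_cpus)]
        self.anonymous = []
        self.seq = 0

    def add(self, thread, priority, cpu=None):
        target = self.anonymous if cpu is None else self.dedicated[cpu]
        heapq.heappush(target, (priority, self.seq, thread))
        self.seq += 1   # sequence number keeps FIFO order within a priority

    def pick(self, cpu):
        """Compare the heads of this CPU's dedicated queue and the
        anonymous queue; dispatch the higher-priority one."""
        ded, anon = self.dedicated[cpu], self.anonymous
        if ded and (not anon or ded[0][:2] <= anon[0][:2]):
            return heapq.heappop(ded)[2]
        if anon:
            return heapq.heappop(anon)[2]
        return None

rq = ReadyQueues(num_cpus=2)
rq.add("T1,1", priority=5, cpu=0)   # dedicated to CPU 0
rq.add("TA,1", priority=1)          # high-priority anonymous thread
print(rq.pick(0))   # -> "TA,1": the anonymous thread wins on priority
```

Because only the two heads are compared, a dispatch costs O(log n) per queue instead of the full-queue scan required by the "best fitting thread" policy on the single mixed queue.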
Dynamic Scheduling
- The number of threads in a task is altered dynamically by the application
- The operating system adjusts the load to improve processor utilization:
  - assign idle processors
  - new arrivals may be assigned to a processor that is used by a job currently using more than one processor
  - hold the request until a processor is available
  - new arrivals will be given a processor before existing running applications are assigned
Scheduling Tasks
- Single-threaded tasks
  - fair-share scheduling between tasks is straightforward, OR
  - prefer tasks with a partly shared address space (swap in/out)
- Multi-threaded tasks
  - fair-share scheduling on a task or thread basis (the latter favors advanced programming)
  - swap in/out the complete task

Remark: VAX/VMS supports another scheduling unit, the session, which is directly related to a user; thus you can establish fair-share scheduling on a session basis.
Real-Time Scheduling
- Correctness of the system depends not only on the logical result of the computation but also on the time at which the results are produced
- Tasks attempt to control events or to react to events that take place in the outside world
- These events occur in real time and processing must be able to keep up with them
- Processing must happen timely, neither too late nor too early
Real-Time System (Definition)

An RT system accepts an activity A and guarantees its requested (timely) behavior B if and only if the RT system finds a schedule that
- includes all already accepted activities Ai and the new activity A,
- guarantees all requested timely behaviors Bi and B, and
- can be enforced by the RT system.

Otherwise, the RT system rejects activity A.
Typical Real-Time Systems
- Control of laboratory experiments
- Process control in factories
- Robotics
- (Air) traffic control
- Cars / trains / planes
- Telecommunications
- Remote surgery
- Multimedia

Remark: Some of the above applications have so-called soft real-time requirements, some have hard real-time requirements.
Hard Real-Time Systems

Requirements: must always meet all deadlines (time guarantees). You have to guarantee that in any case these applications are done in time; otherwise dangerous things may happen.

Examples:
1. If the automatic landing of a jet cannot react to sudden side winds within some ms, a crash might occur.
2. An airbag system or the ABS has to react within some ms.
3. A remote scalpel in a surgical operation must immediately follow all movements of the surgeon.
Soft Real-Time Systems
- Requirements
  - Must meet almost all deadlines, e.g. 99.9% of them
- Examples
  - Multimedia: 100 frames per day might be dropped (late)
  - Car navigation: 5 late announcements per week are acceptable
  - Washing machine: washing 10 sec over time might occur once in 10 runs, 50 sec once in 100 runs
Characteristics of Real-Time Systems
- Some deterministic behavior
  - operations are performed at fixed, predetermined times or within predetermined time intervals (periodically)
  - concerned with how long the operating system delays before acknowledging an interrupt
- User control
  - specify paging (e.g. pinning): which tasks must always reside in main memory
  - rights of tasks/threads
- Reliability
  - degradation of performance may have catastrophic consequences
  - the most critical, high-priority tasks must execute
Features Not Characterizing an RTOS
- Very fast context switch (should be fast anyway)
- Overall size pretty small (an OS should be small anyway)
- Ability to respond to external interrupts quickly
- Multitasking with inter-process communication (IPC) tools such as semaphores, signals, and events
- Files that accumulate data at a fast rate
Typical Properties of an RTOS
- Preemptive scheduling (immediate preemption allows the operating system to respond to an interrupt quickly)
- Minimal disable-interrupt periods
- Precise wakeup times
- RT scheduling
Real-Time Scheduling Approaches
- Static table-driven: a static analysis yields a schedule that determines at run time when each task begins execution
- Static priority-driven preemptive: a traditional priority-driven scheduler is used with static priorities
- Dynamic planning-based: feasibility is checked at run time before a new task is accepted
- Dynamic best effort: no feasibility analysis, hence not really RT
Real-Time Scheduling Events

A real-time task (thread) Ti has to be executed within [ai, di], i.e. between its arrival time ai and its deadline di.

A real-time task being too late may be tolerated in soft real-time systems; in hard real-time systems, however, this may have severe consequences!
Deadline Scheduling
- Real-time applications are not concerned with raw speed but with completing tasks in time
- Scheduling tasks with the earliest-deadline policy minimizes the fraction of tasks that miss their deadlines
  - this includes new tasks and the amount of time needed for existing tasks

Jackson's rule (EDD policy): Any schedule ordering the threads according to non-decreasing deadlines is optimal with respect to maximal lateness.
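Jackson's rule above is easy to check numerically: sort by deadline, then compute each thread's lateness (finish time minus deadline). The job set below is made up for illustration; all jobs are assumed available at time 0, which is the setting in which EDD is optimal.

```python
def edd_max_lateness(jobs):
    """jobs: list of (name, service, deadline), all available at time 0.
    Schedule by non-decreasing deadline (EDD) and return the per-job
    lateness plus the maximal lateness."""
    clock = 0
    lateness = {}
    for name, service, deadline in sorted(jobs, key=lambda j: j[2]):
        clock += service                    # run the job to completion
        lateness[name] = clock - deadline   # negative = finished early
    return lateness, max(lateness.values())

jobs = [("A", 2, 4), ("B", 3, 10), ("C", 1, 3)]
late, worst = edd_max_lateness(jobs)
print(late, worst)   # order C, A, B; maximal lateness -1, so no deadline is missed
```

A maximal lateness <= 0 means every deadline is met; by Jackson's rule, no other ordering of these jobs can achieve a smaller maximal lateness.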
Scheduling of Real-Time Tasks

[Figure: five tasks A-E on a timeline from 0 to 120, each shown with its arrival time, execution requirement, and starting deadline; the deadline order of the tasks is A, D, B, C, E]
Scheduling of Real-Time Tasks

[Figure: the resulting earliest-deadline schedule of tasks A-E on the timeline from 0 to 120]
Refinement of EDD

Add a preemption mechanism to the earliest-deadline-first rule; then you are minimizing the maximum lateness even when tasks arrive at different times.
Periodic Scheduling

Many real-time applications deal with periodic tasks, i.e. a task is characterized by pi, its expected period.

For any periodic task you have to perform a so-called feasibility test, i.e. check whether you can schedule this task in time or not; thus the following must hold: 0 < bi < di < pi (with execution time bi and deadline di).
Periodic Scheduling

Theorem: If there are n periodic real-time threads Ti, then there is a feasible schedule iff

    sum(i = 1 to n) b[i]/p[i] <= 1

One commonly used scheduling policy for periodic tasks is called rate-monotonic scheduling, favoring the tasks with smaller periods.

Remark: Rate-monotonic schedules can guarantee the processor only up to a utilization of about 69%.
Rate-Monotonic Scheduling

Assumptions:
(1) Task Ti is periodic with period length pi
(2) Deadline di = pi
(3) Ti is ready again immediately after pi
(4) Ti has a constant execution time bi (< pi)
(5) The smaller the period, the higher the priority

Example: T = {T1, T2, T3}, p = (4, 6, 8), b = (1, 2, 2)

How to schedule on 1 CPU? Just use the above priority scheme!
Rate-Monotonic Scheduling

[Figure: rate-monotonic schedule of the example T = {T1, T2, T3}, p = (4, 6, 8), b = (1, 2, 2) over one hyperperiod of 24 time units]
Result of the Rate-Monotonic Scheduling Example

We got a feasible schedule, because the feasibility criterion is met: 1/4 + 2/6 + 2/8 = 20/24 < 1.

The processor utilization is (24 - 4)/24 = 20/24, i.e. about 83%.

For further topics on real-time and non-real-time scheduling see the lecture "Scheduling" by Claude Hamann (Uni Dresden), ST 2002.
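The numbers of the example can be re-checked in a few lines: the utilization sum gives the feasibility criterion, and the hyperperiod (the least common multiple of the periods) gives the 24-unit window the slide's schedule is drawn over.

```python
from math import gcd
from functools import reduce

def utilization(b, p):
    """Feasibility criterion of the slide: sum of b[i]/p[i]."""
    return sum(bi / pi for bi, pi in zip(b, p))

def hyperperiod(p):
    """Least common multiple of all periods."""
    return reduce(lambda x, y: x * y // gcd(x, y), p)

b, p = (1, 2, 2), (4, 6, 8)
u = utilization(b, p)
print(u, hyperperiod(p))   # 0.8333... (= 20/24 < 1, feasible) and 24
```

Within one hyperperiod of 24 units the tasks demand 6*1 + 4*2 + 3*2 = 20 units of CPU time, which matches the utilization (24 - 4)/24 stated above.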
UNIX SVR4 Scheduling

A set of 160 priority levels is divided into three priority classes. Because the basic kernel is not preemptive, some spots called preemption points have been added, allowing better reaction times for real-time applications.

A dispatching queue per priority is implemented; processes on the same priority level are executed in RR. Real-time processes have fixed priorities and fixed time slices; time-shared processes have dynamic priorities and varying time slices ranging from 10 to 100 ms.
Windows NT Priorities

Windows supports fixed priorities [16, 31] for real-time applications. Time-shared applications may change their priorities within [0, 15] according to their behavior concerning I/O bursts and CPU bursts.

If n > 2 processors are available, (n-1) of them are busy with the (n-1) highest-priority threads, whereas the remaining processor executes all remaining ready threads. You have the possibility to pin a task or its threads to specific processors.