Title: Scheduling Silber Chap 6
1 Scheduling (Silber Chap 6)
2 CPU (Short-Term) Scheduling
- Ready queue: the list of runnable processes.
- Short-term scheduling: selecting a process from the ready queue.
- The CPU runs the selected process until the process blocks or is preempted (its time quantum expires).
- The scheduler then chooses another process to run, and so on.
- The algorithm that chooses which process to run next is termed the short-term scheduling policy or discipline.
- Some policies are preemptive: the CPU may switch processes even when the current process isn't blocked. A toy sketch of this pick-run-requeue loop follows below.
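Below is a toy Java sketch of that loop: take a process from the ready queue, "run" it until it blocks or its quantum expires, then pick again. Everything here (the class name, the fake blocking rule, strings standing in for PCBs) is made up for illustration; a real short-term scheduler lives inside the kernel.

import java.util.ArrayDeque;
import java.util.Deque;

// Toy illustration of the short-term scheduling loop: pick a process
// from the ready queue, "run" it until it blocks or its quantum
// expires, then pick again. Process names stand in for real PCBs.
public class SchedulerLoopSketch {
    public static void main(String[] args) {
        Deque<String> readyQueue = new ArrayDeque<>();
        readyQueue.add("P1");
        readyQueue.add("P2");
        readyQueue.add("P3");

        for (int step = 0; step < 6 && !readyQueue.isEmpty(); step++) {
            String current = readyQueue.poll();     // policy here: plain FIFO
            System.out.println("dispatch " + current);
            boolean blocked = (step % 3 == 2);      // pretend every third run blocks on I/O
            if (!blocked) {
                readyQueue.add(current);            // quantum expired: back to the tail
            }
            // a blocked process would re-enter the ready queue when its I/O completes
        }
    }
}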
3 Basic Scheduling Concepts
- Maximum CPU utilization is obtained with multiprogramming.
- Analysis of typical process execution shows that the average process consists of a CPU-I/O cycle.
- A CPU-I/O cycle consists of a "burst" of CPU activity (execution) followed by a wait for I/O.
- The cycle continues until the end of the process, generally completed with a final CPU burst.
- A typical CPU burst distribution can be determined.
4 CPU Scheduler
- Selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them.
- CPU scheduling decisions may take place when a process:
- 1. switches from running to waiting (I/O request, wait for child),
- 2. switches from running to ready (time quantum expires),
- 3. switches from waiting to ready (I/O complete), or
- 4. terminates.
- Which of the above can take advantage of preemption?
5 Scheduling Criteria
- Several metrics are used to evaluate scheduling algorithms:
- CPU utilization: keep the CPU as busy as possible.
- Throughput: the number of processes that complete per time unit.
- Both of these measures depend not only on the scheduling algorithm, but also on the offered load.
- If the load is very light--jobs arrive only infrequently--what happens to throughput and utilization?
6 Measuring Whether a Job Gets Good Service
- Each job wants good service (starts quickly, runs quickly, and finishes quickly). Several more metrics apply:
- Turnaround time: the interval from arrival to completion.
- Waiting time: the time a process spends waiting in the ready queue.
- Response time: the time from arrival until the process starts to produce output. More important than throughput for interactive systems.
- Which of the above should be maximized/minimized? (A small worked example of these metrics appears below.)
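A tiny worked example of the three per-job metrics, with made-up numbers. Note that waiting time equals turnaround minus total CPU time only for a job that does no I/O, which is assumed here:

// Illustrative computation of the per-job metrics defined above,
// using made-up times for a single CPU-only job.
public class ServiceMetrics {
    public static void main(String[] args) {
        int arrival = 0;        // time the job entered the ready queue
        int firstOutput = 5;    // time it first produced output
        int completion = 30;    // time it finished
        int burst = 12;         // total CPU time it actually used (no I/O assumed)

        int turnaround = completion - arrival;   // arrival -> completion
        int waiting = turnaround - burst;        // time spent in the ready queue
        int response = firstOutput - arrival;    // arrival -> first output

        System.out.println("turnaround = " + turnaround);  // 30
        System.out.println("waiting    = " + waiting);     // 18
        System.out.println("response   = " + response);    // 5
    }
}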
7 Scheduling Algorithm's View of a Process
- From a short-term scheduler perspective, one useful way to look at a process is as a sequence of CPU bursts, exclusive of I/O.
- Each burst is the computation done by a process between the time it is given the CPU and the next time it blocks.
- To the short-term scheduler, each individual CPU burst of a process looks like a tiny job.
- We will assume that a process completes its I/O and is ready to run prior to the next time it participates in the CPU allocation selection process.
8 Dispatching
- The dispatcher module gives control of the CPU to the process selected by the short-term scheduler; this involves:
- switching context,
- switching to user mode, and
- jumping to the proper location in the user program to restart that program.
- Dispatch latency: the time it takes for the dispatcher to stop one process and start another running.
9 First-Come, First-Served (FCFS) Scheduling (non-preemptive)
- The simplest scheduling discipline is called first-come, first-served (FCFS).
- The ready list is a simple queue (first-in/first-out).
- The scheduler simply runs the first job on the queue until it blocks, then it runs the new first job (the job waiting the longest), and so on, much like the checkout line in a grocery store.
- When a job becomes ready, it is added to the end of the queue.
10 First-Come, First-Served (FCFS) Scheduling Example
- Process  Burst Time
- P1       24
- P2       3
- P3       3
- Suppose the processes arrive at time 0 in the order P1, P2, P3. The Gantt chart for the schedule is: P1 from 0 to 24, P2 from 24 to 27, P3 from 27 to 30.
- Waiting times: P1 = 0, P2 = 24, P3 = 27
- Average waiting time = (0 + 24 + 27)/3 = 17
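The waiting times above can be checked with a few lines of Java. For FCFS with every job arriving at time 0, each job simply waits for the sum of the bursts ahead of it (a sketch for this example only, not a general scheduler):

// FCFS with all arrivals at time 0: each process waits for the
// total burst time of everything queued ahead of it.
public class FcfsWaiting {
    public static void main(String[] args) {
        int[] bursts = {24, 3, 3};   // P1, P2, P3 in arrival order
        int elapsed = 0, totalWait = 0;
        for (int i = 0; i < bursts.length; i++) {
            System.out.println("P" + (i + 1) + " waits " + elapsed);
            totalWait += elapsed;
            elapsed += bursts[i];
        }
        System.out.println("average = " + (double) totalWait / bursts.length); // 17.0
    }
}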
11 ICE: FCFS Scheduling
- ICE: Compute the average waiting time for the processes below using FCFS scheduling.
12 Does FCFS Play Favorites?
- A scheduling algorithm can favor jobs by giving more CPU time to one class of jobs over another.
- Favoring long bursts means favoring CPU-bound processes (which have very long CPU bursts between I/O).
- Favoring short bursts means favoring I/O-bound processes.
- What class of jobs would we prefer that a scheduling algorithm favor?
13 Convoy Effect: Consider One CPU-Hog and Several I/O-Bound Processes
- Suppose we start out on the right foot and run the I/O-bound jobs first.
- They quickly finish their bursts and start I/O, so we run the CPU-hog.
- After a while, they finish their I/O and queue up behind the CPU-hog, leaving all the I/O devices idle.
- When the CPU-hog finishes its burst, it will start I/O, allowing us to run the other jobs.
- As before, they quickly finish their bursts and start I/O.
- Now the CPU sits idle while all the processes are doing I/O.
14 FCFS Scheduling (Convoy Effect)
- Suppose that the original processes (P1 = 24, P2 = 3, P3 = 3) had arrived at time 0 in the order P2, P3, P1.
- The Gantt chart for the schedule is: P2 from 0 to 3, P3 from 3 to 6, P1 from 6 to 30.
- Waiting times: P1 = 6, P2 = 0, P3 = 3
- Average waiting time = (6 + 0 + 3)/3 = 3
- How does this compare with the previous arrival order P1, P2, P3?
15 Shortest-Job-First (SJF) Scheduling
- Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.
- Two schemes:
- Non-preemptive: once the CPU is given to the process, it cannot be preempted until it completes its CPU burst.
- Preemptive: if a new process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt. This scheme is known as Shortest-Remaining-Time-First (SRTF).
- SJF is optimal: it can be proven that it gives the minimum average waiting time for a given set of processes.
16 Example of Non-Preemptive SJF
- Process  Arrival Time  Burst Time
- P1       0             7
- P2       2             4
- P3       4             1
- P4       5             4
- SJF (non-preemptive): P1 from 0 to 7, P3 from 7 to 8, P2 from 8 to 12, P4 from 12 to 16.
- Average waiting time = (0 + 6 + 3 + 7)/4 = 4
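The schedule above can be reproduced with a short non-preemptive SJF simulation. This sketch assumes the burst lengths are known in advance (which, as a later slide notes, they are not in practice) and that the CPU never idles, which holds for this example:

// Non-preemptive SJF sketch: when the CPU frees up, pick the arrived,
// unfinished process with the shortest (assumed known) burst.
public class SjfNonPreemptive {
    public static void main(String[] args) {
        String[]  name    = {"P1", "P2", "P3", "P4"};
        int[]     arrival = { 0,    2,    4,    5  };
        int[]     burst   = { 7,    4,    1,    4  };
        boolean[] done    = new boolean[4];

        int time = 0;
        double totalWait = 0;
        for (int scheduled = 0; scheduled < 4; scheduled++) {
            int pick = -1;
            for (int i = 0; i < 4; i++) {            // shortest burst among arrived jobs
                if (!done[i] && arrival[i] <= time
                        && (pick < 0 || burst[i] < burst[pick])) {
                    pick = i;
                }
            }
            // the CPU never idles in this example, so pick is always valid here
            totalWait += time - arrival[pick];        // time spent in the ready queue
            System.out.println(name[pick] + " runs " + time + "-" + (time + burst[pick]));
            time += burst[pick];
            done[pick] = true;
        }
        System.out.println("average waiting time = " + totalWait / 4);  // 4.0
    }
}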
17 Preemptive SJF: Shortest-Remaining-Time-First (SRTF)
- If a process arrives with a CPU burst length less than the remaining time of the currently executing process, preempt the current process.
- Whenever a job enters the ready queue, the algorithm reconsiders which job to run.
- If the arrival has a burst shorter than the remaining portion of the current burst,
- the scheduler moves the current job back to the ready queue (to the appropriate position considering the remaining time in its burst) and
- runs the new arrival instead.
18 Example of Preemptive SJF
- Process  Arrival Time  Burst Time
- P1       0             7
- P2       2             4
- P3       4             1
- P4       5             4
- SJF (preemptive): P1 from 0 to 2, P2 from 2 to 4, P3 from 4 to 5, P2 from 5 to 7, P4 from 7 to 11, P1 from 11 to 16.
- Average waiting time = (9 + 1 + 0 + 2)/4 = 3
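The preemptive schedule can be checked with a one-time-unit-at-a-time simulation, again assuming the remaining burst lengths are known:

// Preemptive SJF (SRTF) sketch: advance time one unit at a time and
// always run the arrived process with the least remaining work.
public class SrtfSketch {
    public static void main(String[] args) {
        String[] name    = {"P1", "P2", "P3", "P4"};
        int[]    arrival = { 0,    2,    4,    5  };
        int[]    burst   = { 7,    4,    1,    4  };
        int[]    remain  = burst.clone();
        int[]    finish  = new int[4];

        int left = 7 + 4 + 1 + 4;                     // total CPU work remaining
        for (int t = 0; left > 0; t++) {
            int pick = -1;
            for (int i = 0; i < 4; i++) {             // least remaining time among arrived jobs
                if (arrival[i] <= t && remain[i] > 0
                        && (pick < 0 || remain[i] < remain[pick])) {
                    pick = i;
                }
            }
            remain[pick]--;                            // run the chosen job for one time unit
            left--;
            if (remain[pick] == 0) finish[pick] = t + 1;
        }

        double totalWait = 0;
        for (int i = 0; i < 4; i++) {
            int wait = finish[i] - arrival[i] - burst[i];   // turnaround minus CPU time
            System.out.println(name[i] + " waits " + wait);
            totalWait += wait;
        }
        System.out.println("average waiting time = " + totalWait / 4);  // 3.0
    }
}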
19 Optimality of SJF
- SJF advantage: it is provable that SJF gives the minimum average waiting time for any set of jobs.
- Proof sketch: moving a short job before a long one decreases the waiting time of the short job more than it increases the waiting time of the long job; therefore, the average waiting time decreases.
20 ICE: SJF Scheduling
- Compute the average waiting time for the processes below using:
- Non-preemptive SJF.
- Preemptive SJF, aka Shortest-Remaining-Time-First (SRTF).
21 Implementing SJF or SRTF
- Problem with SJF (or SRTF): we don't know exactly how long a burst is going to be!
- Luckily, we can make a pretty good guess.
- Processes tend to be cyclic; if one burst of a process is long, there's a good chance the next burst will be long too.
- We can guess that each burst will be the same length as the previous burst of the same process.
- However, what if a process has an occasional oddball burst that is unusually long or short?
- Not only will we get that burst wrong, we will also guess wrong on the next burst, which is more typical for the process.
22 Estimating the Next Burst for SJF/SRTF
- Make each guess the average of the length of the immediately preceding burst and the guess we used before the previous burst.
- This strategy takes into account the entire past history of a process in guessing the next burst length,
- quickly adapts to changes in the behavior of the process, and
- the weight of each burst in computing the guess drops off exponentially with the time since that burst.
- Define: τ(n+1) = α · t(n) + (1 − α) · τ(n)
- where t(n) is the length of the most recent actual CPU burst, τ(n) is the previous prediction, and α (0 ≤ α ≤ 1) is the weight put on the prior actual burst.
23 Burst Prediction Example
- α = 1/2
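A sketch of this estimator in Java, using the example's α = 1/2; the burst history and the initial estimate of 10 are made up for illustration:

// Exponential averaging of CPU burst lengths:
//   nextGuess = alpha * lastActualBurst + (1 - alpha) * lastGuess
public class BurstPredictor {
    public static void main(String[] args) {
        double alpha = 0.5;                     // weight on the most recent actual burst
        double guess = 10.0;                    // assumed initial estimate, tau(0)
        double[] actualBursts = {6, 4, 6, 4, 13, 13, 13};   // illustrative history

        for (double actual : actualBursts) {
            System.out.printf("guess=%.2f  actual=%.1f%n", guess, actual);
            guess = alpha * actual + (1 - alpha) * guess;    // update the estimate
        }
        System.out.printf("next guess=%.2f%n", guess);
    }
}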
24 Priority Scheduling
- A priority number (integer) is associated with each process.
- The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority).
- Preemptive
- Non-preemptive
25 Round Robin (RR) Scheduling
- Each process gets a small unit of CPU time called a time quantum (TQ), usually 10-100 milliseconds. After this time elapses, the process is preempted and added to the end of the ready queue.
- Process  Burst Time
- P1       53
- P2       17
- P3       68
- P4       24
- With TQ = 20, the Gantt chart is: P1 (0-20), P2 (20-37), P3 (37-57), P4 (57-77), P1 (77-97), P3 (97-117), P4 (117-121), P1 (121-134), P3 (134-154), P3 (154-162).
- Typically, RR gives higher average turnaround time than SJF, but better response time.
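The Gantt chart above can be reproduced with a small round-robin simulation; all four processes are assumed to arrive at time 0:

import java.util.ArrayDeque;
import java.util.Deque;

// Round robin with TQ = 20: run the head of the queue for at most one
// quantum, then move it to the tail if it still has work left.
public class RoundRobinSketch {
    public static void main(String[] args) {
        int tq = 20;
        String[] names  = {"P1", "P2", "P3", "P4"};
        int[]    remain = { 53,   17,   68,   24 };

        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < names.length; i++) queue.add(i);

        int time = 0;
        while (!queue.isEmpty()) {
            int i = queue.poll();
            int run = Math.min(tq, remain[i]);          // run one quantum or less
            System.out.println(names[i] + " " + time + "-" + (time + run));
            time += run;
            remain[i] -= run;
            if (remain[i] > 0) queue.add(i);            // not done: back to the tail
        }
    }
}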
26 Round Robin Considerations
- If n processes are in the ready queue, then each process gets 1/n of the CPU time in chunks of no more than TQ time units.
27 How Big Should the RR Time Quantum Be?
- RR performance is related to the size of the quantum:
- TQ too large =>
- TQ too small =>
- Author =>
28 ICE: RR Scheduling
- Compute the average waiting time for the processes below using Round Robin scheduling with a TQ of 1, a TQ of 3, and a TQ of 5.
29 Multilevel Queue
- The ready queue is partitioned into separate queues:
- foreground (interactive)
- background (batch)
- Each queue has its own scheduling algorithm:
- foreground: RR (why?)
- background: FCFS (why?)
- Scheduling must also be done between the queues:
- Fixed priority scheduling (i.e., serve all from foreground, then from background). Possibility of starvation.
- Time slice: each queue gets a certain amount of CPU time which it can schedule amongst its processes, e.g.,
- 80% to foreground in RR
- 20% to background in FCFS
30 Multilevel Feedback Queues
- Groups processes with similar burst characteristics and allows jobs to move between queues.
- Several queues at different levels. Normally the lower the priority of the queue, the larger the time quantum.
- A new process starts on the highest priority queue. If it uses up a time slice (CPU bound), it goes to the next lower queue. If it is interrupted, it stays at the same level. (Feedback)
- Allows processes waiting a long time at a lower level to have their priority increased to prevent starvation. Any favoritism? (A sketch of the feedback rule follows below.)
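A minimal sketch of the feedback rule described above. The three levels and their quanta are hypothetical, and the aging/promotion path is omitted:

// Multilevel feedback sketch: a job that burns its whole quantum is
// assumed CPU-bound and demoted; a job that blocks early stays at its level.
public class MlfqSketch {
    static final int[] QUANTUM = {8, 16, 32};   // hypothetical quanta, larger at lower priority

    static int nextLevel(int level, int cpuUsed, boolean blockedBeforeQuantumEnd) {
        if (!blockedBeforeQuantumEnd && cpuUsed >= QUANTUM[level]) {
            return Math.min(level + 1, QUANTUM.length - 1);   // demote CPU-bound job
        }
        return level;                                         // interactive job stays put
    }

    public static void main(String[] args) {
        System.out.println(nextLevel(0, 8, false));   // 1: used full quantum, demoted
        System.out.println(nextLevel(0, 3, true));    // 0: blocked for I/O, stays
    }
}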
31 Thread Scheduling: Java Threads
- The JVM schedules threads using a preemptive, priority-based algorithm.
- All threads have a priority (lowest 1, highest 10, default 5).
- The JVM schedules (allows to run) the Ready-state thread with the highest priority.
- Two threads with the same priority are run FIFO.
- Threads are scheduled when:
- the currently running thread exits the Running state (blocks for I/O, finishes run(), stop(), suspend()), or
- a thread with a higher priority than the currently running thread enters the Ready state (the current thread is preempted).
32 Time Slicing: Java Threads
- The JVM takes the CPU from a thread when:
- its time slice expires (if time slicing is used),
- it exits the Running state, or
- a higher-priority thread arrives.
- The JVM does not specify whether or not threads are time-sliced.
- Time slicing is dependent on the implementation of the JVM and on the underlying architecture:
- Solaris: no time slicing
- Windows 95/98/NT/XP: time slicing used
- The JVM also runs low-priority garbage collection and keyboard/mouse input threads.
33 Thread Priorities
- Threads can have their priorities set to any level from 1..10:
- 1 is the lowest priority,
- a new thread defaults to level 5, and
- the priority can be changed explicitly:
- aThread.setPriority(7); -- or --
- aThread.setPriority(aThread.getPriority() + 1);
- Threads can voluntarily release the CPU via yield():
- the thread effectively gives the CPU to another thread of equal priority;
- if no equal-priority thread exists, yield() allows scheduling of the CPU to the next lower priority thread.
- Since we can't count on time slicing, use yield() to force a thread to share (see the sketch below).
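A small runnable example of setPriority() and yield() on ordinary Java threads. The work loop is made up, and since yield() is only a hint to the scheduler, the exact interleaving of the output is not guaranteed:

// Demonstrates setPriority() and yield() on ordinary Java threads.
public class PriorityYieldDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
                Thread.yield();                    // offer the CPU to an equal-priority thread
            }
        };

        Thread a = new Thread(work, "A");
        Thread b = new Thread(work, "B");
        a.setPriority(Thread.NORM_PRIORITY + 1);   // 6: slightly above the default of 5
        b.setPriority(Thread.NORM_PRIORITY);       // 5

        a.start();
        b.start();
        a.join();
        b.join();
    }
}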
34 Class TestScheduler
// TestThread and CircularList are helper classes defined elsewhere in the course examples.
public class TestScheduler {
    public static void main(String[] args) {
        Thread.currentThread().setPriority(Thread.MAX_PRIORITY);
        Scheduler CPUScheduler = new Scheduler();
        CPUScheduler.start();

        TestThread t1 = new TestThread("Thread 1");
        t1.start();
        CPUScheduler.addThread(t1);

        TestThread t2 = new TestThread("Thread 2");
        t2.start();
        CPUScheduler.addThread(t2);

        TestThread t3 = new TestThread("Thread 3");
        t3.start();
        CPUScheduler.addThread(t3);
    }
}  // end class TestScheduler
35 Class Scheduler
public class Scheduler extends Thread {
    private CircularList queue;
    private int timeSlice;
    private static final int DEFAULT_TIME_SLICE = 1000;  // 1 sec

    public Scheduler() {
        timeSlice = DEFAULT_TIME_SLICE;
        queue = new CircularList();
    }

    public Scheduler(int quantum) {
        timeSlice = quantum;
        queue = new CircularList();
    }

    public void addThread(Thread t) {
        queue.addItem(t);
    }

    private void schedulerSleep() {
        try { Thread.sleep(timeSlice); }
        catch (InterruptedException e) { }
    }

    public void run() {
        Thread current;
        // set the priority of the scheduler above the threads it manages
        this.setPriority(6);
        while (true) {
            try {
                current = (Thread) queue.getNext();
                if ((current != null) && (current.isAlive())) {
                    current.setPriority(4);      // raise the chosen thread so it runs
                    schedulerSleep();            // let it run for one time slice
                    System.out.println("Context Switch");
                    current.setPriority(2);      // lower it again when the slice ends
                }  // end if
            } catch (NullPointerException npe) { }
        }  // while
    }  // run
}  // class Scheduler
36 Scheduling Algorithm Evaluation
- Deterministic modeling: use the algorithms and a "canned" workload.
- Plug and chug on the algorithm choices using this specific workload and see which is best.
- (This is what we have been doing with Gantt charts.)
- What are the pros and cons of deterministic modeling?
37 Evaluation via Simulation or Implementation
- Simulation evaluation: model an algorithm using software.
- Use a random number generator (and historical probability distributions) to generate arriving processes and burst times.
- Develop a "trace tape" of what the system state progression looks like.
- Run the same "trace tape" on each of your scheduling algorithms, gathering statistics that show algorithm performance (a toy sketch of this idea follows below).
- Implementation evaluation: code up the OS and put it into use.
- Costly, but gives "real use" performance results.
- Most OSs are tunable, allowing you to change scheduling options, such as increasing the priority of the paycheck printer on payday.
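As a toy illustration of the simulation idea, the sketch below generates one random "trace" of burst lengths and feeds the same trace to FCFS (arrival order) and SJF (shortest first), then compares average waiting time. All jobs are assumed to arrive at time 0, which is a big simplification of a real trace tape:

import java.util.Arrays;
import java.util.Random;

// Toy simulation-style comparison: the same randomly generated burst
// trace is fed to FCFS (arrival order) and SJF (sorted order), and the
// average waiting times are compared. All jobs arrive at time 0.
public class SimCompare {
    static double avgWait(int[] bursts) {
        int elapsed = 0, total = 0;
        for (int b : bursts) {       // each job waits for everything ahead of it
            total += elapsed;
            elapsed += b;
        }
        return (double) total / bursts.length;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int[] trace = rng.ints(10, 1, 30).toArray();   // one "trace tape" of 10 bursts

        int[] sjfOrder = trace.clone();
        Arrays.sort(sjfOrder);                          // SJF runs shortest bursts first

        System.out.println("FCFS average wait: " + avgWait(trace));
        System.out.println("SJF  average wait: " + avgWait(sjfOrder));
    }
}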