1
SCHEDULING
  • Kshama Desai
  • Bijal Shah
  • Kishore Putta
  • Kashyap Sheth

2
SCHEDULING
  • Scheduling manages CPU allocation among a group
    of ready processes and threads.
  • Concept of Scheduling
  • Based on the mechanism of context switching.
  • Driven by the requirement of processes and
    threads to share resources.

3
Multiprogramming
  • Allows many processes to be loaded and the CPU to
    be time-multiplexed among their threads.
  • Per CPU, a single thread executes at any given
    time.
  • Motivated by the need for threads to perform I/O
    operations concurrently with computation.
  • High CPU multiplexing rate gives the impression
    of concurrent execution of processes / threads.

4
Scheduling Policies and Mechanism
  • Scheduling Policy determines the thread to which
    the CPU should be allocated and its time of
    execution.
  • Scheduling mechanisms determine how the process
    manager multiplexes the CPU and changes the state
    of each thread.
  • Thread Scheduling
  • The scheduler determines the transition of a
    thread from the running state to the ready state.

5
  • Steps in Thread Scheduling
  • Wait in Ready list for CPU allocation.
  • On CPU allocation, state of the thread changes
    from ready to running state.
  • During execution, the thread waits in a resource
    manager's pool whenever it requests an
    unavailable resource.
  • After its execution is complete, it leaves the
    system; otherwise it returns to the ready state.

6
  • Simpler Processor Scheduling Model

(Diagram: new threads and ready jobs enter the ready
list; the scheduler dispatches a job to the CPU; a
running job either finishes (done), is preempted or
yields back to the ready list, or requests a resource
from the resource manager and is blocked until the
resource is allocated.)
7
  • A running thread ceases to use the CPU for one of
    the following reasons (the state transitions are
    sketched below)
  • The thread completes execution and leaves the
    system.
  • The thread requests an unavailable resource; its
    state changes to blocked. When the resource
    becomes available, its state changes back to
    ready.
  • The thread voluntarily decides to release the CPU.
  • The system preempts the thread by changing its
    state from running to ready.
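These transitions can be pictured as a small state machine. The sketch below is only an illustration; the State and Thread names are hypothetical and not part of any OS interface.

```python
# Minimal sketch of the thread states and transitions described above.
from enum import Enum, auto

class State(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    DONE = auto()

class Thread:
    def __init__(self, name):
        self.name = name
        self.state = State.READY       # new threads wait in the ready list

    def dispatch(self):                # scheduler allocates the CPU
        self.state = State.RUNNING

    def block(self):                   # requested an unavailable resource
        self.state = State.BLOCKED

    def unblock(self):                 # resource became available
        self.state = State.READY

    def preempt_or_yield(self):        # preempted by the system or yields
        self.state = State.READY

    def finish(self):                  # execution complete, leaves the system
        self.state = State.DONE
```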

8
Scheduling Mechanisms
  • The scheduler is implemented partly in hardware
    and partly in the OS software.
  • The scheduler uses three mechanisms
  • a) Enqueuer b) Context switcher c) Dispatcher
  • Sequence of the scheduling mechanism (sketched
    below)
  • The enqueuer places a process into the ready list
    and decides its priority.
  • The context switcher removes a process from the
    CPU and brings in a new one.
  • The dispatcher allocates the CPU to the new
    incoming process.
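A rough sketch of how the three elements cooperate. This is illustrative Python, not real OS code; the enqueue, context_switch, and dispatch names and the dict-based process/CPU records are assumptions.

```python
# Illustrative scheduler skeleton: enqueuer, context switcher, dispatcher.
from collections import deque

ready_list = deque()                     # ready list maintained by the enqueuer

def enqueue(process):
    """Enqueuer: place the process into the ready list (priority decided here)."""
    ready_list.append(process)

def context_switch(cpu, new_process):
    """Context switcher: save the outgoing context, load the incoming one."""
    old = cpu["current"]
    if old is not None:
        old["registers"] = cpu["registers"]        # save outgoing registers
    cpu["registers"] = new_process["registers"]    # restore incoming registers
    cpu["current"] = new_process
    return old

def dispatch(cpu):
    """Dispatcher: allocate the CPU to the next ready process."""
    if ready_list:
        return context_switch(cpu, ready_list.popleft())

cpu = {"current": None, "registers": {}}
enqueue({"name": "P1", "registers": {}})
dispatch(cpu)                            # P1 is now the running process
```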

9
CPU SHARING
  • Two types of Schedulers
  • Voluntary CPU sharing (Non-preemptive scheduler)
  • Involuntary CPU sharing (Preemptive scheduler)
  • Voluntary CPU Sharing
  • The hardware includes a special yield machine
    instruction.
  • The address of the next instruction is saved in a
    designated memory location.

10
  • Disadvantage of the yield instruction
  • If a process fails to execute the yield
    instruction periodically, it blocks other
    processes from using the CPU until it exits or
    requests a resource.
  • The solution to this problem is for the system to
    generate a self-interrupt (see the cooperative
    scheduling sketch below).
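The yield idea can be mimicked in user code with Python generators: each task explicitly hands control back to a scheduler loop. This is only an analogy for voluntary CPU sharing, not how the hardware yield instruction works; the task names and step counts are made up.

```python
# Cooperative scheduling analogy: every task must yield voluntarily.
from collections import deque

def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield                       # voluntary yield: hand the CPU back

run_queue = deque([task("A", 3), task("B", 2)])
while run_queue:
    current = run_queue.popleft()
    try:
        next(current)               # run the task until it yields
        run_queue.append(current)   # back to the end of the ready list
    except StopIteration:
        pass                        # task finished and leaves the system
```

If a task never reaches its yield, the whole loop stalls, which mirrors the disadvantage described above.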

11
  • Involuntary CPU sharing
  • Incorporates an interval timer device.
  • The interval of time is decided by the system
    programmer.
  • The interval timer invokes the scheduler
    periodically.
  • Advantage
  • A process executing in an infinite loop cannot
    block other processes from using the CPU.
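As a user-level analogy for the interval timer (Unix-only, and only an illustration of the mechanism, not kernel code), a periodic SIGALRM can play the role of the timer interrupt:

```python
# Sketch: a periodic timer "interrupt" forces a scheduling decision.
import signal

preempt = False

def on_timer(signum, frame):
    global preempt
    preempt = True                   # in a real OS, the scheduler runs here

signal.signal(signal.SIGALRM, on_timer)
signal.setitimer(signal.ITIMER_REAL, 0.05, 0.05)   # fire every 50 ms

switches = 0
while switches < 5:                  # even a busy loop cannot hog the CPU forever
    if preempt:
        preempt = False
        switches += 1                # stand-in for switching to another thread

signal.setitimer(signal.ITIMER_REAL, 0)             # cancel the timer
```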

12
Performance
  • The scheduler affects the performance of a
    multiprogrammed CPU to a great extent.
  • Unbalanced process selection for CPU execution by
    a scheduler leads to starvation. This motivates
    careful selection of a scheduling strategy.

13
Strategy Selection
  • The criteria for selecting a scheduling strategy
    depend on the goals of the OS.
  • Scheduling algorithms for modern OSs use internal
    priorities.
  • Involuntary CPU sharing uses a time quantum
    (timeslice) length. An optimal schedule can be
    computed, provided no new process enters the
    ready list while the processes already present
    are being served.

14
Model For Scheduling
  • Performance metrics used to compare scheduling
    strategies (formulated below)
  • Service time
  • Wait time
  • Turnaround time
  • The process model and these metrics are used to
    compare the performance characteristics of each
    algorithm.
  • The general model must fit each specific class of
    OS environment.
  • Turnaround time is the most critical performance
    metric in a batch multiprogrammed system.
  • Time-sharing systems focus on a single phase of
    thread execution.
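For the worked examples on the following slides, every process arrives at time 0 and the processes are indexed in the order they are served, so the metrics reduce to the sums below. This formulation is inferred from the examples rather than stated explicitly on the slides.

```latex
% Turnaround time of P_i: its completion time (arrival is at time 0),
% i.e. its service time plus the turnaround of the previously served process.
TTRnd(P_i) = t(P_i) + TTRnd(P_{i-1}), \qquad TTRnd(P_0) = t(P_0)

% Wait time of P_i: time spent in the ready list before its first run.
W(P_i) = TTRnd(P_{i-1}), \qquad W(P_0) = 0
```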

15
  • Simpler Processor Scheduling Model

(Diagram repeated from slide 6: the processor
scheduling model with the ready list, scheduler, CPU,
and resource manager.)
16
Types of Scheduling Methods
  • Non-preemptive
  • Preemptive

17
Non-Preemptive
  • Non-Pre-emptive
  • Once a process enters the running state, it is
    not removed from the processor until it has
    completed its service time.
  • There are four non-preemptive strategies
  • First Come First Serve (FCFS)
  • Shortest Job Next (SJN)
  • Priority Scheduling
  • Deadline Scheduling

18
Pre-emptive
  • Based on prioritized computation.
  • The process with the highest priority should
    always be the one using the CPU.
  • If a process is currently using the CPU and a new
    process with a higher priority enters the ready
    list, the process on the processor is removed and
    returned to the ready list until it is once again
    the highest-priority process in the system.

19
First Come First Serve (FCFS)
  • Processes are assigned the CPU in the order they
    request it.
  • If the running process blocks, the first process
    on the queue is run next. When the blocked
    process becomes ready, it is put at the end of
    the queue, like a newly arrived job.

20
FCFS Example
  • Example load (all processes arrive at time 0)
  • t(P0) = 350, t(P1) = 125, t(P2) = 475,
    t(P3) = 250, t(P4) = 75

21
FCFS Example
  • Turnaround Time
  • Average of the finishing times
  • TTRnd(P0) = t(P0) = 350
  • TTRnd(P1) = t(P1) + TTRnd(P0) = 125 + 350 = 475
  • TTRnd(P2) = t(P2) + TTRnd(P1) = 475 + 475 = 950
  • TTRnd(P3) = t(P3) + TTRnd(P2) = 250 + 950 = 1200
  • TTRnd(P4) = t(P4) + TTRnd(P3) = 75 + 1200 = 1275
  • Average turnaround time
  • TTRnd = (350 + 475 + 950 + 1200 + 1275)/5
  •       = 4250/5
  •       = 850

22
FCFS Example
  • Wait Time
  • Average time spent waiting before the first run
  • From the Gantt chart
  • W(P0) = 0
  • W(P1) = TTRnd(P0) = 350
  • W(P2) = TTRnd(P1) = 475
  • W(P3) = TTRnd(P2) = 950
  • W(P4) = TTRnd(P3) = 1200
  • Average wait time
  • W = (0 + 350 + 475 + 950 + 1200)/5 = 2975/5
      = 595
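The same figures can be reproduced with a few lines of Python. This is just a sketch that assumes the five processes arrive at time 0 with the service times listed above.

```python
# FCFS: processes run in arrival order; all arrive at time 0.
service = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}

clock, turnaround, wait = 0, {}, {}
for name, t in service.items():        # dict preserves insertion (arrival) order
    wait[name] = clock                 # time spent waiting before the first run
    clock += t
    turnaround[name] = clock           # completion time (arrival was at 0)

print(sum(turnaround.values()) / len(service))   # 850.0
print(sum(wait.values()) / len(service))         # 595.0
```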

23
Advantages and Disadvantages of FCFS
  • Advantages
  • Easy to understand and easy to program
  • It is fair
  • Requires only a single linked list to keep track
    of all ready processes
  • Disadvantages
  • Does not perform well in real systems
  • Ignores the requested service time and all other
    criteria that may influence performance with
    respect to turnaround or waiting time.

24
Shortest Job Next (SJN)
  • Each process is associated with the length of its
    service time.
  • The ready queue is maintained in order of
    increasing service time.
  • When the current process is done, the process at
    the head of the queue is picked and run.

25
SJN Example
  • Example load same service times as the FCFS
    example, served in order of increasing service
    time (P4, P1, P3, P0, P2)

Average turnaround time TTRnd = (800 + 200 + 1275 +
450 + 75)/5 = 2800/5 = 560
Average wait time W = (450 + 75 + 800 + 200 + 0)/5 =
1525/5 = 305
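Sorting the same workload by service time reproduces these numbers. As with the FCFS sketch, this assumes all processes arrive at time 0.

```python
# SJN: same workload, served in order of increasing service time.
service = {"P0": 350, "P1": 125, "P2": 475, "P3": 250, "P4": 75}

clock, turnaround, wait = 0, {}, {}
for name, t in sorted(service.items(), key=lambda kv: kv[1]):
    wait[name] = clock
    clock += t
    turnaround[name] = clock

print(sum(turnaround.values()) / len(service))   # 560.0
print(sum(wait.values()) / len(service))         # 305.0
```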
26
Advantages and Disadvantages of SJN
  • Advantages
  • Minimizes wait time
  • Disadvantages
  • Long-running threads may starve
  • NOTE
  • It requires prior knowledge of service time
  • Optimal only when all the jobs are available
    simultaneously.

27
Priority Scheduling
  • Each process is assigned a priority, and the
    runnable process with the highest priority is
    allowed to run.
  • Priorities can be assigned statically or
    dynamically.
  • With static priorities, starvation is possible.
  • Dynamic (internal) priorities solve the problem
    of starvation (see the aging sketch below).
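One common way to make priorities dynamic is aging: the longer a process waits, the higher its priority becomes. The sketch below is an illustration of that idea; the AGE_BOOST constant and the dict-based process records are assumptions, not from the slides.

```python
# Aging sketch: waiting processes gain priority, so none starves forever.
AGE_BOOST = 1            # priority gained per scheduling round spent waiting

def pick_next(ready):
    """ready: list of dicts with 'name' and 'priority' (higher runs first)."""
    ready.sort(key=lambda p: p["priority"], reverse=True)
    chosen = ready.pop(0)            # highest-priority process runs
    for p in ready:
        p["priority"] += AGE_BOOST   # everyone left waiting ages a little
    return chosen
```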

28
Priority Scheduling
  • Example load same service times as before;
    priorities dictate the run order P3, P1, P2, P4,
    P0

Average turnaround time TTRnd = (1275 + 375 + 850 +
250 + 925)/5 = 3675/5 = 735
Average wait time W = (925 + 250 + 375 + 0 + 850)/5 =
2400/5 = 480
29
Deadline Scheduling
  • For hard real-time systems, process must finish
    by a certain time
  • Turnaround time and wait times are irrelevant
  • We need to know the maximum service time for each
    process
  • The deadline must be met in each period of a
    process's life

30
Deadline Scheduling Example
  • Example load

(Gantt chart: the processes are scheduled in the
order P4, P0, P2, P3, P1, with time markers at 0, 75,
200, 550, 1025, and 1275.)
31
Round Robin
  • Most widely used scheduling algorithm
  • It tries to be fair by equally distributing the
    processing time among all the processes
  • When a process uses up its quantum, it is put at
    the end of the ready list

32
How to choose the length of the quantum?
  • Setting the quantum length too short causes many
    process switches and lowers the CPU efficiency
  • Setting the quantum length too long may cause
    poor response to short interactive requests
  • Solution
  • Around 20-50 msec is a reasonable compromise.
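A round-robin sketch using the same workload as the earlier examples. The 50-time-unit quantum is an arbitrary choice for illustration, not a value from the slides.

```python
# Round robin: each process runs for at most one quantum, then goes to the back.
from collections import deque

QUANTUM = 50
remaining = deque([("P0", 350), ("P1", 125), ("P2", 475), ("P3", 250), ("P4", 75)])

clock, finish = 0, {}
while remaining:
    name, left = remaining.popleft()
    run = min(QUANTUM, left)
    clock += run
    if left - run > 0:
        remaining.append((name, left - run))   # quantum used up: back of the list
    else:
        finish[name] = clock                   # process is done

print(finish)   # completion time of each process
```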

33
Multiple-Level Queues
  • It is an extension of priority scheduling
  • The ready queue is partitioned into separate
    queues
  • Foreground
  • Background
  • Scheduling must be done between the queues
  • It uses two scheduling strategies (see the sketch
    below)
  • One to select the queue
  • Another to select the process within the queue,
    such as round robin
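A minimal sketch of the two-queue idea, assuming a strict foreground-first rule between the queues; the queue names and that rule are illustrative assumptions.

```python
# Multiple-level queues: pick the queue first, then a process within it.
from collections import deque

foreground = deque()     # e.g. interactive processes, scheduled round robin
background = deque()     # e.g. batch processes, scheduled FCFS

def pick_next():
    if foreground:                    # queue-selection strategy: foreground first
        return foreground.popleft()   # caller re-enqueues it after its quantum
    if background:
        return background.popleft()   # background runs only when the
                                      # foreground queue is empty
    return None
```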

34
LINUX Scheduling Mechanism
  • Linux threads are kernel threads, hence scheduling
    is based on threads.
  • It is based on time-sharing techniques.
  • Each thread has a scheduling priority.
  • The default value is 20, but it can be altered
    with the nice(value) system call to a value of
    20 - value (see the sketch below).
  • value must be in the range -20 to 19, hence the
    priority ranges between 1 and 40.
  • Quality of service is proportional to priority.
  • The scheduler keeps track of what processes are
    doing and adjusts their priorities periodically,
    i.e., the priorities are dynamic.
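From user space on Unix-like systems, a process can change its own nice value; Python exposes this as os.nice. A tiny sketch (note that an unprivileged process can normally only increase its nice value, i.e. lower its priority):

```python
# Make the current process "nicer" (lower its scheduling priority) on Unix.
import os

print("nice value before:", os.nice(0))   # an increment of 0 just reports the value
print("nice value after :", os.nice(5))   # raise nice by 5, lowering priority
```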

35
Windows NT Scheduling Mechanism
  • Windows NT is a pre-emptive multithreading OS.
  • Here too, the unit of scheduling is the thread.
  • It uses 32 numerical thread priorities, 0 to 31,
    with 0 reserved for system use.
  • 16 to 31 are used by time-critical operations.
  • 1 to 15 (dynamic priorities) are used by program
    threads of typical applications.

36
References
  • Nutt, Gary. Operating Systems. Third Edition,
    Pearson Education Inc, 2004.
  • Tanenbaum, Andrew. Modern Operating Systems,
    Prentice-Hall Of India Pvt. Ltd.
  • http://www.windowsitpro.com/Article/ArticleID/302/302.html