Operating System - PowerPoint PPT Presentation
(Source: http://www.cs.tcd.ie/Jonathan.Dukes, 37 slides)
1
Operating System
  • CPU Scheduling

2
Multiprogramming
  • Only one process can execute at a time on a
    single processor system
  • A process is typically executed until it is
    required to wait for some event
  • e.g. I/O, timer, resource
  • When a process's execution is stalled, we want to
    maximise CPU utilization by executing another
    process on the CPU
  • CPU I/O burst cycle
  • Processes alternate between bursts of CPU and I/O
    activity
  • CPU burst times have an exponential frequency
    curve with many short bursts and very few long
    bursts
  • CPU scheduler
  • When the CPU becomes idle, the operating system
    must select another process to execute
  • This is the role of the short-term scheduler or
    CPU scheduler

3
CPU burst time
4
Processes and Threads
  • A process is a running program: code, data,
    registers, program counter, stack (or heap for
    dynamically allocated memory), file handles and
    other resources
  • A thread is the basic unit of CPU utilization:
    thread ID, program counter, registers and stack
  • A process is described by the OS as a Process
    Control Block

5
Process state transitions
  • Process state transitions
  • Only one process can be in the running state on a
    single processor at any time
  • Multiple processes may be in the waiting or ready
    states

[State diagram: new -(admitted)-> ready -(dispatch to
CPU)-> running -(exit)-> terminated; running
-(interrupt)-> ready; running -(wait, e.g. for I/O)->
waiting -(event, e.g. I/O completion)-> ready]
6
Process execution
  • CPU switch from process to process

7
Process Execution
The ready queue of various I/O devices
8
Process Execution
  • Process scheduling and queues
  • Short-, long- and medium-term schedulers
  • Speed problem (10 ms - 100 ms); quantum example
  • CPU-bound and I/O-bound processes

9
Process execution
  • Child processes
  • fork(), exec(), wait()
  • Two strategies: UNIX fork() vs. Win32
    CreateProcess(); the child process may or may not
    be a copy of the parent process
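As a hedged illustration of the fork()/wait() pattern (POSIX only; the exit status below is invented for the example), Python's os module exposes the same primitives:

```python
import os

pid = os.fork()              # child begins as a copy of the parent
if pid == 0:
    # child: os.execvp("ls", ["ls"]) would replace this copy with a new
    # program; here we just terminate with a status the parent can read
    os._exit(42)
child, status = os.waitpid(pid, 0)   # parent blocks until the child exits
print(os.WEXITSTATUS(status))        # 42
```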

10
CPU Scheduler
  • CPU scheduler (continued)
  • Candidate processes for scheduling are those
    processes that are in memory and are ready to
    execute
  • Candidate processes are stored on the ready queue
  • The ready queue is not necessarily a FIFO queue,
    but depends on the implementation of the
    scheduler
  • FIFO queue, priority queue, unordered list,
  • Scheduling may be performed in any of these
    situations:
  • 1. When a process leaves the running state and
    enters the waiting state, waiting for an event
    (e.g. completion of an I/O operation)
  • 2. When a process leaves the running state and
    enters the ready state (e.g. when an interrupt
    occurs)
  • 3. When a process leaves the waiting state and
    enters the ready state
  • 4. When a process terminates

11
Preemptive and nonpreemptive scheduling
  • Scheduling must be performed in cases 1 and 4
  • If scheduling only takes place in cases 1 and 4,
    then the scheduling is non-preemptive
  • Otherwise the scheduling is preemptive
  • With a non-preemptive scheme
  • Once a process has entered the running state on a
    CPU, it will remain running until it relinquishes
    the CPU to another process
  • Preemptive scheduling incurs a cost
  • Access to shared data?
  • Synchronization of access to kernel data
    structures?

12
Scheduling criteria
  • CPU utilization
  • Maximize CPU utilization
  • Throughput
  • Maximize number of processes completed per unit
    time
  • (e.g. transaction-based systems)
  • Turnaround time
  • Minimize the time taken to execute a process
  • Waiting time
  • Minimize the time spent on the ready queue
  • Response time
  • Minimize time to beginning of response

13
Scheduling algorithms (review)
  • First-Come First-Served (FCFS) non-preemptive
  • The process that requests the CPU first is the
    first to be scheduled
  • Implemented with a FIFO ready queue
  • When a process enters the ready state, it is
    placed at the tail of the FIFO ready queue
  • When the CPU is free and another process must be
    dispatched to run, the process at the head of the
    queue is dispatched and the running process is
    removed from the queue
  • The average waiting time under FCFS scheduling
    can be quite long and depends on the CPU burst
    times of the processes
  • Consider three processes P1, P2 and P3 that
    appear on the FCFS ready queue in the order P1,
    P2 and P3 and with burst times of 24, 3 and 3
    milliseconds respectively
  • Average waiting time = (0 + 24 + 27) / 3 = 17 ms

[Gantt chart: P1 0-24, P2 24-27, P3 27-30]
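The FCFS arithmetic above can be checked with a short sketch (the function name is ours, not from the slides):

```python
def fcfs_waits(bursts):
    """Per-process waiting times under FCFS; arrival order = list order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)    # a process waits for all bursts before it
        elapsed += burst
    return waits

waits = fcfs_waits([24, 3, 3])           # order P1, P2, P3
print(waits, sum(waits) / len(waits))    # [0, 24, 27] 17.0
```

Reordering to P2, P3, P1 (`fcfs_waits([3, 3, 24])`) gives waits [0, 3, 6] and an average of 3 ms, matching the next slide.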
14
Scheduling algorithms (review)
  • First-Come First-Served (FCFS) (continued)
  • Now consider the same processes, but in the order
    P2, P3, P1 on the ready queue
  • Average waiting time = (0 + 3 + 6) / 3 = 3 ms
  • Another example one CPU bound process and many
    I/O bound processes
  • The CPU bound process will obtain and hold the
    CPU for a long time, during which the I/O bound
    processes complete their I/O and enter the ready
    queue
  • The CPU bound process eventually blocks on an
    event, the I/O bound processes are dispatched in
    order and block again on I/O
  • The CPU now remains idle until the CPU bound
    process is again scheduled and the I/O bound
    processes again spend a long time waiting

[Gantt chart: P2 0-3, P3 3-6, P1 6-30]
15
Scheduling algorithms (review)
  • Shortest-Job-First preemptive or non-preemptive
  • Given a set of processes, the process with the
    shortest next CPU burst is selected to execute on
    the CPU
  • Optimizes minimum average waiting time
  • There is an obvious problem with this: how do we
    know how long the next CPU burst will be for each
    process?
  • We can use past performance to predict future
    behaviour
  • Estimate the next CPU burst time as an
    exponential average of the lengths of previous
    CPU bursts
  • Let tn be the length of the nth CPU burst for a
    process and let τn+1 be our estimated next CPU
    burst time
  • τn+1 = α·tn + (1 − α)·τn , 0 ≤ α ≤ 1
  • If α = 0, then τn+1 = τn and the current
    prediction is equal to the last prediction
  • If α = 1, then τn+1 = tn and the next CPU burst
    time is estimated to be the length of the last
    CPU burst
  • We need to set τ0 to an initial constant value
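A minimal sketch of the exponential-average predictor (the burst sequence and initial estimate below are made-up numbers for illustration):

```python
def predict(bursts, tau0=10.0, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau, history = tau0, [tau0]
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
        history.append(tau)      # estimate used for the *next* burst
    return history

print(predict([6, 4, 6, 4]))     # [10.0, 8.0, 6.0, 6.0, 5.0]
```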

16
Exponential Average
  • Prediction of the length of the next CPU burst
    using exponential average.
  • The effect of α (how quickly the estimate reacts
    to recent history)

17
Scheduling algorithms (review)
  • Shortest-Job-First preemptive or non-preemptive
  • To see how the value of τn+1 reflects past
    history, we can expand the formula by repeatedly
    substituting for τn:
  • τn+1 = α·tn + (1 − α)·α·tn−1 + … +
    (1 − α)^j·α·tn−j + … + (1 − α)^(n+1)·τ0
  • α is typically given the value 0.5
  • SJF can be either preemptive or non-preemptive
  • Preemptive SJF will preempt the currently
    executing process if a process joins the ready
    queue with a smaller next CPU burst time
  • Non-preemptive SJF scheduling will allow the
    currently executing process to complete its CPU
    burst before scheduling the next process
  • Example: consider four processes
  • P1: arrival time 0, next CPU burst time 8
  • P2: arrival time 1, next CPU burst time 4
  • P3: arrival time 2, next CPU burst time 9
  • P4: arrival time 3, next CPU burst time 5
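The example can be worked through with a small preemptive-SJF (shortest-remaining-time-first) simulator; this is our sketch, not code from the slides:

```python
import heapq

def srtf_avg_wait(procs):
    """Preemptive SJF (shortest remaining time first).
    procs: (arrival, burst) pairs sorted by arrival time."""
    n, i, time = len(procs), 0, 0
    ready, waits = [], []            # heap entries: (remaining, arrival, burst)
    while len(waits) < n:
        while i < n and procs[i][0] <= time:
            arr, burst = procs[i]
            heapq.heappush(ready, (burst, arr, burst))
            i += 1
        if not ready:                # CPU idle until the next arrival
            time = procs[i][0]
            continue
        rem, arr, burst = heapq.heappop(ready)
        next_arrival = procs[i][0] if i < n else float("inf")
        run = min(rem, next_arrival - time)  # run to completion or next arrival
        time += run
        if run == rem:
            waits.append(time - arr - burst)  # wait = finish - arrival - burst
        else:
            heapq.heappush(ready, (rem - run, arr, burst))
    return sum(waits) / n

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # 6.5
```

Preemption is only possible when a new process arrives, so re-evaluating the heap at arrivals and completions is sufficient.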

18
Scheduling algorithms (review)
19
Scheduling algorithms (review)
  • Priority scheduling preemptive or
    non-preemptive
  • Shortest-Job-First is a special case of a more
    general priority-based scheduling scheme
  • Each process has an associated priority
  • The process with the highest priority is executed
    on the CPU
  • Processes with the same priority are executed in
    the order in which they appear on the ready queue
    (FCFS order)
  • What do we mean by high and low priority?
  • We'll assume high-priority processes have lower
    priority values than low-priority processes
  • (the convention doesn't matter)
  • The priority of a process can be defined either
    internally or externally
  • Internally defined priorities are based on
    measurements associated with the process, such as
    the ratio of I/O burst times to CPU burst times,
    the estimated next CPU burst time or the time
    since last scheduling
  • Externally defined priorities are based on
    properties external to the operating system: user
    status, wealth, political affiliation, ...

20
Scheduling algorithms (review)
  • Priority scheduling preemptive or
    non-preemptive
  • Like SJF scheduling, priority scheduling can be
    either preemptive or non-preemptive
  • Preemptive: the currently executing process is
    preempted if another higher priority process
    arrives on the ready queue
  • Non-preemptive: the currently executing process
    is allowed to complete its current CPU burst
  • Priority scheduling has the potential to starve
    low priority processes of access to the CPU
  • A possible solution is to implement an aging
    scheme
  • Periodically decrement the priority value of
    waiting processes (increasing their priority)

21
Scheduling algorithms (review)
  • Round-Robin Scheduling
  • First-Come, First-Served with preemption
  • Define a time quantum or time slice, T
  • Every T seconds, preempt the currently executing
    process, place it at the tail of the ready queue
    and schedule the process at the head of the ready
    queue
  • New processes are still added to the tail of the
    ready queue when they are created
  • The currently executing process may relinquish
    the CPU before it is preempted (e.g. by
    performing I/O or waiting for some other event)
  • The next process at the head of the ready queue
    is scheduled and a new time quantum is started
  • For a system with n processes and time quantum T,
    there is an upper limit on the time a process
    waits before it is given a time quantum on the
    CPU:
  • (n − 1) × T
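A sketch of the Round-Robin dispatch loop (all processes assumed to arrive at time 0; the burst values reuse the earlier FCFS example):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Completion time of each process under RR; all arrive at time 0."""
    queue = deque(enumerate(bursts))     # (pid, remaining burst)
    time, finish = 0, {}
    while queue:
        pid, rem = queue.popleft()
        run = min(rem, quantum)          # run one time slice at most
        time += run
        if run == rem:
            finish[pid] = time
        else:
            queue.append((pid, rem - run))   # back to the tail of the queue
    return [finish[p] for p in range(len(bursts))]

print(round_robin([24, 3, 3], quantum=4))    # [30, 7, 10]
```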

22
Scheduling algorithms (review)
  • Round-Robin Scheduling
  • The performance of Round-Robin scheduling depends
    on the size of the time quantum, T
  • For very large values of T, the performance (and
    behaviour) of Round-Robin scheduling approaches
    that of FCFS
  • As T approaches zero, processes execute just one
    instruction per quantum
  • Obviously for very small values of T, the time
    taken to perform a context switch needs to be
    considered
  • T should be large with respect to the time
    required to perform a context switch

23
Turnaround Time and Time quantum
  • Turnaround Time and Time quantum.

24
Scheduling algorithms (review)
  • Multi-level queue scheduling
  • Often we can group processes together based on
    their characteristics or purpose
  • system processes
  • interactive process
  • low priority interactive processes
  • background processes

25
Scheduling algorithms (review)
  • Multi-level queue scheduling
  • Processes are permanently assigned to these
    groups and each group has an associated queue
  • Each process group queue has an associated
    queuing discipline
  • FCFS, SJF, priority queuing, Round-Robin,
  • Additional scheduling is performed between the
    queues
  • Often implemented as fixed-priority pre-emptive
    scheduling
  • Each queue may have absolute priority over lower
    priority queues
  • In other words, no process in the background
    queue can execute while there are processes in
    the interactive queue
  • If a background process is executing when an
    interactive process arrives in the interactive
    queue, the background process will be preempted
    and the interactive process will be scheduled
  • Alternatively, each queue could be assigned a
    time slice
  • For example, we might allow the interactive queue
    80 of the CPU time, leaving the remaining 20
    for the other queues

26
Scheduling algorithms (review)
  • Multi-level feedback queue scheduling
  • Unlike multi-level queue scheduling, this scheme
    allows processes to move between queues
  • Allows I/O bound processes to be separated from
    CPU bound processes for the purposes of
    scheduling
  • Processes using a lot of CPU time are moved to
    lower priority queues
  • Processes that are I/O bound remain in the higher
    priority queues
  • Processes with long waiting times are moved to
    higher priority queues
  • Aging processes in this way prevents starvation
  • Higher priority queues have absolute priority
    over lower priority queues
  • A process arriving in a high-priority queue will
    preempt a running process in a lower priority
    queue

[Figure: three feedback queues — quantum T = 8,
quantum T = 16, FCFS]
27
Scheduling algorithms (review)
  • Multi-level feedback queue scheduling
  • When a process arrives on the ready queue (a new
    process or a process that has completed waiting
    for an event), it is placed on the highest
    priority queue
  • Processes in the highest priority queue have a
    quantum of 8ms
  • If a high priority process does not end its CPU
    burst within 8ms, it is preempted and placed on
    the next lowest priority queue
  • When the highest priority queue is empty,
    processes in the next queue are given 16ms time
    slices, and the same behaviour applies
  • Finally, processes in the bottom queue are
    serviced in FCFS order
  • We can change
  • number of queues
  • queuing discipline
  • rules for moving process to higher or lower
    priority queues
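The three-queue scheme above can be sketched as follows (a simplification: all processes arrive at time 0, so the preemption-on-arrival rule never fires; the burst values are invented):

```python
from collections import deque

def mlfq(bursts, quanta=(8, 16)):
    """Multi-level feedback queue sketch: two RR levels plus FCFS at the
    bottom. All processes arrive at time 0; returns completion times."""
    levels = [deque(enumerate(bursts)), deque(), deque()]
    time, finish = 0, {}
    while any(levels):
        lvl = next(i for i, q in enumerate(levels) if q)  # highest non-empty
        pid, rem = levels[lvl].popleft()
        run = rem if lvl == 2 else min(rem, quanta[lvl])  # bottom is FCFS
        time += run
        if run == rem:
            finish[pid] = time
        else:   # used the whole quantum: demote to the next lower queue
            levels[lvl + 1].append((pid, rem - run))
    return [finish[p] for p in range(len(bursts))]

print(mlfq([30, 4, 20]))   # [54, 12, 48]
```

The short (I/O-bound-like) burst of 4 ms finishes in the top queue, while the long CPU-bound bursts sink to the lower queues, as the slide describes.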

28
Evaluating scheduling algorithms
  • Deterministic modelling
  • Given a pre-defined workload, we can compare the
    performance of different scheduling schemes by
    applying the schemes to the workload
  • But how do we know if our workload is typical?
  • Will small changes in the workload result in
    large changes in performance?
  • Queuing theory
  • Apply techniques from queuing theory to the
    performance evaluation of scheduling schemes,
    based on statistical properties of a system
    workload
  • Distribution of process CPU or I/O burst times
  • Mean process arrival rate and distribution of
    interarrival times
  • For example, if we know the average arrival rate
    λ of processes in the ready queue and the average
    waiting time W of these processes, we can
    determine the average number of processes L in
    the ready queue using Little's formula (L = λW)
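For instance, with made-up numbers: an arrival rate of λ = 7 processes per second and an average wait of W = 2 seconds give an average queue length of 14:

```python
arrival_rate = 7       # lambda: processes arriving per second (assumed)
avg_wait = 2           # W: average time on the ready queue, seconds (assumed)
avg_queue_length = arrival_rate * avg_wait   # Little's formula: L = lambda * W
print(avg_queue_length)   # 14
```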

29
Evaluating scheduling algorithms
  • Simulation
  • We can develop a software model of the behaviour
    of the computer system and scheduling policy
  • We can then apply an input to the model and
    determine the performance of a scheduling
    algorithm from the results
  • How do we generate the input?
  • Statistically: using the same statistics used
    with queuing theory, we can generate random
    variates for burst times, process arrival rates,
    etc., and apply these as the input to our model
  • Trace-driven simulation: gather data about a
    workload on a real system and apply the trace as
    the input to our model
  • The simulation can be performed in compressed
    time
  • Simulation requires verification of the
    correctness of the model
  • Generating the input either statistically or from
    a trace can both have disadvantages

30
Case study Windows 2000
  • Priority-based preemptive scheduling of threads
  • Thread priority
  • A thread's priority (1...31) is determined by the
    priority of its parent process and its priority
    relative to that of its parent process

31
Case study Windows 2000
  • Priority-based preemptive scheduling
  • No thread will be scheduled while a higher
    priority thread is waiting
  • An executing thread will be preempted when a
    higher priority thread arrives in the ready queue
  • Threads at the same priority level are serviced
    in round-robin order
  • Variable and real-time priority classes
  • Threads of non-real-time processes have variable
    priority
  • Threads are scheduled for some time slice
  • If a thread uses its full time slice and is
    preempted, its priority is decremented, but not
    below its base value (normal thread priority by
    default); this prevents CPU-bound threads from
    hogging the CPU
  • When a waiting thread completes its wait and
    re-enters the ready queue, its priority is
    boosted by a variable amount, depending on the
    reason the wait completed (e.g. disk I/O = small
    boost, keyboard/mouse = big boost, sound = really
    big boost)
  • Overall, the scheme gives excellent response
    times to interactive processes, good response
    times to I/O bound processes, and any spare CPU
    cycles are used by CPU bound processes

32
Case study Windows 2000
[Figure: thread priority vs. time — a wait-complete
boost raises priority above the base priority; each
time the thread runs to quantum end and is preempted,
its priority is decremented until it returns to the
base priority; a thread may also be preempted before
quantum end by a higher priority thread]
33
Case study Windows 2000
  • Quantum Accounting
  • A process runs for some quantum on the CPU before
    the operating system preempts it and evaluates
  • the new priority for the process (aging the
    process)
  • whether another process at the same or higher
    priority is ready
  • Each thread has an associated quantum value, Q
  • Not the actual length of the time slice, but an
    integer number of quantum units
  • By default, threads start with Q = 6 on
    workstations and Q = 36 on servers (to reduce
    context switching for transaction-based systems)
  • Each time a clock interrupt occurs, the clock
    interrupt handler deducts 3 from Q; when Q = 0,
    the scheduling algorithm is invoked
  • The clock interval varies among hardware
    platforms, but typical values are 10ms and 15ms
  • Why deduct 3? Q can be reduced by smaller units
    under certain circumstances (e.g. waking from a
    wait operation)
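Putting the slide's numbers together (a back-of-the-envelope sketch; the 10 ms clock interval is one of the typical values quoted above):

```python
Q_DEFAULT = 6          # quantum units per time slice (workstation default)
TICK_DEDUCTION = 3     # units deducted per clock interrupt
CLOCK_MS = 10          # clock interval; 15 ms on some hardware platforms

ticks_per_slice = Q_DEFAULT // TICK_DEDUCTION   # interrupts before Q hits 0
print(ticks_per_slice * CLOCK_MS)   # 20 -> roughly a 20 ms default slice
```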

34
Case study Windows 2000
  • Quantum Accounting
  • Windows differentiates between the foreground
    process and other background processes
  • They are in the normal priority class
  • The foreground process's threads receive a
    quantum boost, typically by a factor of 3

35
Multiprocessor issues
  • Flynn's taxonomy of parallel processor systems
  • Single instruction single data (SISD)
  • Single instruction multiple data (SIMD)
  • vector / array processor, parallel?
  • Multiple instruction single data (MISD)
  • argument over precise meaning (pipelined
    processor ???)
  • never implemented
  • Multiple instruction multiple data (MIMD)
  • shared memory (tightly coupled)
  • asymmetric (master/slave)
  • symmetric (SMP)
  • distributed memory (loosely coupled)
  • cluster

36
Multiprocessor issues
  • Asymmetric multiprocessing (master/slave)
  • OS kernel code (scheduling, I/O, ...) always runs
    on a master processor
  • Slave processors run user-level code only
  • When a user-level process needs to invoke an OS
    service, a request is sent to the master
  • Disadvantages
  • master is single point of failure
  • master is a performance bottleneck
  • Symmetric multiprocessing (SMP)
  • Kernel code executes on any processor
  • Processors are self-scheduling
  • Disadvantage
  • complex synchronization

37
Multiprocessor issues
  • Processor affinity
  • If a process can execute on any processor, what
    are the implications for cache performance?
  • Most SMP systems implement a processor affinity
    scheme
  • soft affinity: an attempt is made to schedule a
    process to execute on the CPU it last executed on
  • hard affinity: processes are always executed on
    the same CPU
  • Load balancing
  • Scheduler can use a common ready queue for all
    processors or use private per-processor ready
    queues
  • If per-processor ready queues are used (and no
    affinity or soft-affinity is used), then
    load-balancing between CPUs becomes an issue
  • Two approaches
  • pull migration: idle processors pull ready jobs
    from other processors
  • push migration: the load on each processor is
    monitored and jobs are pushed from overloaded
    processors to under-loaded processors
  • Load balancing competes with affinity