Title: Processes
1. Processes
- Operating Systems
- Spring 2004
2. What is a process?
- An instance of an application execution
- A process is the most basic abstraction provided by the OS
- An isolated computation context for each application
- Computation context: CPU state, address space, environment
3. CPU state: register contents
- Process Status Word (PSW): execution mode, last operation outcome, interrupt level
- Instruction Register (IR): the current instruction being executed
- Program Counter (PC)
- Stack Pointer (SP)
- General-purpose registers
4. Address space
- Text: program code
- Data: predefined data (known at compile time)
- Heap: dynamically allocated data
- Stack: supporting function calls (see the sketch below)
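A minimal C sketch (not from the slides) that prints one address from each region; the exact layout is OS-specific and randomized by ASLR:

```c
#include <stdio.h>
#include <stdlib.h>

int global_var = 42;          /* data: predefined, known at compile time */

void code(void) {}            /* text: program code */

int main(void) {
    int local_var = 0;              /* stack: supports this function call */
    int *dyn = malloc(sizeof *dyn); /* heap: dynamically allocated data   */

    printf("text : %p\n", (void *)code);
    printf("data : %p\n", (void *)&global_var);
    printf("heap : %p\n", (void *)dyn);
    printf("stack: %p\n", (void *)&local_var);

    free(dyn);
    return 0;
}
```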
5. Environment
- External entities:
- Terminal
- Open files
- Communication channels: local, or with other machines
6. Process control block (PCB)
[Figure: the PCB holds the process state, storage for the CPU registers (PSW, IR, PC, SP, general-purpose registers), the memory map (text, data, heap, stack), open files, accounting information, priority, and the owning user; the kernel and user views of the process meet here. A struct sketch follows below.]
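A hypothetical PCB as a C struct; the field names and sizes are illustrative only, and real kernels (e.g., Linux's struct task_struct) carry many more fields:

```c
#include <stdint.h>

enum proc_state { CREATED, READY, RUNNING, BLOCKED, TERMINATED };

struct cpu_context {            /* CPU state saved on a context switch */
    uint64_t psw;               /* processor status word               */
    uint64_t pc, sp;            /* program counter, stack pointer      */
    uint64_t regs[16];          /* general-purpose registers           */
};

struct pcb {
    enum proc_state state;
    struct cpu_context ctx;     /* CPU registers storage               */
    void *text, *data, *heap, *stack;   /* memory map                  */
    int  open_files[16];        /* open file descriptors               */
    long cpu_time_used;         /* accounting                          */
    int  priority;
    int  uid;                   /* owning user                         */
};
```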
7. Process States
[Figure: process state diagram. created -> ready; ready -> running (schedule); running -> ready (preempt); running -> blocked (wait for event); blocked -> ready (event done); running -> terminated.]
8. UNIX Process States
[Figure: UNIX process state diagram. created -> ready kernel; ready kernel -> running kernel (schedule); running kernel -> running user (return); running user -> running kernel (sys. call / interrupt); running kernel -> ready user (preempt); ready user -> running user (schedule); running kernel -> blocked (wait for event); blocked -> ready kernel (event done); running kernel -> zombie -> terminated.]
9. Multiprocessing mechanisms
- Context switch
- Create a process and dispatch it (in Unix, fork()/exec(); see the sketch below)
- End a process (in Unix, exit() and wait())
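A minimal C sketch of these Unix mechanisms, assuming a POSIX system; the child runs `ls -l` purely as an example:

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                 /* create a new process        */
    if (pid < 0) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child: run a new program    */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails  */
        _exit(127);
    }

    int status;
    waitpid(pid, &status, 0);           /* parent: wait for the child  */
    printf("child %d exited with status %d\n",
           (int)pid, WEXITSTATUS(status));
    return 0;
}
```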
10. Threads
- Thread: an execution within a process
- A multithreaded process consists of many co-existing executions
- Separate: CPU state, stack
- Shared: everything else (text, data, heap, environment); see the sketch below
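A minimal POSIX-threads sketch (compile with -pthread) illustrating the split: the global counter is shared data, while each thread's loop index lives on its own stack:

```c
#include <stdio.h>
#include <pthread.h>

long counter = 0;                       /* shared: data segment        */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *work(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {  /* i lives on this thread's stack */
        pthread_mutex_lock(&lock);
        counter++;                      /* shared, so access is locked */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, work, NULL);
    pthread_create(&t2, NULL, work, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* 200000: both saw the shared data */
    return 0;
}
```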
11. Thread Support
- Operating system
- Advantage: thread scheduling is done by the OS, giving better CPU utilization
- Disadvantage: overhead if there are many threads
- User-level
- Advantage: low overhead
- Disadvantage: the threads are not known to the OS
- E.g., a thread blocked on I/O blocks all the other threads within the same process
12. Multiprogramming
- Multiprogramming: having multiple jobs (processes) in the system
- Interleaved (time sliced) on a single CPU
- Concurrently executed on multiple CPUs
- Both of the above
- Why multiprogramming? Responsiveness, utilization, concurrency
- Why not? Overhead, complexity
13. Responsiveness
[Figure: timeline of Jobs 1, 2, and 3 arriving at different times; time slicing lets each job start soon after it arrives instead of waiting for earlier jobs to finish.]
14. Workload matters!
- Would CPU sharing improve responsiveness if all jobs took the same time?
- No, it would make it worse!
- For a given workload, the answer depends on the coefficient of variation (CV) of the distribution of job runtimes
- CV = standard deviation / mean (see the sketch below)
- CV < 1 => CPU sharing does not help
- CV > 1 => CPU sharing does help
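A small C sketch (link with -lm; the job runtimes are illustrative) computing CV = standard deviation / mean:

```c
#include <stdio.h>
#include <math.h>

double cv(const double runtimes[], int n) {
    double mean = 0, var = 0;
    for (int i = 0; i < n; i++) mean += runtimes[i];
    mean /= n;
    for (int i = 0; i < n; i++)
        var += (runtimes[i] - mean) * (runtimes[i] - mean);
    var /= n;
    return sqrt(var) / mean;            /* CV = stand. dev. / mean */
}

int main(void) {
    double same[]  = {10, 10, 10, 10};  /* identical jobs          */
    double mixed[] = {1, 1, 1, 37};     /* heavy-tailed-ish mix    */
    printf("CV(same)  = %.2f\n", cv(same, 4));  /* 0.00: sharing hurts */
    printf("CV(mixed) = %.2f\n", cv(mixed, 4)); /* 1.56: sharing helps */
    return 0;
}
```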
15. Real workloads
- Exponential distribution: CV = 1; heavy-tailed distribution: CV > 1
- The distribution of job runtimes in real systems is heavy tailed
- CV ranges from 3 to 70
- Conclusions:
- CPU sharing does improve responsiveness
- CPU sharing is approximated by time slicing (interleaved execution)
16. Utilization
[Figure: CPU and disk timelines. Running Job 1 alone, the CPU idles during each of its three I/O operations and the disk idles while the job computes; multiprogramming Job 1 with Job 2 lets one job use the CPU while the other uses the disk, shrinking the idle time on both devices.]
17. Workload matters!
- Does it really matter? Yes, of course
- If all jobs are CPU bound (or all are I/O bound), multiprogramming does not help to improve utilization
- A suitable job mix is created by long-term scheduling
- Jobs are classified on-line as CPU (I/O) bound according to their history
18. Concurrency
- Concurrent programming
- Several processes interact to work on the same problem
- ls -l | more
- Simultaneous execution of related applications
- Word + Excel + PowerPoint
- Background execution
- Polling/receiving Email while working on something else
19. The cost of multiprogramming
- Switching overhead
- Saving/restoring context wastes CPU cycles
- Degrades performance
- Resource contention
- Cache misses
- Complexity
- Synchronization, concurrency control, deadlock avoidance/prevention
20. Short-Term Scheduling
[Figure: the process state diagram from slide 7 (created, ready, running, blocked, terminated); the short-term scheduler drives the ready -> running transition.]
21. Short-Term scheduling
- A process's execution pattern consists of alternating CPU cycles and I/O waits: CPU burst, I/O burst, CPU burst, I/O burst, ...
- Processes ready for execution are held in a ready (run) queue
- The STS schedules a process from the ready queue once the CPU becomes idle
22. Metrics: Response time
- Response time (turnaround time) is the average of Tresp over the jobs
[Figure: a job arrives (becomes ready to run), waits for Twait, starts running, runs for Trun, and then terminates or blocks waiting for I/O.]
- Tresp = Twait + Trun
23. Other Metrics
- Wait time: the average of Twait
- This parameter is under the system's control
- Response ratio, or slowdown: slowdown = Tresp / Trun (see the sketch below)
- Throughput and utilization depend on the user-imposed workload => less useful
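A small C sketch computing these metrics for one hypothetical job, assuming it runs contiguously once started (no preemption):

```c
#include <stdio.h>

struct job { double arrive, start, finish; };

int main(void) {
    struct job j = { .arrive = 0, .start = 4, .finish = 10 };
    double t_wait = j.start  - j.arrive;   /* 4: time in the ready queue */
    double t_run  = j.finish - j.start;    /* 6: actual running time     */
    double t_resp = t_wait + t_run;        /* 10: Tresp = Twait + Trun   */
    printf("Twait=%.1f Trun=%.1f Tresp=%.1f slowdown=%.2f\n",
           t_wait, t_run, t_resp, t_resp / t_run);
    return 0;
}
```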
24. Note about running time (Trun)
- Trun is the length of the CPU burst
- When a process requests I/O, it is still running in the system
- But it is not part of the STS workload
- STS view: I/O-bound processes are short processes
- Although a text editor session may last hours!
25. Off-line vs. On-line scheduling
- Off-line algorithms
- Get all the information about all the jobs to schedule as their input
- Output the scheduling sequence
- Preemption is never needed
- On-line algorithms
- Jobs arrive at unpredictable times
- Very little info is available in advance
- Preemption compensates for the lack of knowledge
26. First-Come-First-Serve (FCFS)
- Schedules the jobs in the order in which they arrive
- Off-line FCFS schedules in the order the jobs appear in the input
- Runs each job to completion
- Both on-line and off-line
- Simple; a base case for analysis
- Poor response time
27. Shortest Job First (SJF)
[Figure: two schedules of one long job and two short jobs; running the short jobs first shortens their waits and lowers the average response time.]
- Inherently off-line
- All the jobs and their run-times must be available in advance (a comparison sketch follows below)
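A sketch comparing off-line FCFS and SJF on three jobs that all arrive at time 0 (the runtimes are illustrative); sorting by runtime cuts the average response time from 11 to 5:

```c
#include <stdio.h>
#include <stdlib.h>

int cmp(const void *a, const void *b) {
    return (*(const int *)a > *(const int *)b) -
           (*(const int *)a < *(const int *)b);
}

/* Run the jobs to completion in array order; return mean response time. */
double avg_resp(const int rt[], int n) {
    double t = 0, sum = 0;
    for (int i = 0; i < n; i++) { t += rt[i]; sum += t; }
    return sum / n;
}

int main(void) {
    int fcfs[] = {10, 1, 1};               /* the long job arrived first */
    int sjf[]  = {10, 1, 1};
    qsort(sjf, 3, sizeof *sjf, cmp);       /* SJF: shortest job first    */
    printf("FCFS avg resp = %.2f\n", avg_resp(fcfs, 3)); /* (10+11+12)/3 = 11 */
    printf("SJF  avg resp = %.2f\n", avg_resp(sjf, 3));  /* (1+2+12)/3   = 5  */
    return 0;
}
```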
28. Preemption
- Preemption is the action of stopping a running job and scheduling another in its place
- Context switch: switching from one job to another
29. Using preemption
- On-line short-term scheduling algorithms
- Adapting to changing conditions (e.g., new jobs arrive)
- Compensating for lack of knowledge (e.g., job run-time)
- Periodic preemption keeps the system in control
- Improves fairness
- Gives I/O-bound processes a chance to run
30. Shortest Remaining Time first (SRT)
- Job run-times are known
- Job arrival times are not known
- When a new job arrives:
- If its run-time is shorter than the remaining time of the currently executing job, preempt the currently executing job and schedule the newly arrived job
- Else, continue the current job and insert the new job into a sorted queue
- When a job terminates, select the job at the queue head for execution (see the sketch below)
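A C sketch of the arrival rule above; the types and the queue-insertion step are illustrative, not an actual scheduler API:

```c
struct srt_job { int id; int remaining; };

/* Called when `arriving` shows up while `cur` is executing; returns the
   job that should hold the CPU next.  The loser is inserted into the
   queue sorted by remaining time (insertion not shown). */
struct srt_job *on_arrival(struct srt_job *cur, struct srt_job *arriving) {
    if (arriving->remaining < cur->remaining)
        return arriving;    /* preempt: the newcomer is shorter */
    return cur;             /* keep running the current job     */
}
```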
31. Round Robin (RR)
- Neither job arrival times nor job run-times are known
- Run each job cyclically for a short time quantum
- Approximates CPU sharing (see the sketch below)
[Figure: RR timeline. Jobs 1, 2, and 3 arrive at different times and receive interleaved quanta (..., 1, 2, 3, 1, 2, 3, ...) until each terminates.]
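A small round-robin sketch with an illustrative quantum of 2 and three jobs already in the queue:

```c
#include <stdio.h>

#define QUANTUM 2

int main(void) {
    int remaining[] = {5, 3, 4};      /* three jobs, all present at t=0 */
    int n = 3, left = 3, t = 0;
    while (left > 0) {
        for (int i = 0; i < n; i++) { /* cycle through the ready jobs   */
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
            remaining[i] -= slice;    /* run job i for one quantum      */
            t += slice;
            if (remaining[i] == 0) {
                printf("job %d terminates at t=%d\n", i + 1, t);
                left--;
            }
        }
    }
    return 0;
}
```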
32. Responsiveness
[Figure: the three-job workload of slide 13 under RR time slicing; each job starts running shortly after it arrives.]
33. Priority Scheduling
- RR is oblivious to a process's past
- I/O-bound processes are treated equally with CPU-bound processes
- Solution: prioritize processes according to their past CPU usage
- Tn is the duration of the n-th CPU burst; En+1 is the estimate of the next CPU burst
- Exponential averaging: En+1 = a * Tn + (1 - a) * En, with 0 <= a <= 1 (see the sketch below)
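A sketch of the exponential-average estimator, with a = 1/2 as a common textbook choice (assumed here, along with the burst values):

```c
#include <stdio.h>

#define ALPHA 0.5

int main(void) {
    double estimate = 10.0;                 /* E1: initial guess        */
    double bursts[] = {6, 4, 6, 4, 13, 13}; /* observed Tn values       */
    for (int n = 0; n < 6; n++) {
        /* E_{n+1} = a * T_n + (1 - a) * E_n */
        estimate = ALPHA * bursts[n] + (1 - ALPHA) * estimate;
        printf("after burst %d (T=%.0f): E=%.2f\n",
               n + 1, bursts[n], estimate);
    }
    return 0;
}
```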
34. Multilevel feedback queues
[Figure: new jobs enter the top queue (quantum = 10); a job that exhausts its quantum is demoted to the next queue (quantum = 20, then quantum = 40); the bottom queue is scheduled FCFS; jobs exit when terminated. A sketch of the demotion rule follows below.]
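A C sketch of the feedback rule in the figure; the boost on I/O is one common variant, not necessarily the one the slides intend:

```c
#define LEVELS 4
static const int quantum[LEVELS] = {10, 20, 40, 0}; /* 0 = FCFS, no limit */

struct mlfq_job { int level; };

void on_quantum_expired(struct mlfq_job *j) {
    if (j->level < LEVELS - 1)
        j->level++;           /* demote: the job looks CPU bound          */
}

void on_blocked_for_io(struct mlfq_job *j) {
    if (j->level > 0)
        j->level--;           /* one possible boost for I/O-bound jobs    */
}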
35. Multilevel feedback queues
- Priorities are implicit in this scheme
- Very flexible
- Starvation is possible
- Short jobs keep arriving => long jobs get starved
- Solutions:
- Let it be
- Aging
36. Priority scheduling in UNIX
- Multilevel feedback queues
- The same quantum at each queue
- A queue per priority
- Priority is based on past CPU usage: pri = cpu_use + base + nice
- cpu_use is dynamically adjusted:
- Incremented on each clock interrupt (100 per second)
- Halved, for all processes, once per second (see the sketch below)
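A sketch of this recomputation; the constants and names are illustrative, not actual UNIX kernel values (lower pri = higher priority):

```c
struct uproc { int cpu_use, base, nice, pri; };

void on_clock_tick(struct uproc *p) {       /* 100 times per second      */
    p->cpu_use++;
}

void once_per_second(struct uproc *procs[], int n) {
    for (int i = 0; i < n; i++) {
        procs[i]->cpu_use /= 2;             /* halve (decay) past usage  */
        procs[i]->pri = procs[i]->cpu_use + procs[i]->base
                      + procs[i]->nice;     /* pri = cpu_use + base + nice */
    }
}
```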
37. Fair Share scheduling
- Given a set of processes with associated weights, a fair share scheduler should allocate CPU to each process in proportion to its respective weight
- Achieving pre-defined goals:
- Administrative considerations: paying for machine usage, importance of the project, personal importance, etc.
- Quality-of-service, soft real-time: video, audio
38Perfect Fairness
A fair share scheduling algorithm achieves
perfect fairness in time interval (t1,t2) if
39. Wellness criterion for FSS
- An ideal fair share scheduling algorithm achieves perfect fairness for all time intervals
- The goal of an FSS algorithm is to produce CPU allocations as close as possible to those of a perfect FSS algorithm
40. Fair Share scheduling algorithms
- Weighted Round Robin
- Shares are not uniformly spread in time
- Lottery scheduling (see the sketch below)
- Each process gets a number of lottery tickets proportional to its CPU allocation
- The scheduler picks a ticket at random and schedules the client holding it
- Only statistically fair; high complexity
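A minimal lottery-scheduling sketch with hypothetical ticket counts:

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int tickets[] = {50, 30, 20};   /* weights of clients 0, 1, 2     */
    int n = 3, total = 100;
    srand((unsigned)time(NULL));
    for (int round = 0; round < 5; round++) {
        int draw = rand() % total;  /* pick the winning ticket        */
        int i = 0;
        while (draw >= tickets[i])  /* find which client holds it     */
            draw -= tickets[i++];
        printf("round %d: client %d wins the CPU\n", round, i);
    }
    (void)n;
    return 0;
}
```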
41. Fair Share scheduling: VTRR
- Virtual Time Round Robin (VTRR)
- Order the ready queue by decreasing shares (the highest share at the head)
- Run Round Robin as usual
- Once a process that has exhausted its share is encountered, go back to the head of the queue
42. Multiprocessor Scheduling
- Homogeneous vs. heterogeneous
- Homogeneity allows for load sharing
- A separate ready queue for each processor, or a common ready queue?
- Scheduling:
- Symmetric
- Master/slave
43. A Bank or a Supermarket?
[Figure: arriving jobs either wait in one shared queue feeding CPU1-CPU4 (an M/M/4 system, like a bank) or are split into a separate queue per CPU (4 x M/M/1, like supermarket checkout lines).]
44. It is a Bank!
- With a single shared queue, no CPU idles while any job waits, so the M/M/4 organization achieves better response times than 4 x M/M/1.