Title: Chapter 6: Process Synchronization
1 Chapter 6: Process Synchronization
2 What is the output?
Two separate runs of the same program:
this is some output from dolphins are large brained mammals the parent thread
this is some output from the parent thread
3 Issue: Race Condition
- Definition
- When different computational results (e.g., output, values of variables) occur depending on the particular timing and resulting order of execution of statements across separate threads or processes
- General solution strategy
- Need to ensure that only one process or thread is allowed to change a variable or I/O device until that process has completed its required sequence of operations
- In general, a thread needs to perform some sequence of operations on an I/O device or data structure to leave it in a consistent state, before the next thread can access the I/O device or data structure
4 Therac-25
- Between June 1985 and January 1987, some cancer patients being treated with radiation were injured or killed due to faulty software
- Massive overdoses to 6 patients, killing 3
- Software had a race condition associated with the command screen
- Software was improperly synchronized!!
- See also
- p. 340-341, Quinn (2006), Ethics for the Information Age
- Nancy G. Leveson and Clark S. Turner, "An Investigation of the Therac-25 Accidents," Computer, vol. 26, no. 7, pp. 18-41, July 1993
- http://doi.ieeecomputersociety.org/10.1109/MC.1993.274940
5 Chapter 6: Process Synchronization
- Critical Section (CS) problem
- Partial solutions to CS problem
- Busy waiting
- Semaphores
- Atomicity
- Hardware methods for atomicity
- Multiple CPU systems
- Classic problems
- Bounded buffer, producer-consumer, readers-writers, dining philosophers
- Monitors
6 Critical Section Problem
do {
    entry section
    critical section
    exit section
    remainder section
} while (TRUE);

Figure 6.1: General structure of a typical process Pi
7 Critical Section
- N processes (P1, P2, ..., PN) are accessing shared data (e.g., RAM, or shared files)
- Access typically involves reads and writes to the data
- The program code where shared data is accessed in a process is the critical section of that process
- General goal: ensure that only one of P1, P2, ..., PN is in its critical section at any particular time
- Can loosen this requirement at times if processes are only reading the data
- Main requirements for entry/exit methods (see p. 220 of text)
- 1) Provide mutual exclusion
- 2) Allow progress
- 3) Have bounded waiting
8 Main requirements for entry/exit methods (see p. 228 of text)
- Mutual exclusion
- If process Pi is executing in its critical section, then no other process can be executing in its critical section
- Progress
- If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely
- Bounded waiting
- There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
9 How do we solve the critical section problem?
- Look at two partial solutions, then a full solution
- Trying to solve this problem from scratch
- Not utilizing system calls in solutions here
- Just trying to use our usual programming concepts to solve this problem
- Assume assignment statements are completed atomically
- Consider main requirements
- 1) mutual exclusion  2) progress  3) bounded waiting
- Need to also look at other possible problems for any particular solution
- We'll introduce more general, typical techniques afterwards
10 Partial Solution 1 to Critical Section Problem

// global variable, initialized before threads start
int turn = 0;   // number of thread that can enter CS next

// thread 0
do {
    while (turn != 0)
        ;           // busy wait
    // c. s.
    turn = 1;
    // remainder code
} while (true);

// thread 1
do {
    while (turn != 1)
        ;           // busy wait
    // c. s.
    turn = 0;
    // remainder code
} while (true);

Do we have mutual exclusion? Progress? Bounded waiting?
11 Partial Solution 2 to Critical Section Problem

// initialization
// flag[i] set if thread i wants access to CS
bool flag[2] = {false, false};

// thread 0
do {
    flag[0] = true;
    while (flag[1])
        ;           // busy wait
    // c. s.
    flag[0] = false;
    // other code
} while (true);

// thread 1
do {
    flag[1] = true;
    while (flag[0])
        ;           // busy wait
    // c. s.
    flag[1] = false;
    // other code
} while (true);

Do we have mutual exclusion? Progress? Bounded waiting?
12 A Full Solution to the Critical Section Problem (p. 230)

// initialization
// flag[i] set if thread i wants access to CS
bool flag[2] = {false, false};
// number of thread that can enter CS next
int turn = 1;

// thread 0
do {
    flag[0] = true;
    turn = 1;
    while (flag[1] && turn == 1)
        ;           // busy wait
    // c. s.
    flag[0] = false;
    // other code
} while (true);

// thread 1
do {
    flag[1] = true;
    turn = 0;
    while (flag[0] && turn == 0)
        ;           // busy wait
    // c. s.
    flag[1] = false;
    // other code
} while (true);

Do we have mutual exclusion? Progress? Bounded waiting?
13 In Summary
- We have a solution to the critical section problem
- However, there are more general, typical solutions
14 Issues with this critical section solution
- Complex code
- It's hard to figure out if the code is correct
- Busy waiting
- Busy wait (a.k.a. polling): in a loop, continuously checking the value of variables
- These processes are doing a busy wait until allowed into the critical section
- Busy waiting is not the same as a process being blocked. Why?
- Threads actively take CPU cycles when waiting for other threads to exit from the critical section
15 Another Busy Waiting Example
16 Busy Wait Loop
17 Complex Code: Think About the Critical Section (CS) Problem More Abstractly
- We want two operations
- enter()
- Perform operations needed so only the current thread can access the CS
- exit()
- Perform operations to enable other threads to access the CS

do {
    enter();
    // CS
    exit();
    // other code
} while (true);

Further issue: What if we have multiple processes trying to enter? Or what if we have multiple critical sections?
18 Introduce a Parameter

do {
    enter(s);
    // CS
    exit(s);
    // other code
} while (true);

- enter(s)
- Perform operations needed so only the current thread can access the CS
- exit(s)
- Perform operations to enable other threads to access the CS

(traditionally the parameter is called a semaphore)
19 Semaphores With Busy Waiting
- Traditionally
- enter() is called P (or wait)
- exit() is called V (or signal)
- Data type of the parameter is called semaphore
- semaphore(int value)
- Semaphore constructor
- Integer value
- Number of processes that can enter the critical section without waiting; often initialized to 1
- void V(semaphore S)
- Increment S->value
- void P(semaphore S)
- while (S->value <= 0)
-     ;  // busy wait
- Decrement S->value

Often not a practical implementation: uses a busy wait
20 Need Atomic Operations
- Need some level of indivisible or atomic sequences of operations
- E.g., when an operation runs, it completes without another process executing the same code, or the same critical section
- In P, we must test & decrement the value of the semaphore in one step
- E.g., with a semaphore value of 1, we don't want two processes to test the value of S, find that it is > 0, and both decrement the value of S
- Plus, the increment operation must be atomic
- (A sketch of an atomic test & decrement follows below)
21 Critical Section Solution with (Busy Wait) Semaphores

semaphore mutex = new semaphore(1);   // initialization

// thread 1
do {
    P(mutex);
    // c. s.
    V(mutex);
    // other code
} while (true);

// thread 2
do {
    P(mutex);
    // c. s.
    V(mutex);
    // other code
} while (true);

Does not have bounded waiting; there is a race condition

Do we have mutual exclusion? Progress? Bounded waiting?
22 Second Issue
- Busy waiting
- Threads actively take CPU cycles when using the wait operation to gain access to the critical section
- This type of semaphore is called a spin lock because the process spins (continually uses the CPU) while waiting for the semaphore (the lock)
- How can we solve this? That is, how can we implement semaphores without a busy wait?
23 Semaphores: Typical Implementation
- Add a queue of waiting processes to the data structure for each semaphore
- semaphore(int value)
- Semaphore constructor
- Integer value
- Number of processes that can enter the critical section without waiting; often initialized to 1
- The data structure includes a queue of waiting processes
- void P(semaphore S)
- Decrement S->value
- If S->value < 0, then
- Block the calling process on S->queue
- void V(semaphore S)
- Increment S->value
- If S->value <= 0, then
- Wake up a process blocked on S->queue

Semaphore operations (P & V) must be executed atomically
24 Example
- Construct a queue (FIFO) data structure that can be used by two threads to access the queue data in a synchronized manner
- Code this in C++ with semaphores as your synchronization mechanism
- I.e., assume you have a Semaphore class, with P and V operations
- Use an STL queue as your data structure
- It has methods front(), back(), push(), pop(), size()
- (A sketch of one possible solution follows below)
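Below is a minimal sketch of one way the example might be coded. The Semaphore class with P and V operations is assumed, as stated above; the names SyncQueue, add, and remove are illustrative rather than taken from the original slides.

#include <queue>

// Assumed interface from the slides: a Semaphore with P (wait) and V (signal).
class Semaphore {
public:
    explicit Semaphore(int value);
    void P();   // wait
    void V();   // signal
};

// Hypothetical synchronized FIFO queue: one binary semaphore guards the STL queue.
class SyncQueue {
    std::queue<int> q;       // shared data
    Semaphore mutex{1};      // binary semaphore for mutual exclusion
public:
    void add(int item) {
        mutex.P();           // enter critical section
        q.push(item);
        mutex.V();           // exit critical section
    }
    bool remove(int &item) {
        bool ok = false;
        mutex.P();
        if (!q.empty()) {    // only remove if something is there
            item = q.front();
            q.pop();
            ok = true;
        }
        mutex.V();
        return ok;
    }
};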
25(No Transcript)
26(No Transcript)
27(No Transcript)
28(No Transcript)
29 Running Program
- Use the terminal window (Unix) command called "top" to look at the amount of RAM usage over time
- The amount of RAM used continuously increases over time
30(No Transcript)
31 Result
What is the problem?
32 Resolving the Problem
- There is a race condition
- Because the consumer prints, this slows the consumer down
- The producer thus fills up the linked list more rapidly than the consumer takes items from the linked list
- A reasonable solution is to bound (limit) the number of items that can be in the list
- Exercise: Try to solve this using semaphores
33 Can solve this with a counting semaphore
- Initialize a semaphore to the number of elements the list can hold
- It keeps track of the remaining slots to be filled in the list
- In Add, call P on this counting semaphore
- When the list has no remaining slots, Add will block
- In Remove, call V on this counting semaphore
- (A sketch follows below)
34(No Transcript)
35 Hardware Methods for Atomic Instruction Sequences
- Remember that semaphore P & V must be executed atomically
- But how can we do this?
- Single CPU system
- Turn off interrupts for brief periods
- Not a general solution for critical sections
- But can be used to implement short critical sections (e.g., a P & V implementation of semaphores)
- Why is this OK only for short critical sections?
- May not be suitable for a multiple CPU system
- May have to send a message to all other CPUs to indicate interrupts are turned off, which may be time consuming
- The goal of only turning off interrupts for brief periods will likely be violated
36 Multiple CPU systems
- Typically use TestAndSet or Swap instructions
- Enable a process to know that it (not another process) changed a variable from false to true
- Multiprocessors often implement these instructions
- Swap(A, B)
- Atomically swap the values of variables A & B
- TestAndSet(lock)
- Atomically returns the current value of lock and changes it to true
- If two of these instructions start to execute simultaneously on different CPUs, they will execute sequentially in some order across CPUs; each instruction is atomic
- Use of these instructions for critical sections requires a busy wait, so this is not a general solution for critical sections
- But they can be used to implement atomic P/V for semaphores
37 Mutual Exclusion with Swap (Fig. 6.7)

// lock initialization occurs once, across all processes
boolean lock = false;   // variable shared amongst processes & CPUs
boolean key;            // variable not shared; accessed by this thread only

// In the following, at most 1 process will have key == false.
// If the process has key == false, then it can access the CS;
// otherwise it busy waits until key == false.
do {
    key = true;
    while (key == true)
        Swap(lock, key);
    // Critical Section goes here
    lock = false;       // assuming atomic assignment across processors
    // Remainder section
} while (1);

The question being asked is: Am I the process that changed lock from false to true?

Can be used to provide mutual exclusion in the CS of a semaphore
38 Mutual Exclusion with TestAndSet (Fig. 6.5)

// lock initialization occurs once, across all processes
boolean lock = false;   // shared amongst processes

// The process that succeeds in changing lock from false to
// true gains access to the CS.
// The others busy wait until lock returns to false.
do {
    // TestAndSet returns the current value of lock & changes it to true
    while (TestAndSet(lock))
        ;               // busy wait
    // Critical section goes here
    lock = false;       // assumed atomic
    // Remainder section
} while (1);

Can be used to provide mutual exclusion in the CS of a semaphore
(A modern C++ equivalent is sketched below)
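As an addition for comparison (not from the original slides): modern C++ exposes a test-and-set style operation through std::atomic_flag, so the busy-wait lock above can be sketched as follows. The function names enter/exit_cs are illustrative.

#include <atomic>

// Spin lock built on the hardware test-and-set, via std::atomic_flag.
// test_and_set() atomically returns the old value and sets the flag to true,
// mirroring the TestAndSet(lock) pseudocode above.
std::atomic_flag lock = ATOMIC_FLAG_INIT;    // shared; starts cleared (false)

void enter() {
    while (lock.test_and_set(std::memory_order_acquire))
        ;                                    // busy wait (spin)
}

void exit_cs() {
    lock.clear(std::memory_order_release);   // lock = false
}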
39 More Synchronization Examples
- Semaphores
- Bounded buffer
- Readers-writers
- Dining philosophers
- Semaphores using condition variables
40 Classic Problem: Bounded Buffer
- One process consuming items from a buffer
- Other process producing items into a buffer
- Need to ensure proper behavior when the buffer is
- Full: blocks the producer
- Empty: blocks the consumer (improvement over the previous solution)
- Need to provide mutually exclusive access to the buffer (e.g., queue)
- Can solve with semaphores
- Illustrates some different uses of semaphores: mutual exclusion and counting
41 Bounded Buffer Problem

// semaphores & data buffer shared across threads
semaphore empty(BUFFER_LENGTH);   // number of empty slots
semaphore full(0);                // number of full slots
semaphore mutex(1);               // mutually exclusive access to buffer

// producer thread
do {
    // produce item
    wait(empty);
    wait(mutex);
    // c. s.
    // add item to buffer
    signal(mutex);
    signal(full);
} while (true);

// consumer thread
do {
    wait(full);
    wait(mutex);
    // c. s.
    // remove item from buffer
    signal(mutex);
    signal(empty);
    // consume item
} while (true);
42 Classic Problem: Readers-Writers
- Multiple reader processes accessing data (e.g., a file)
- A single writer can be writing the file
- Sometimes not just one process in the critical section!
43 Readers-Writers

// data and semaphores shared across threads
semaphore wrt(1);      // 1 writer or >1 readers
semaphore mutex(1);    // for test & change of readcount
int readcount = 0;     // number of readers

// writer process
wait(wrt);
// code to perform writing
signal(wrt);

// reader process
wait(mutex);
readcount++;
if (readcount == 1)    // first one in?
    wait(wrt);
signal(mutex);
// code to perform reading
wait(mutex);
readcount--;
if (readcount == 0)    // last reader out?
    signal(wrt);
signal(mutex);

Reader priority solution
44 More Synchronization Abstractions
- Using semaphores can be tricky!
- E.g., the following code can produce a deadlock
- This is highly undesirable!

// initialization, shared across threads
semaphore s(1), q(1);

// thread 1
wait(s);
wait(q);
signal(s);
signal(q);

// thread 2
wait(q);
wait(s);
signal(q);
signal(s);
45 More Synchronization Abstraction: Monitors
- Automatically ensures only one thread can be active within the monitor
- One thread has the lock on the monitor; gain entry by calling methods
- Effectively, a synchronized class structure where only one thread can be accessing a method on a specific object instance at one time
- Condition variables
- Use one of these for each reason you have for waiting
- Enable explicit synchronization
- wait and signal operations (different than semaphore operations)
46 Monitors: Condition Variables
- Wait
- Invoking thread is suspended until another thread calls signal on the same condition variable
- Gives up the lock on the monitor; other threads can enter
- Signal (e.g., notify in Java)
- Resumes exactly one suspended process blocked on the condition
- No effect if no process is suspended
- Choice about which process gets to execute
- Resumed process regains the lock on the monitor when the signaling method finishes
- Java
- Wait and notify are with respect to an object
- If you call wait or notify within a class, you are saying this.wait or this.notify, and the wait or notify is with respect to this object
- Simplified idea of condition variables
- But can use the Condition interface (as we discussed before)
47 General Monitor Syntax

monitor monitor-name
{
    // shared variable declarations

    method m1() {
        ...
    }

    ...

    method mN() {
        ...
    }

    initialization code (...) {
        ...
    }
}

Mutual exclusion across methods within the monitor: for a particular object instance, only a single thread can be executing a method on the object
48 wait style
- Usually, do
    while (boolean expression)
        wait();
- Not
    if (boolean expression)
        wait();
- (See the sketch below for why)
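A small illustrative sketch (not from the slides) of this style, using a C++ condition variable in place of a monitor condition; the names cond, count, and consume_one are assumptions.

#include <mutex>
#include <condition_variable>

// Why "while" rather than "if": after being signaled, the thread re-checks the
// condition, because another thread may have changed the state (or the wakeup
// may be spurious) before this thread reacquires the lock.
std::mutex m;
std::condition_variable cond;
int count = 0;               // hypothetical shared state

void consume_one() {
    std::unique_lock<std::mutex> lock(m);
    while (count == 0)       // re-test after every wakeup
        cond.wait(lock);     // releases the lock while waiting
    --count;
}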
49 Example
- Solve the bounded buffer problem using a Monitor
- With the wait and signal Monitor operations
50(No Transcript)
51(No Transcript)
52 Example (2)
- Now, use two conditions
- One condition for Adding
- Wait on this condition while the queue is full
- And one condition for Removing
- Wait on this condition while the queue is empty
- (A sketch follows below)
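A minimal sketch of this two-condition design, written as a C++ monitor-style class with std::mutex and std::condition_variable standing in for the monitor lock and condition variables; the class and member names are illustrative, not the course code.

#include <queue>
#include <mutex>
#include <condition_variable>

// Monitor-style bounded buffer with two conditions:
// notFull  - producers wait on this while the queue is full
// notEmpty - consumers wait on this while the queue is empty
class BoundedBufferMonitor {
    std::queue<int> q;
    const std::size_t capacity = 10;   // illustrative bound
    std::mutex m;                      // the monitor lock
    std::condition_variable notFull;
    std::condition_variable notEmpty;
public:
    void add(int item) {
        std::unique_lock<std::mutex> lock(m);
        while (q.size() == capacity)   // wait while full
            notFull.wait(lock);
        q.push(item);
        notEmpty.notify_one();         // signal: a consumer may proceed
    }
    int remove() {
        std::unique_lock<std::mutex> lock(m);
        while (q.empty())              // wait while empty
            notEmpty.wait(lock);
        int item = q.front();
        q.pop();
        notFull.notify_one();          // signal: a producer may proceed
        return item;
    }
};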
53(No Transcript)
54(No Transcript)
55(No Transcript)
56 Classic Problem 3
- Dining-Philosophers
- 5 philosophers, eating rice, only 5 chopsticks
- Pick up one chopstick at a time
- What happens if each philosopher picks up a
chopstick and tries to get a second?
57 Dining Philosophers Using Monitors
- Observation
- A philosopher only eats when both neighbors are not eating
- pickup(i)
- Start to eat only when both neighbors are not eating
- putdown(i)
- Enable neighbors to eat if they are hungry
- (A monitor sketch follows below)

do {
    dp.pickup(i);
    // eat
    dp.putdown(i);
    // think
} while (true);
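A monitor-style sketch of the textbook approach (cf. p. 249). std::mutex and std::condition_variable stand in for the monitor, and this is an illustrative reconstruction rather than the exact course code.

#include <mutex>
#include <condition_variable>

// A philosopher moves to EATING only when neither neighbor is eating.
class DiningPhilosophers {
    enum State { THINKING, HUNGRY, EATING };
    State state[5] = { THINKING, THINKING, THINKING, THINKING, THINKING };
    std::mutex m;                      // monitor lock
    std::condition_variable self[5];   // one condition per philosopher

    bool canEat(int i) {
        return state[(i + 4) % 5] != EATING &&
               state[(i + 1) % 5] != EATING &&
               state[i] == HUNGRY;
    }
    void test(int i) {                 // let philosopher i eat if possible
        if (canEat(i)) {
            state[i] = EATING;
            self[i].notify_one();
        }
    }
public:
    void pickup(int i) {
        std::unique_lock<std::mutex> lock(m);
        state[i] = HUNGRY;
        test(i);
        while (state[i] != EATING)     // wait until allowed to eat
            self[i].wait(lock);
    }
    void putdown(int i) {
        std::unique_lock<std::mutex> lock(m);
        state[i] = THINKING;
        test((i + 4) % 5);             // left neighbor may now eat
        test((i + 1) % 5);             // right neighbor may now eat
    }
};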
58 (p. 249 of text)
59(No Transcript)
60 Example
- Implement semaphores using Monitors
- There is some subtlety to the implementation
- (A sketch follows below)
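One possible sketch, again using std::mutex and std::condition_variable in place of a monitor (the names are illustrative). The subtlety is that P must re-test the value in a while loop after being woken, since another thread may take the permit first.

#include <mutex>
#include <condition_variable>

// Counting semaphore built from monitor-style primitives.
class MonitorSemaphore {
    int value;
    std::mutex m;
    std::condition_variable nonZero;
public:
    explicit MonitorSemaphore(int initial) : value(initial) {}

    void P() {                         // wait
        std::unique_lock<std::mutex> lock(m);
        while (value <= 0)             // while, not if
            nonZero.wait(lock);
        --value;
    }
    void V() {                         // signal
        std::unique_lock<std::mutex> lock(m);
        ++value;
        nonZero.notify_one();          // wake one waiter, if any
    }
};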
61(No Transcript)
62(No Transcript)
63(No Transcript)
64
- Semaphores using Java: a little tricky
65(No Transcript)
66(No Transcript)
67(No Transcript)
68 User Space Synchronization Provided by OSs and Libraries
- Linux (current kernel version)
- POSIX semaphores (shared memory & non-shared memory)
- futex: wait for a value at a memory address to change
- Windows XP thread synchronization
- Provides dispatcher objects
- Mutexes (semaphores with value 1), semaphores, events (like condition variables), and timers can be used with dispatcher objects
- Pthreads
- Has conditions and mutexes
- (A small POSIX example follows below)
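As a small illustration of the POSIX API mentioned above (a sketch; the names worker and counter are illustrative): an unnamed POSIX semaphore used as a mutex between two pthreads. sem_wait and sem_post correspond to the P and V operations discussed earlier.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t mutex;                // POSIX unnamed semaphore used for mutual exclusion
int counter = 0;            // shared data

void *worker(void *arg) {
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);   // P: enter critical section
        counter++;
        sem_post(&mutex);   // V: exit critical section
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);             // 0 = not shared across processes, initial value 1
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %d\n", counter);  // 200000 with the semaphore in place
    sem_destroy(&mutex);
    return 0;
}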