Title: Mutual Exclusion
1 Mutual Exclusion
Concurrency and Mutual Exclusion
The need for mutual exclusion comes with concurrency.
Types of concurrent execution:
- Interrupt handlers
- Interleaved, preemptively scheduled processes/threads
- Multiprocessor clusters with shared memory (usually SMP)
- Distributed systems
2 Concurrent Execution - Multiprocessor
The most extreme form of concurrency is truly parallel execution, utilizing multiple processors.
3 Concurrent Execution - Single Processor
4 Preemption
- Processors are generally assumed to be preemptable, while some other resources are not preemptable
- i.e., mutual exclusion must be enforced between the users of those resources
5 The Need for Mutual Exclusion - Shared Global Resources
Examples:
- concurrent transactions
- variables shared between processes, such as in the echo() procedure in Stallings's text
The procedure echo() reads in a character from a keyboard and prints it back out again to a monitor. It shares a resource, i.e., a buffer that is filled by the keyboard and emptied by the monitor.
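A minimal sketch of the echo() idea described above, assuming a single shared character variable (the name chin is an illustrative choice, following the convention in Stallings's text):

   #include <stdio.h>

   char chin;                 /* the shared buffer: one character */

   void echo (void)
   {
      chin = getchar ();      /* the keyboard side fills the buffer */
      putchar (chin);         /* the monitor side empties the buffer */
   }

If two processes call echo() concurrently, one process's input character can overwrite the other's in chin before it has been echoed.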
6 Mutual Exclusion Example - A Race Between Threads
Execute A and B concurrently as different tasks. M = difference between the number of A and B calls.
   volatile int M = 0;
   void A () { ... M = M + 1; ... }   /* increments M */
   void B () { ... M = M - 1; ... }   /* decrements M */
Can M be correct without mutual exclusion? NO.
Let's take a look at the atomicity of the operations. What does atomicity mean?
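A minimal runnable sketch of this race using POSIX threads; the loop counts, thread setup, and printed output are illustrative assumptions, not part of the original example:

   #include <pthread.h>
   #include <stdio.h>

   volatile int M = 0;

   void *A (void *arg)                       /* increments M repeatedly */
   {
      for (int i = 0; i < 1000000; i++) M = M + 1;
      return NULL;
   }

   void *B (void *arg)                       /* decrements M repeatedly */
   {
      for (int i = 0; i < 1000000; i++) M = M - 1;
      return NULL;
   }

   int main (void)
   {
      pthread_t ta, tb;
      pthread_create (&ta, NULL, A, NULL);
      pthread_create (&tb, NULL, B, NULL);
      pthread_join (ta, NULL);
      pthread_join (tb, NULL);
      printf ("M = %d\n", M);                /* often nonzero: updates were lost */
      return 0;
   }

With equal numbers of A and B iterations, M should end up 0, but without mutual exclusion the lost updates usually leave it nonzero.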
7 Terminology
- atomic means "cannot be divided into smaller units"
- an atomic operation
  - cannot be interrupted
  - executes to completion, with its full effect, or does not execute at all
  - cannot be interleaved with other operations on the same CPU
  - cannot be interleaved with other operations that modify the same data
8 Atomicity of M = M + 1
Actual code, when compiled for the SPARC architecture:
   sethi %hi(M), %o1
   or    %o1, %lo(M), %o0
   sethi %hi(M), %o2
   or    %o2, %lo(M), %o1
   ld    [%o1], %o2
   add   %o2, 1, %o1
   st    %o1, [%o0]
On a simplified abstract machine:
   Ra <- M        /* load register Ra from memory location of M */
   Ra <- Ra + 1   /* increment register */
   M  <- Ra       /* store value in register back to memory */
9 No Interleaving - Task A First
10 No Interleaving - Task B First
11 Parallel Execution
With parallel or interleaved execution, what happens? We have a race between tasks A and B. The execution effect depends on who "wins" the race.
12 If B Preempts A
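One possible interleaving for this case, using the three-step abstract machine from slide 8 and assuming M starts at 0 (a hypothetical trace for illustration):

   A: Ra <- M         /* Ra = 0 */
      --- B preempts A here ---
   B: Rb <- M         /* Rb = 0 */
   B: Rb <- Rb - 1    /* Rb = -1 */
   B: M  <- Rb        /* M = -1 */
      --- A resumes ---
   A: Ra <- Ra + 1    /* Ra = 1, computed from the stale value */
   A: M  <- Ra        /* M = 1; B's decrement is lost */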
13 If A Preempts B
14 Mutexes
- A type of object that occupies memory
- Has two states: Locked and Unlocked
- If locked, has an associated thread that is holding the lock (called variously the current holder or owner of the mutex)
- Lock operation
  - tries to seize ownership, if the mutex is Unlocked
  - otherwise, blocks the caller until the mutex becomes Unlocked and then tries to seize ownership again
  - the caller proceeds as soon as it is able to seize the lock
- Unlock operation
  - should only be called by the current owner
  - unlocks the mutex and gives up ownership
  - allows one of the other tasks that are blocked in Lock operations on the same mutex (if any) to seize it
15 Critical Section Protection with a Mutex
   pthread_mutex_t My_Mutex;
   ...
   pthread_mutex_lock (&My_Mutex);
   ... critical section ...
   pthread_mutex_unlock (&My_Mutex);
16 Critical Section Protected by Lock/Unlock
   pthread_mutex_t mymutex;
   void A () {
      pthread_mutex_lock (&mymutex);
      M = M + 1;
      pthread_mutex_unlock (&mymutex);
   }
   void B () {
      pthread_mutex_lock (&mymutex);
      M = M - 1;
      pthread_mutex_unlock (&mymutex);
   }
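Note that mymutex must be initialized before either thread locks it; a minimal sketch, assuming static initialization is acceptable here:

   #include <pthread.h>

   /* static initialization; pthread_mutex_init() is the dynamic alternative */
   pthread_mutex_t mymutex = PTHREAD_MUTEX_INITIALIZER;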
17 With Locking/Unlocking
18 Need for Mutual Exclusion - A More Complete Example
A transaction transferring money from one bank account to another:
   void transfer (int amount, volatile long *from_account, volatile long *to_account)
   {
      *to_account = *to_account + amount;
      *from_account = *from_account - amount;
   }
   void *thread_1 (void *arg)
   {  ...
      transfer (10, &B, &A);
      ...
   }
   void *thread_2 (void *arg)
   {  ...
      transfer (10, &A, &B);
      ...
   }
19 Examples of Memory Sharing with Threads
sharing_memory.c - Shows how concurrent transactions on shared memory objects can interfere with one another. It contains two threads that concurrently update two variables, A and B. The update operations are intended to keep the sum A + B invariant. The program normally runs for 10 seconds and then times out, unless it discovers A + B has changed. Since the program does not do anything to enforce mutual exclusion, it should eventually find evidence of the interference and terminate.
sharing_memory_safe.c - Shows how a mutex object can be used to enforce mutual exclusion between threads operating on shared objects.
sharing_memory_deadlock.c - Shows how deadlock can occur when we have multiple mutexes.
sharing_memory_ordered.c - Shows how ordered locking prevents deadlock when we have multiple mutexes.
sharing_files.c - Shows how concurrent access to shared files can also result in interference.
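The sources themselves are not reproduced here, but a hypothetical sketch of the consistency check described for sharing_memory.c might look like the following; the names A, B, and INITIAL_SUM are assumptions based on the description above, not the actual code:

   #include <stdio.h>
   #include <stdlib.h>

   volatile long A = 500, B = 500;      /* the two "accounts" */
   #define INITIAL_SUM 1000             /* A + B should stay constant */

   void check_consistency (void)
   {
      if (A + B != INITIAL_SUM) {       /* invariant violated: interference */
         fprintf (stderr, "A + B = %ld, expected %d\n", A + B, INITIAL_SUM);
         exit (-1);                     /* terminate with exit status -1 */
      }
   }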
20 Naive (incorrect) Memory Sharing - sharing_memory.c
The file sharing_memory.html contains the same code as sharing_memory.c, but includes HTML links to explanations of some of the features introduced by this example. The program is intended to demonstrate that a program can really get into trouble if it contains several threads that can update the same memory object concurrently, without any provision for mutual exclusion.
21 Naive (incorrect) Memory Sharing - sharing_memory.c
There are two threads: (1) the main program's initial thread, and (2) a thread created by the main program. In the banking example, one thread moves "money" from the account of A to the account of B, and the other thread moves "money" in the other direction. If the system works correctly, the combined amount of money in both accounts should be a constant. After each transaction, the thread that did the transaction calls the subprogram check_consistency() to see whether the combined amount has changed. If this consistency check fails, the process is terminated with exit status -1. Otherwise, the two threads go on executing until a maximum time limit is reached. The time limit is implemented using the Unix alarm() function. This function tells the system to generate a SIGALRM signal after a specified number of seconds have passed. In this case, the call is alarm (10), meaning the alarm signal should be generated in 10 seconds.
22 Naive (incorrect) Memory Sharing - sharing_memory.c
If the alarm signal is not handled, the process will be terminated by it. We want the process to terminate, but we would like to have a chance to write out a few lines of output first. Therefore, we attach a signal handler function to the SIGALRM signal. This is done using the sigaction() system call:
   sigaction (SIGALRM, NULL, &act);
   act.sa_handler = handler;
   sigaction (SIGALRM, &act, NULL);
The first call fetches a structure describing the old action in place for signal SIGALRM. The assignment statement changes the handler component. The second call to sigaction() swaps in the modified signal action. After that, whenever SIGALRM is generated for this process, the function handler() will be called.
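A minimal self-contained sketch of the signal setup described above; the body of handler() and the idle main loop are assumptions for illustration, not the actual code of sharing_memory.c:

   #include <signal.h>
   #include <unistd.h>

   void handler (int sig)
   {
      /* write() is async-signal-safe, unlike printf() */
      static const char msg[] = "exiting due to SIGALRM\n";
      write (2, msg, sizeof msg - 1);
      _exit (0);
   }

   int main (void)
   {
      struct sigaction act;
      sigaction (SIGALRM, NULL, &act);   /* fetch the current action */
      act.sa_handler = handler;          /* change only the handler component */
      sigaction (SIGALRM, &act, NULL);   /* install the modified action */
      alarm (10);                        /* ask for SIGALRM in 10 seconds */
      for (;;) pause ();                 /* the real program does its work here */
      return 0;
   }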
23 Concepts Illustrated by sharing_memory.c
- Reentrancy of code
- Volatile variables
- Thread concurrency level
- Unix signals
- Signal handlers
- sigaction() system call
- alarm() system call
24 Correct Use of Memory Sharing - sharing_memory_safe.c
   pthread_mutex_lock (&M);
   transfer (10, &B, &A);
   check_consistency ();
   pthread_mutex_unlock (&M);
25 Concepts Illustrated by sharing_memory_safe.c
- pthread_mutex_t, the datatype
- pthread_mutex_init, constructor
- pthread_mutex_destroy, destructor
- pthread_mutex_lock, blocks until the mutex is available for locking
- pthread_mutex_unlock
Look at the section on mutexes in the notes on POSIX threads, or click on the individual links above for more explanation of the POSIX mutex API.
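A minimal sketch of the full mutex life cycle using the calls listed above; error checking is omitted for brevity:

   #include <pthread.h>

   pthread_mutex_t M;                    /* pthread_mutex_t, the datatype */

   void example (void)
   {
      pthread_mutex_init (&M, NULL);     /* constructor, default attributes */
      pthread_mutex_lock (&M);           /* blocks until the mutex is available */
      /* ... critical section ... */
      pthread_mutex_unlock (&M);
      pthread_mutex_destroy (&M);        /* destructor, once no longer needed */
   }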
26 Output with Mutual Exclusion
We now add locking around the code that updates the shared variables:
   pthread_mutex_lock (&M);
   transfer (10, &A, &B);
   check_consistency ();
   pthread_mutex_unlock (&M);
Executing sharing_memory_safe.c now produces the following output:
   exiting due to signal 14
   count_1 = 981553610 count_2 = 929906210
27 Preview of Deadlock
- Multiple locks are potentially dangerous
- A thread may hold one lock while waiting for another, and in turn have other threads waiting for the lock that it is holding
- If a cycle of wait-for relationships develops, the tasks cannot break out of the cycle
- This is a kind of deadlock
- If a thread tries to hold more than one mutex at a time, a problem known as deadlock can develop. In deadlock, a set of threads becomes locked in a "deadly embrace" of wait-for relationships, which can never be resolved nondestructively.
28 Something to Avoid: a Program that Can Deadlock
   pthread_mutex_lock (&mutex_B);
   pthread_mutex_lock (&mutex_A);
   transfer (10, &A, &B);
   pthread_mutex_unlock (&mutex_B);
   pthread_mutex_unlock (&mutex_A);
- Suppose someone decided it is "elegant" to use a separate mutex to protect each shared variable, and so defined a mutex for A and a mutex for B.
- To do a joint transaction on A and B would then require holding both locks at once, as shown in the code above.
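The following hypothetical sketch shows how two such threads could end up in the deadly embrace; it assumes each thread acquires the two mutexes in the opposite order (the thread bodies are illustrative, not the actual sharing_memory_deadlock.c code):

   #include <pthread.h>

   extern volatile long A, B;
   extern void transfer (int amount, volatile long *from, volatile long *to);

   pthread_mutex_t mutex_A = PTHREAD_MUTEX_INITIALIZER;
   pthread_mutex_t mutex_B = PTHREAD_MUTEX_INITIALIZER;

   void *thread_1 (void *arg)
   {
      pthread_mutex_lock (&mutex_A);     /* holds A ...                  */
      pthread_mutex_lock (&mutex_B);     /* ... waits for B, held by 2   */
      transfer (10, &A, &B);
      pthread_mutex_unlock (&mutex_B);
      pthread_mutex_unlock (&mutex_A);
      return NULL;
   }

   void *thread_2 (void *arg)
   {
      pthread_mutex_lock (&mutex_B);     /* holds B ...                  */
      pthread_mutex_lock (&mutex_A);     /* ... waits for A, held by 1   */
      transfer (10, &B, &A);
      pthread_mutex_unlock (&mutex_A);
      pthread_mutex_unlock (&mutex_B);
      return NULL;
   }

If thread_1 acquires mutex_A and thread_2 acquires mutex_B before either requests its second lock, each waits forever for the other.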
29 Example of a Deadlock
- The program sharing_memory_deadlock.c uses two mutexes, in the way shown above.
- Executing it produced the following output:
   exiting due to signal 14
   count_1 = 710 count_2 = 110
- Why are the counts lower than in the previous example?
30 A Wait-for Cycle
31 Summary on Deadlock
- If we use a lock for each resource, we might encounter a deadlock
- If there is a set of threads in which every member is holding a lock and waiting for a lock that is held by one of the other threads in the same set, we have a deadlock.
- There is no way any of the threads can be unblocked.
- We will address deadlock, its causes, and how to prevent it in more detail later.
- What good reason might there be for using more than one lock?
32 Concepts Illustrated by sharing_memory_ordered.c
- See sharing_memory_ordered.c for an example of how to avoid deadlock with two mutexes.
- Nontrivial deadlock requires holding one lock while requesting another, and doing this in a different order in different threads.
- Deadlock can be prevented by consistently ordering lock requests.
- When a thread locks more than one resource, it must lock them in a specified order that is followed by all threads, as sketched below.
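A minimal sketch of the ordered-locking discipline, reusing the declarations from the sketch after slide 28; it assumes the rule "always lock mutex_A before mutex_B", which every thread must follow regardless of the direction of its transfer:

   /* Both threads request the locks in the same fixed order. */
   void *thread_1 (void *arg)
   {
      pthread_mutex_lock (&mutex_A);
      pthread_mutex_lock (&mutex_B);
      transfer (10, &A, &B);
      pthread_mutex_unlock (&mutex_B);
      pthread_mutex_unlock (&mutex_A);
      return NULL;
   }

   void *thread_2 (void *arg)
   {
      pthread_mutex_lock (&mutex_A);     /* same order, even though this   */
      pthread_mutex_lock (&mutex_B);     /* thread moves money from B to A */
      transfer (10, &B, &A);
      pthread_mutex_unlock (&mutex_B);
      pthread_mutex_unlock (&mutex_A);
      return NULL;
   }

Because no thread ever holds mutex_B while waiting for mutex_A, the wait-for cycle of the previous example cannot form.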
33 Digression: Coroutines