Synchronization - PowerPoint PPT Presentation (Transcript)
1
Synchronization
  • ...or the trickiest bit of this course.

2
Threads share global memory
  • When a process contains multiple threads, they
    have:
  • Private registers and stack memory (the context
    switching mechanism needs to save and restore
    registers when switching from thread to thread)
  • Shared access to the remainder of the process
    state

3
Two threads, one counter
  • Popular web server
  • Uses multiple threads to speed things up.
  • Simple shared-state error:
  • each thread increments a shared counter to track
    the number of hits
  • What happens when two threads execute
    concurrently?

  hits = hits + 1
4
Shared counters
  • Possible result: lost update!
  • One other possible result: everything works.
  • ⇒ Difficult to debug
  • Called a race condition

hits = 0
time    T1: read hits (0)
        T2: read hits (0)
        T1: hits = 0 + 1
        T2: hits = 0 + 1
hits = 1   (one update was lost)
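
To see the race concretely, here is a minimal sketch using POSIX threads
(the thread library and the 1,000,000 iteration count are illustrative
choices, not part of the slides). Compiled with -pthread, the final count
usually comes out below 2,000,000:

  #include <pthread.h>
  #include <stdio.h>

  static long hits = 0;                  /* shared, unprotected counter */

  static void *worker(void *arg) {
      (void)arg;
      for (int i = 0; i < 1000000; i++)
          hits = hits + 1;               /* read-modify-write: not atomic */
      return NULL;
  }

  int main(void) {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("hits = %ld (expected 2000000)\n", hits);
      return 0;
  }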
5
Race conditions
  • Def: a timing-dependent error involving shared
    state
  • Whether it happens depends on how the threads
    are scheduled
  • In effect, once thread A starts doing something,
    it needs to race to finish it, because if thread
    B looks at the shared memory region before A is
    done, A's change will be lost.
  • Hard to detect
  • All possible schedules have to be safe
  • Number of possible schedule permutations is huge
  • Some bad schedules? Some that will work
    sometimes?
  • they are intermittent
  • Timing dependent: small changes can hide the bug

6
Scheduler assumptions
  • If i is shared, and initialized to 0
  • Who wins?
  • Is it guaranteed that someone wins?
  • What if both threads run on identical-speed CPUs,
    executing in parallel?

Process a:  while(i < 10)  i = i + 1;
            print("A won!");
Process b:  while(i > -10) i = i - 1;
            print("B won!");
7
Scheduler Assumptions
  • Normally we assume that
  • A scheduler always gives every executable thread
    opportunities to run
  • In effect, each thread makes finite progress
  • But schedulers aren't always fair
  • Some threads may get more chances than others
  • To reason about worst case behavior we sometimes
    think of the scheduler as an adversary trying to
    mess up the algorithm

8
Critical Section Goals
  • Threads do some stuff but eventually might try to
    access shared data

(time →)
T1:  CSEnter()  Critical section  CSExit()
T2:  CSEnter()  Critical section  CSExit()
9
Critical Section Goals
  • Perhaps they loop (perhaps not!)

T1:  CSEnter()  Critical section  CSExit()  ...
T2:  CSEnter()  Critical section  CSExit()  ...
10
Critical Section Goals
  • We would like:
  • Safety: No more than one thread can be in a
    critical section at any time.
  • Liveness: A thread that is seeking to enter the
    critical section will eventually succeed
  • Fairness: If two threads are both trying to enter
    a critical section, they have equal chances of
    success
  • in practice, fairness is rarely guaranteed

11
Solving the problem
  • A first idea:
  • Have a boolean flag, inside. Initially false.

  CSEnter() {
    while(inside) continue;
    inside = true;
  }

  CSExit() {
    inside = false;
  }

Code is unsafe: thread 0 could finish the while
test when inside is false, but then thread 1 might
call CSEnter() before 0 can set inside to true!
  • Now ask:
  • Is this Safe? Live? Fair?

12
Solving the problem Take 2
  • A different idea (assumes just two threads):
  • Have a boolean flag per thread, inside[i]. Initially false.

  CSEnter(int i) {
    inside[i] = true;
    while(inside[J]) continue;
  }

  CSExit(int i) {
    inside[i] = false;
  }

Code isn't live: with bad luck, both threads
could be looping, with 0 looking at 1, and 1
looking at 0
  • Now ask:
  • Is this Safe? Live? Fair?

13
Solving the problem Take 3
  • How about introducing a turn variable?

  CSEnter(int i) {
    while(turn != i) continue;
  }

  CSExit(int i) {
    turn = J;
  }

Code isn't live: thread 1 can't enter unless
thread 0 did first, and vice-versa. But perhaps
one thread needs to enter many times and the
other fewer times, or not at all
  • Now ask:
  • Is this Safe? Live? Fair?

14
Dekker's Algorithm (1965)

  CSEnter(int i) {
    inside[i] = true;
    while(inside[J]) {
      if (turn == J) {
        inside[i] = false;
        while(turn == J) continue;
        inside[i] = true;
      }
    }
  }
  CSExit(int i) {
    turn = J;
    inside[i] = false;
  }

15
Napkin analysis of Dekker's algorithm
  • Safety: No process will enter its CS without
    setting its inside flag. Every process checks the
    other process's inside flag after setting its own.
    If both are set, the turn variable is used to
    allow only one process to proceed.
  • Liveness: The turn variable is only considered
    when both processes are using, or trying to use,
    the resource
  • Fairness: The turn variable ensures alternate
    access to the resource when both are competing
    for access

16
Peterson's Algorithm (1981)

  CSEnter(int i) {
    inside[i] = true;
    turn = J;
    while(inside[J] && turn == J) continue;
  }

  CSExit(int i) {
    inside[i] = false;
  }
  • Simple is good!!
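
A compilable sketch of the same algorithm for two threads with ids 0 and 1;
the C11 _Atomic declarations are an added assumption (on modern hardware,
plain variables can be reordered, so sequentially consistent atomics or
explicit fences are needed for the argument above to hold):

  #include <stdatomic.h>
  #include <stdbool.h>

  static _Atomic bool inside[2];       /* inside[i]: thread i wants the CS */
  static _Atomic int  turn;            /* which thread we defer to         */

  void CSEnter(int i) {
      int j = 1 - i;
      atomic_store(&inside[i], true);
      atomic_store(&turn, j);                          /* give away the turn */
      while (atomic_load(&inside[j]) && atomic_load(&turn) == j)
          ;                                            /* busy-wait */
  }

  void CSExit(int i) {
      atomic_store(&inside[i], false);
  }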

17
Napkin analysis of Peterson's algorithm
  • Safety (by contradiction):
  • Assume that both processes (Alan and Shay) are in
    their critical section (and thus have their
    inside flags set). Since only one, say Alan, can
    have the turn, the other (Shay) must have reached
    the while() test before Alan set his inside flag.
  • However, after setting his inside flag, Alan gave
    away the turn to Shay. Shay has already changed
    the turn and cannot change it again,
    contradicting our assumption.

Liveness and Fairness: both follow from the turn variable.
18
Can we generalize to many threads?
  • Obvious approach won't work
  • Issue: whose turn is next?

  CSEnter(int i) {
    inside[i] = true;
    for(J = 0; J < N; J++)
      while(inside[J] && turn == J)
        continue;
  }

  CSExit(int i) {
    inside[i] = false;
  }

19
Bakery concept
  • Think of a popular store with a crowded counter,
    perhaps the pastry shop in Montreal's fancy
    market
  • People take a ticket from a machine
  • If nobody is waiting, tickets don't matter
  • When several people are waiting, ticket order
    determines the order in which they can make purchases

20
Bakery Algorithm Take 1
  int ticket[N];
  int next_ticket;

  CSEnter(int i) {
    ticket[i] = ++next_ticket;
    for(J = 0; J < N; J++)
      while(ticket[J] && ticket[J] < ticket[i])
        continue;
  }

  CSExit(int i) {
    ticket[i] = 0;
  }
  • Oops: access to next_ticket is a problem!

21
Bakery Algorithm Take 2
  int ticket[N];

  CSEnter(int i) {
    ticket[i] = max(ticket[0], ..., ticket[N-1]) + 1;
    for(J = 0; J < N; J++)
      while(ticket[J] && ticket[J] < ticket[i])
        continue;
  }

  CSExit(int i) {
    ticket[i] = 0;
  }

Just add 1 to the max!
  • Oops: two could pick the same value!

22
Bakery Algorithm Take 3
  • If i, j pick the same ticket value, ids break the tie:
    (ticket[J] < ticket[i]) || (ticket[J] == ticket[i] && J < i)
  • Notation: (B,J) < (A,i) to simplify the code:
    (B < A) || (B == A && J < i), e.g.
    (ticket[J],J) < (ticket[i],i)

23
Bakery Algorithm Take 4
  int ticket[N];

  CSEnter(int i) {
    ticket[i] = max(ticket[0], ..., ticket[N-1]) + 1;
    for(J = 0; J < N; J++)
      while(ticket[J] && (ticket[J],J) < (ticket[i],i))
        continue;
  }

  CSExit(int i) {
    ticket[i] = 0;
  }
  • Oops: i could look at J when J is still storing
    its ticket, and J could have a lower id than me.

24
Bakery Algorithm Almost final
  int ticket[N];
  boolean choosing[N] = false;

  CSEnter(int i) {
    choosing[i] = true;
    ticket[i] = max(ticket[0], ..., ticket[N-1]) + 1;
    choosing[i] = false;
    for(J = 0; J < N; J++) {
      while(choosing[J]) continue;
      while(ticket[J] && (ticket[J],J) < (ticket[i],i))
        continue;
    }
  }

  CSExit(int i) {
    ticket[i] = 0;
  }
25
Bakery Algorithm Issues?
  • What if we don't know how many threads might be
    running?
  • The algorithm depends on having an agreed-upon
    value for N
  • Somehow we would need a way to adjust N when a
    thread is created or one goes away
  • Also, technically speaking, ticket can overflow!
  • Solution: change the code so that if the ticket is
    too big, set it back to zero and try again.

26
Bakery Algorithm Final
  int ticket[N];  /* Important: disable thread scheduling when changing N */
  boolean choosing[N] = false;

  CSEnter(int i) {
    do {
      ticket[i] = 0;
      choosing[i] = true;
      ticket[i] = max(ticket[0], ..., ticket[N-1]) + 1;
      choosing[i] = false;
    } while(ticket[i] > MAXIMUM);
    for(J = 0; J < N; J++) {
      while(choosing[J]) continue;
      while(ticket[J] && (ticket[J],J) < (ticket[i],i))
        continue;
    }
  }

  CSExit(int i) {
    ticket[i] = 0;
  }
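
For concreteness, a C sketch of the bakery idea for a fixed N. The _Atomic
qualifiers and the max_ticket helper are illustrative additions, and the
MAXIMUM overflow retry from this slide is omitted for brevity:

  #include <stdatomic.h>
  #include <stdbool.h>

  #define N 8                                    /* assumed fixed thread count */
  static _Atomic bool choosing[N];
  static _Atomic int  ticket[N];

  static int max_ticket(void) {
      int m = 0;
      for (int k = 0; k < N; k++) {
          int t = atomic_load(&ticket[k]);
          if (t > m) m = t;
      }
      return m;
  }

  void CSEnter(int i) {
      atomic_store(&choosing[i], true);
      atomic_store(&ticket[i], max_ticket() + 1);
      atomic_store(&choosing[i], false);
      for (int j = 0; j < N; j++) {
          while (atomic_load(&choosing[j]))
              ;                                  /* wait while j picks its ticket */
          while (atomic_load(&ticket[j]) != 0 &&
                 (atomic_load(&ticket[j]) <  atomic_load(&ticket[i]) ||
                  (atomic_load(&ticket[j]) == atomic_load(&ticket[i]) && j < i)))
              ;                                  /* (ticket[j], j) < (ticket[i], i) */
      }
  }

  void CSExit(int i) {
      atomic_store(&ticket[i], 0);
  }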
27
How do real systems do it?
  • Some real systems actually use algorithms such as
    the bakery algorithm
  • A good choice where busy-waiting isn't going to
    be super-inefficient
  • For example, if you have enough CPUs so each
    thread has a CPU of its own
  • Some systems disable interrupts briefly when
    calling CSEnter and CSExit
  • Some use hardware help atomic instructions

28
Critical Sections with Atomic Hardware Primitives
Shared:  int lock;   Initialize:  lock = false;

  Process i:
    while(test_and_set(&lock))
      continue;
    Critical Section
    lock = false;

Assumes that test_and_set is compiled to a
special hardware instruction that sets the lock
and returns the OLD value (true = locked, false =
unlocked)
Problem: does not satisfy liveness (bounded
waiting) (see book for correct solution)
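
C11 exposes this kind of instruction directly as atomic_flag; a minimal
spinlock sketch (still busy-waiting, and still without bounded waiting)
could look like:

  #include <stdatomic.h>

  static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear == unlocked */

  void CSEnter(void) {
      while (atomic_flag_test_and_set(&lock))   /* returns the OLD value */
          ;                                     /* spin until it was clear */
  }

  void CSExit(void) {
      atomic_flag_clear(&lock);                 /* lock = false */
  }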
29
Presenting critical sections to users
  • CSEnter and CSExit are possibilities
  • But more commonly, operating systems have offered
    a kind of locking primitive
  • We call these semaphores

30
Semaphores
  • Non-negative integer with atomic increment and
    decrement
  • Integer S that (besides init) can only be
    modified by
  • P(S) or S.wait(): decrement, or block if already 0
  • V(S) or S.signal(): increment and wake up a process,
    if any is waiting
  • These operations are atomic

Some systems use the operation wait() instead of
P(), and signal() instead of V()

  semaphore S;
  P(S):  while(S == 0) continue;
         S--;
  V(S):  S++;
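
The pseudocode above is only safe if the test and the decrement happen
atomically; one hedged way to sketch that in C is a compare-and-swap loop
(the spin_sem_t type and the busy-waiting itself are illustrative, since
real semaphores block instead of spinning):

  #include <stdatomic.h>

  typedef struct { atomic_int value; } spin_sem_t;     /* illustrative type */

  void P(spin_sem_t *s) {
      for (;;) {
          int v = atomic_load(&s->value);
          if (v > 0 && atomic_compare_exchange_weak(&s->value, &v, v - 1))
              return;                     /* decremented without losing a race */
      }
  }

  void V(spin_sem_t *s) {
      atomic_fetch_add(&s->value, 1);     /* a spinning P() will observe this */
  }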
31
Semaphores
  • Non-negative integer with atomic increment and
    decrement
  • Integer S that (besides init) can only be
    modified by
  • P(S) or S.wait(): decrement, or block if already 0
  • V(S) or S.signal(): increment and wake up a process,
    if any is waiting
  • Can also be expressed in terms of queues:

  semaphore S;
  P(S):  if (S == 0) {
           stop this thread, enqueue it on the wait list,
           and run something else;
         }
         S--;
  V(S):  S++;
         if (the wait list isn't empty) {
           dequeue and start one waiting process;
         }
32
Summary Implementing Semaphores
  • Can use
  • Multithread synchronization algorithms shown
    earlier
  • Could have a thread disable interrupts, put
    itself on a wait queue, then context switch to
    some other thread (an idle thread if needed)
  • The O/S designer makes these decisions and the
    end user shouldn't need to know

33
Semaphore Types
  • Counting Semaphores
  • Any integer
  • Used for synchronization
  • Binary Semaphores
  • Value is limited to 0 or 1
  • Used for mutual exclusion (mutex)

  Shared:  semaphore S;   Init:  S = 1;

  Process i:
    P(S);
    Critical Section
    V(S);
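
On a POSIX system the same mutual-exclusion pattern can be written with
sem_t; this sketch assumes an unnamed semaphore shared by the threads of
one process:

  #include <semaphore.h>

  static sem_t S;

  void init(void) { sem_init(&S, 0, 1); }   /* binary semaphore: S = 1 */

  void do_critical_work(void) {
      sem_wait(&S);                         /* P(S) */
      /* ... critical section ... */
      sem_post(&S);                         /* V(S) */
  }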
34
Classical Synchronization Problems
35
Paradigms for Threads to Share Data
  • We've looked at critical sections
  • Really, a form of locking
  • When one thread will access shared data, first it
    gets a kind of lock
  • This prevents other threads from accessing that
    data until the first one has finished
  • We saw that semaphores make it easy to implement
    critical sections

36
Reminder: Critical Section
  • Classic notation due to Dijkstra:
  • Semaphore mutex = 1;
  • CSEnter() = P(mutex)
  • CSExit() = V(mutex)
  • Other notation (more familiar in Java):
  • CSEnter() = mutex.wait()
  • CSExit() = mutex.signal()

37
Bounded Buffer
  • This style of shared access doesn't capture two
    very common models of sharing that we would also
    like to support
  • Bounded buffer:
  • Arises when two or more threads communicate, with
    some threads producing data that others
    consume.
  • Example: the preprocessor for a compiler produces a
    preprocessed source file that the parser of the
    compiler consumes

38
Readers and Writers
  • In this model, threads share data that some
    threads read and other threads write.
  • Instead of CSEnter and CSExit we want
  • StartRead/EndRead and StartWrite/EndWrite
  • Goal: allow multiple concurrent readers but only
    a single writer at a time, and if a writer is
    active, readers wait for it to finish

39
Producer-Consumer Problem
  • Start by imagining an unbounded (infinite) buffer
  • Producer process writes data to buffer
  • Writes to In and moves rightwards
  • Consumer process reads data from buffer
  • Reads from Out and moves rightwards
  • Should not try to consume if there is no data

(diagram: producer writes at In, consumer reads at Out)
Need an infinite buffer
40
Producer-Consumer Problem
  • Bounded buffer size N
  • Access entries 0 .. N-1, then wrap around to 0
    again
  • Producer process writes data to buffer
  • Must not write more than N items more than
    consumer ate
  • Consumer process reads data from buffer
  • Should not try to consume if there is no data

(diagram: circular buffer with slots 0, 1, ..., N-1 and In/Out pointers)
41
Producer-Consumer Problem
  • A number of applications:
  • Data from a bar-code reader consumed by a device
    driver
  • Data in a file you want to print, consumed by the
    printer spooler, which produces data consumed by
    the line printer device driver
  • Web server produces data consumed by the client's
    web browser
  • Example: so-called pipe ( | ) in Unix
  • > cat file | sort | uniq | more
  • > prog | sort
  • Thought questions: where's the bounded buffer?
  • How big should the buffer be, in an ideal
    world?

42
Producer-Consumer Problem
  • Solving with semaphores:
  • We'll use two kinds of semaphores
  • We'll use counters to track how much data is in
    the buffer
  • One counter counts as we add data and stops the
    producer if there are N objects in the buffer
  • A second counter counts as we remove data and
    stops a consumer if there are 0 in the buffer
  • Idea: since general semaphores can count for us,
    we don't need a separate counter variable
  • Why do we need a second kind of semaphore?
  • We'll also need a mutex semaphore

43
Producer-Consumer Problem
Shared:  Semaphores mutex, empty, full;
Init:  mutex = 1 /* mutual exclusion */, empty = N /* empty slots */, full = 0 /* full slots */
Producer:
  do {
    ...          // produce an item in nextp
    P(empty);  P(mutex);
    ...          // add nextp to the buffer
    V(mutex);  V(full);
  } while (true);
Consumer:
  do {
    P(full);  P(mutex);
    ...          // remove an item to nextc
    V(mutex);  V(empty);
    ...          // consume the item in nextc
  } while (true);
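
The same structure maps onto POSIX calls; in this sketch the buffer, N,
and the item type are placeholder choices, and a pthread mutex stands in
for the mutex semaphore:

  #include <semaphore.h>
  #include <pthread.h>

  #define N 16
  static int buf[N];
  static int in = 0, out = 0;
  static sem_t empty, full;                  /* counting semaphores */
  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

  void init(void) {
      sem_init(&empty, 0, N);                /* N empty slots */
      sem_init(&full,  0, 0);                /* 0 full slots  */
  }

  void produce(int item) {
      sem_wait(&empty);                      /* P(empty) */
      pthread_mutex_lock(&mutex);            /* P(mutex) */
      buf[in] = item;  in = (in + 1) % N;
      pthread_mutex_unlock(&mutex);          /* V(mutex) */
      sem_post(&full);                       /* V(full)  */
  }

  int consume(void) {
      sem_wait(&full);                       /* P(full)  */
      pthread_mutex_lock(&mutex);            /* P(mutex) */
      int item = buf[out];  out = (out + 1) % N;
      pthread_mutex_unlock(&mutex);          /* V(mutex) */
      sem_post(&empty);                      /* V(empty) */
      return item;
  }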
44
Readers-Writers Problem
  • Courtois et al 1971
  • Models access to a database
  • A reader is a thread that needs to look at the
    database but won't change it.
  • A writer is a thread that modifies the database
  • Example making an airline reservation
  • When you browse to look at flight schedules the
    web site is acting as a reader on your behalf
  • When you reserve a seat, the web site has to
    write into the database to make the reservation

45
Readers-Writers Problem
  • Many threads share an object in memory
  • Some write to it, some only read it
  • Only one writer can be active at a time
  • Any number of readers can be active
    simultaneously
  • Key insight: this generalizes the critical section
    concept
  • One issue we need to settle, to clarify the problem
    statement:
  • Suppose that a writer is active and a mixture of
    readers and writers now shows up. Who should get
    in next?
  • Or suppose that a writer is waiting and an
    endless stream of readers keeps showing up.
    Is it fair for them to become active?
  • We'll favor a kind of back-and-forth form of
    fairness:
  • Once a reader is waiting, readers will get in
    next.
  • If a writer is waiting, one writer will get in
    next.

46
Readers-Writers (Take 1)
Shared variables:  Semaphore mutex, wrl;  integer rcount;
Init:  mutex = 1, wrl = 1, rcount = 0;
Writer:
  do {
    P(wrl);
    ...  /* writing is performed */
    V(wrl);
  } while(TRUE);
Reader:
  do {
    P(mutex);  rcount++;
    if (rcount == 1) P(wrl);
    V(mutex);
    ...  /* reading is performed */
    P(mutex);  rcount--;
    if (rcount == 0) V(wrl);
    V(mutex);
  } while(TRUE);
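
A compact sketch of the same Take 1 with POSIX primitives (sem_t for wrl
and a pthread mutex guarding rcount; the function names simply mirror the
slide):

  #include <semaphore.h>
  #include <pthread.h>

  static sem_t wrl;                            /* init once: sem_init(&wrl, 0, 1) */
  static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
  static int rcount = 0;

  void StartWrite(void) { sem_wait(&wrl); }
  void EndWrite(void)   { sem_post(&wrl); }

  void StartRead(void) {
      pthread_mutex_lock(&mutex);
      if (++rcount == 1) sem_wait(&wrl);       /* first reader locks out writers */
      pthread_mutex_unlock(&mutex);
  }

  void EndRead(void) {
      pthread_mutex_lock(&mutex);
      if (--rcount == 0) sem_post(&wrl);       /* last reader lets writers in */
      pthread_mutex_unlock(&mutex);
  }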
47
Readers-Writers Notes
  • If there is a writer
  • First reader blocks on wrl
  • Other readers block on mutex
  • Once a reader is active, all readers get to go
    through
  • Which reader gets in first?
  • The last reader to exit signals a writer
  • If no writer, then readers can continue
  • If readers and writers waiting on wrl, and writer
    exits
  • Who gets to go in first?
  • Why doesn't a writer need to use mutex?

48
Does this work as we hoped?
  • If readers are active, no writer can enter
  • The writers wait doing a P(wrl)
  • While writer is active, nobody can enter
  • Any other reader or writer will wait
  • But back-and-forth switching is buggy
  • Any number of readers can enter in a row
  • Readers can starve writers
  • With semaphores, building a solution that has the
    desired back-and-forth behavior is really, really
    tricky!
  • We recommend that you try, but not too hard

49
Common programming errors
Whoever next calls P() will freeze up. The bug
might be confusing because that other process
could be perfectly correct code, yet that's the
one you'll see hung when you use the debugger to
look at its state!
A typo. Process j won't respect mutual exclusion
even if the other processes follow the rules
correctly. Worse still, once we've done two
extra V() operations this way, other processes
might get into the CS inappropriately!
A typo. Process i will get stuck (forever) the
second time it does the P() operation. Moreover,
every other process will freeze up too when
trying to enter the critical section!

  Process i:  P(S); CS; P(S);
  Process j:  V(S); CS; V(S);
  Process k:  P(S); CS;
50
More common mistakes
  • Conditional code that can break the
    normal top-to-bottom flow of code in the critical
    section
  • Often a result of someone trying to maintain a
    program, e.g. to fix a bug or add functionality
    in code written by someone else

  P(S);
  if(something or other)
    return;
  CS
  V(S);
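
One common defensive pattern, sketched here in the slide's own P/V
pseudocode, is to keep a single release point and let the condition guard
the work instead of returning early:

  P(S);
  if (!(something or other)) {
    CS                      /* all the conditional logic stays inside */
  }
  V(S);                     /* the single place the semaphore is released */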
51
What's wrong?

Shared:  Semaphores mutex, empty, full;
Init:  mutex = 1 /* mutual exclusion */, empty = N /* empty slots */, full = 0 /* full slots */
Producer:
  do {
    ...          // produce an item in nextp
    P(mutex);  P(empty);      // oops: mutex is taken before empty!
    ...          // add nextp to the buffer
    V(mutex);  V(full);
  } while (true);
Consumer:
  do {
    P(full);  P(mutex);
    ...          // remove an item to nextc
    V(mutex);  V(empty);
    ...          // consume the item in nextc
  } while (true);

Oops! Even if you do the correct operations, the
order in which you do semaphore operations can
have an incredible impact on correctness.
What if the buffer is full?
52
Language Support for Concurrency
53
Revisiting semaphores!
  • Semaphores are very low-level primitives
  • Users could easily make small errors
  • Similar to programming in assembly language
  • Small error brings system to grinding halt
  • Very difficult to debug
  • Also, we seem to be using them in two ways
  • For mutual exclusion, the real abstraction is a
    critical section
  • But the bounded buffer example illustrates
    something different, where threads communicate
    using semaphores
  • Simplification: provide concurrency support in the
    compiler
  • Monitors

54
Monitors
  • Hoare 1974
  • Abstract Data Type for handling/defining shared
    resources
  • Comprises
  • Shared Private Data
  • The resource
  • Cannot be accessed from outside
  • Procedures that operate on the data
  • Gateway to the resource
  • Can only act on data local to the monitor
  • Synchronization primitives
  • Among threads that access the procedures

55
Monitor Semantics
  • Monitors guarantee mutual exclusion
  • Only one thread can execute monitor procedure at
    any time
  • in the monitor
  • If second thread invokes monitor procedure at
    that time
  • It will block and wait for entry to the monitor
  • ⇒ Need for a wait queue
  • If thread within a monitor blocks, another can
    enter
  • Effect on parallelism?

56
Structure of a Monitor
Monitor monitor_name {
  // shared variable declarations
  procedure P1(. . .) { . . . }
  procedure P2(. . .) { . . . }
  . . .
  procedure PN(. . .) { . . . }
  initialization_code(. . .) { . . . }
}

For example:
Monitor stack {
  int top;
  void push(any_t *) { . . . }
  any_t * pop() { . . . }
  initialization_code() { . . . }
}
only one operation can modify the stack at a time
57
Synchronization Using Monitors
  • Defines condition variables:
  • condition x;
  • Provides a mechanism to wait for events
  • (is the resource available? are there any writers?)
  • 3 atomic operations on condition variables:
  • x.wait(): release the monitor lock, sleep until woken
    up
  • ⇒ condition variables have waiting queues too
  • x.notify(): wake one process waiting on the condition
    (if there is one)
  • No history associated with the signal
  • x.broadcast(): wake all processes waiting on the
    condition
  • Useful for a resource manager
  • Condition variables are not Boolean
  • "if (x) then ..." does not make sense

58
Producer Consumer using Monitors
Monitor Producer_Consumer {
  any_t buf[N];
  int n = 0, tail = 0, head = 0;
  condition not_empty, not_full;
  void put(char ch) {
    if(n == N) wait(not_full);
    buf[head % N] = ch;  head++;  n++;
    signal(not_empty);
  }
  char get() {
    if(n == 0) wait(not_empty);
    ch = buf[tail % N];  tail++;  n--;
    signal(not_full);
    return ch;
  }
}

What if no thread is waiting when signal is
called?
Signal is a no-op if nobody is waiting. This
is very different from what happens when you call
V() on a semaphore: semaphores have a
memory of how many times V() was called!
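
The monitor maps almost directly onto pthreads: the implicit monitor lock
becomes a pthread_mutex_t and each condition becomes a pthread_cond_t. One
caveat not on the slide: pthread_cond_wait can wake up spuriously
(Mesa-style semantics), so this sketch rechecks the condition in a while
loop rather than an if:

  #include <pthread.h>

  #define N 16
  static char buf[N];
  static int n = 0, head = 0, tail = 0;
  static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;        /* the monitor lock */
  static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
  static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

  void put(char ch) {
      pthread_mutex_lock(&m);
      while (n == N)
          pthread_cond_wait(&not_full, &m);   /* releases m while sleeping */
      buf[head % N] = ch;  head++;  n++;
      pthread_cond_signal(&not_empty);
      pthread_mutex_unlock(&m);
  }

  char get(void) {
      pthread_mutex_lock(&m);
      while (n == 0)
          pthread_cond_wait(&not_empty, &m);
      char ch = buf[tail % N];  tail++;  n--;
      pthread_cond_signal(&not_full);
      pthread_mutex_unlock(&m);
      return ch;
  }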
59
Types of wait queues
  • Monitors have several kinds of wait queues
  • Condition variable has a queue of threads
    waiting on the associated condition
  • Thread goes to the end of the queue
  • Entry to the monitor has a queue of threads
    waiting to obtain mutual exclusion so they can
    enter
  • Again, a new arrival goes to the end of the queue
  • So-called urgent queue: threads that were just
    woken up using signal().
  • New arrival normally goes to the front of this
    queue

60
Producer Consumer using Monitors
Monitor Producer_Consumer {
  condition not_full;  /* other vars */
  condition not_empty;
  void put(char ch) { wait(not_full);  . . .  signal(not_empty); }
  char get()        { . . . }
}
61
Condition Variables != Semaphores
  • Condition variables != semaphores
  • Access to the monitor is controlled by a lock
  • wait() blocks the calling thread, and gives up the lock
  • To call wait, a thread has to be in the monitor, hence
    holding the lock
  • Semaphore P() blocks the thread only if the value is 0
  • signal() causes a waiting thread to wake up
  • If there is no waiting thread, the signal is lost
  • V() increments the value, so future threads need not
    block on P()
  • Condition variables have no history
  • However they can be used to implement each other

62
Language Support
  • Can be embedded in a programming language
  • Synchronization code added by the compiler, enforced
    at runtime
  • Mesa/Cedar from Xerox PARC
  • Java: synchronized, wait, notify, notifyAll
  • C#: lock, Wait (with timeouts), Pulse, PulseAll
  • Monitors are easier and safer than semaphores
  • Compiler can check; the lock is implicit (cannot be
    forgotten)
  • Why not put everything in the monitor?

63
Monitor Solutions to Classical Problems
64
Producer Consumer using Monitors
Monitor Producer_Consumer {
  any_t buf[N];
  int n = 0, tail = 0, head = 0;
  condition not_empty, not_full;
  void put(char ch) {
    if(n == N) wait(not_full);
    buf[head % N] = ch;  head++;  n++;
    signal(not_empty);
  }
  char get() {
    if(n == 0) wait(not_empty);
    ch = buf[tail % N];  tail++;  n--;
    signal(not_full);
    return ch;
  }
}
65
Reminder: Subtle aspects
  • Notice that when a thread calls wait(), if it
    blocks it also automatically releases the
    monitor's mutual exclusion lock
  • This is an elegant solution to an issue seen with
    semaphores:
  • The caller has mutual exclusion and wants to call
    P(not_empty), but this call might block
  • If we just do the call, the solution deadlocks
  • But if we first call V(mutex), we get a race
    condition!

66
Readers and Writers
Monitor ReadersNWriters {
  int WaitingWriters, WaitingReaders, NReaders, NWriters;
  Condition CanRead, CanWrite;

  Void BeginWrite() {
    if(NWriters == 1 || NReaders > 0) {
      ++WaitingWriters;
      wait(CanWrite);
      --WaitingWriters;
    }
    NWriters = 1;
  }
  Void EndWrite() {
    NWriters = 0;
    if(WaitingReaders) Signal(CanRead);
    else               Signal(CanWrite);
  }
  Void BeginRead() {
    if(NWriters == 1 || WaitingWriters > 0) {
      ++WaitingReaders;
      Wait(CanRead);
      --WaitingReaders;
    }
    ++NReaders;
    Signal(CanRead);
  }
  Void EndRead() {
    if(--NReaders == 0) Signal(CanWrite);
  }
}
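
Real thread libraries package this pattern directly; with POSIX, a
reader-writer lock can be used roughly as below (how fairly it arbitrates
between waiting readers and writers is then up to the implementation, not
this code):

  #include <pthread.h>

  static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;

  void reader(void) {
      pthread_rwlock_rdlock(&rw);     /* many readers may hold this at once */
      /* ... read the shared data ... */
      pthread_rwlock_unlock(&rw);
  }

  void writer(void) {
      pthread_rwlock_wrlock(&rw);     /* exclusive: no readers, no other writers */
      /* ... modify the shared data ... */
      pthread_rwlock_unlock(&rw);
  }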
67
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
68
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
69
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
70
Understanding the Solution
  • A writer can enter if there are no other active
    writers and no readers are currently reading

71
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
72
Understanding the Solution
  • A reader can enter if
  • There are no writers active or waiting
  • So we can have many readers active all at once
  • Otherwise, a reader waits (maybe many do)

73
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
74
Understanding the Solution
  • When a writer finishes, it checks to see if any
    readers are waiting
  • If so, it lets one of them enter
  • That one will let the next one enter, etc
  • Similarly, when a reader finishes, if it was the
    last reader, it lets a writer in (if any is there)

75
Readers and Writers
(same ReadersNWriters monitor code as on slide 66)
76
Understanding the Solution
  • It wants to be fair
  • If a writer is waiting, readers queue up
  • If a reader (or another writer) is active or
    waiting, writers queue up
  • this is mostly fair, although once it lets a
    reader in, it lets ALL waiting readers in all at
    once, even if some showed up after other
    waiting writers

77
Subtle aspects?
  • The code is simplified because we know there
    can only be one writer at a time
  • It also takes advantage of the fact that signal
    is a no-op if nobody is waiting
  • Where do we see these ideas used?
  • In the EndWrite code (it signals CanWrite
    without checking for waiting writers)
  • In the EndRead code (same thing)
  • In BeginRead (it signals CanRead at the end)

78
Comparison with Semaphores
  • With semaphores we never did have a fair
    solution of this sort
  • In fact it can be done, but the code is quite
    tricky
  • Here the straightforward solution works in the
    desired way!
  • Monitors are less error-prone and also easier to
    understand
  • C# and Java primitives should typically be used
    in this manner, too

79
To conclude
  • Race conditions are a pain!
  • We studied several ways to handle them
  • Each has its own pros and cons
  • Support in Java and C# has simplified writing
    multithreaded applications
  • Some new program analysis tools automate checking
    to make sure your code is using synchronization
    correctly
  • The hard part for these is to figure out what
    "correct" means!
  • None of these tools would make sense of the
    bounded buffer (those in the business sometimes
    call it the unbounded bugger)