1
Open Distributed Systems - Lecture 5
  • Mark Baker
  • School of Computer Science
  • University of Portsmouth
  • Tel +44 (1705) 845826
  • Email Mark.Baker@computer.org
  • URL http://www.dcs.port.ac.uk/mab/

2
Contents - Concurrency
  • Semaphores and Shared Memory.
  • Process Co-operation.
  • Mutual Exclusion.
  • The Ornamental Gardens Problem.
  • Critical Sections.
  • Producer - Consumer Problem.
  • Dining Philosophers.
  • Locking/optimistic concurrency/Timestamps.

3
Semaphores and Shared Memory
  • Semaphores are not used to exchange data between
    processes; instead, they are used to synchronise
    two or more processes.
  • They prevent two processes from simultaneously
    accessing a shared resource.
  • Consider a semaphore as an integer value variable
    that is a resource counter.

4
Semaphores and Shared Memory
  • The value of the semaphore indicates whether the
    resource is available or not.
  • In its simplest (binary) form, a semaphore has
    only two values: 0 and 1.
  • Since semaphores are used to synchronise access
    to shared resources, the semaphore value must be
    stored in the kernel, as shown on the next slide.

5
Semaphores and Shared Memory
6
Process Co-operation
  • In single-CPU systems, critical regions, mutual
    exclusion, and other synchronisation problems
    are generally solved using methods such as
    semaphores and monitors.
  • These methods are not well suited to use in
    distributed systems because they generally rely
    on the existence of shared memory.

7
Process Co-operation
  • Two processes that are interacting using
    semaphores must both be able to share the
    semaphore by having it stored in the kernel, and
    then execute a system call to access it.
  • If, however, the processes are running on
    different machines, this method no longer works,
    and other techniques are needed.
  • In distributed systems it is not even easy to
    determine whether one event occurred before
    another...

8
Mutual Exclusion
  • Systems involving multiple processes are often
    most easily programmed using critical regions.
  • When a process has to read or update certain
    shared data structures, it first enters a
    critical region to achieve mutual exclusion and
    ensure that no other process will use the shared
    data structures at the same time.
  • In single-processor systems, critical regions are
    protected using semaphores, monitors and similar
    constructs.
  • We will look at a few examples of how critical
    regions and mutual exclusion can be implemented
    in distributed systems.

9
Mutual Exclusion
  • Now consider the problem of threads working
    (processing and updating) some data that is
    shared.
  • In particular, think about the problem of
    interference and its solution using mutually
    exclusive access.

10
The Ornamental Gardens Problem
  • A large Ornamental Garden is open to members of
    the public who can enter through either one of
    two turnstiles.
  • The management want to determine how many people
    there are in the garden at any one time.
  • They require a computer system to provide this
    information.

11
The Ornamental Gardens problem
  • We will represent each turnstile by a thread and
    provide an object to store the count.
  • Each turnstile displays the number of people it
    has let into the park.
  • The concurrent program consists of two concurrent
    threads and a shared counter object.

12
Ornamental garden Program
The Turnstile thread simulates the periodic
arrival of a visitor to the garden every second
by sleeping for a second and then invoking the
increment() method of the counter object.
13
Ornamental Garden - pseudo code
main() {
  int counter;
  Thread turnstile() {                // Thread definition
    for (int i = 0; i < 20; i++)
      counter = counter + 1;
  }
  Turnstile turnstileE, turnstileW;   // Thread declaration
  turnstileW.start();
  turnstileE.start();
}

14
Ornamental Garden - Count Object
class Counter {
  int value_ = 0;
  public void increment() {
    int temp = value_;       // read
    Simulate.interrupt();
    temp++;                  // add 1
    value_ = temp;           // write
  }
}
  • The simulated interrupt simply calls yield()
    randomly to force a thread switch - this is to
    make sure the demonstration works on all JVMs.

15
The Ornamental Gardens problem
  • In this example program, each turnstile is
    represented by a thread that updates a shared
    counter object.
  • This version does not work correctly: as can be
    observed, increments to the counter are lost, so
    the total number of people in the garden is not
    the sum of the people who entered through the
    turnstiles.

16
Ornamental garden program - display
After the East and West turnstile threads have
each incremented their counter 21 times, the
garden's people counter is not the sum of the
counts displayed. Counter increments have been lost.
Why?
17
Interference
  • The problem in the Ornamental Gardens program is
    that we have permitted the increment action by
    the West Turnstile to overlap with the increment
    action of East.
  • GARDEN = EAST || WEST
  • where
  • EAST = (increment → EAST)
  • WEST = (increment → WEST)
  • and
  • increment = (read → add1 → write)

18
concurrent method activation
Java method activations are not atomic - thread
objects east and west may be executing the code
for the increment method at the same time.
(Diagram: thread objects east and west, each with its
own program counter, executing the shared increment
code: read value, write value + 1.)
19
Ornamental garden Model
Process VAR models read and write access to the
shared counter value. Increment is modelled
inside TURNSTILE since Java method activations
are not atomic, i.e. thread objects east and west
may interleave their read and write actions.
20
Interference
  • Consequently the following is a legal execution
    trace
  • Wread → Eread → Wadd1 → Wwrite → Eadd1 → Ewrite
  • It can easily be seen that this results in losing
    an increment on counter.
  • For correctness, we require that the increment
    actions do not overlap.
  • forall e: (Winc → Einc) or (Einc → Winc)

21
Interference and Mutual Exclusion
Destructive update, caused by the arbitrary
interleaving of read and write actions, is termed
interference.
Interference bugs are extremely difficult to
locate. The general solution is to give methods
mutually exclusive access to shared objects.
Mutual exclusion can be modeled as atomic actions.
22
Critical Sections
  • We can avoid the problem of interference by
    making actions indivisible or atomic.
  • The piece of code that must appear (from the
    point of view of some other process) as an atomic
    action is called a Critical Section (CS).
  • If processes P and Q both contain critical
    sections whose overlapped executions could
    interfere with one another, then we need to
    ensure these sections are executed under mutual
    exclusion.

23
Critical Sections
  • forall e: (CSiP → CSjQ) or (CSjQ → CSiP)
  • For all pairs of processes P and Q and
    repetitions of their critical sections i and j.
  • We say that two operations are serialised when
    they may be executed in either order but without
    overlap.

24
Critical Sections in Java
  • A method can be made a critical section in Java
    by prefixing its definition with the keyword
    synchronized.
  • The corrected code for the Counter class becomes:

class Counter {
  int value_ = 0;
  public synchronized void increment() {
    int temp = value_;       // read
    Simulate.interrupt();
    temp++;                  // add 1
    value_ = temp;           // write
  }
}

25
Java Synchronized Statement
  • Java associates a lock with every object.
  • Consequently in the previous example the Java
    compiler inserts code to acquire the lock before
    executing the body of the synchronized method and
    code to release the lock before the method
    returns.
  • Access to an object may also be made mutually
    exclusive by using the synchronized statement.

26
Java Synchronized Statement
  • For example, an alternative (but less elegant)
    way to correct the example would be to modify the
    Turnstile.run() method

public void run() {
  while (true) {
    synchronized (people_) { people_.increment(); }
  }
}
27
Mutual Exclusion - Summary
  • Concepts
  • Interference.
  • Critical section.
  • Mutual Exclusion.
  • Practice
  • Synchronized methods.
  • Synchronized statement.

28
Monitors
  • A monitor is a language construct which provides
    automatic mutual exclusion to the variables it
    encapsulates.
  • Variables may only be accessed via monitor access
    procedures which are critical sections.
  • Consequently, only a single thread may be
    executing inside a monitor at any one time.

29
Monitors in Java
  • Any object may be a monitor in Java by making all
    its methods synchronized and its data attributes
    private or protected.
  • Consequently, the data encapsulated by the object
    can only be accessed via its methods.
  • Because this access is synchronized, mutual
    exclusion is enforced.

30
Example Monitor in Java
class Counter {
  private long value_ = 0;
  public synchronized void inc()   { ++value_; }
  public synchronized void dec()   { --value_; }
  public synchronized long value() { return value_; }
}
31
Condition Synchronization
  • In addition to mutual exclusion, monitors need to
    handle the more general problem of inter-thread
    synchronization.
  • Hoare, in his classic paper on monitors, proposed
    condition variables, which are simply FIFO queues
    of suspended threads.
  • Hoare, C.A.R. (1974), Monitors: An Operating
    System Structuring Concept, Communications of the
    ACM, 17(10), 549-557.

32
Condition Synchronization
  • wait(c) - Suspend execution of the calling thread
    and place it on condition queue c.
  • signal(c) - Resume execution of the thread at the
    head of c.
  • They are called condition variables (queues)
    since they are used to signal that some condition
    within the monitor holds, e.g. value_ > 0 or
    buffer_empty.

33
Synchronization and Mutual Exclusion
  • Although only one thread may be executing inside
    a monitor at a time, a wait operation - which
    suspends a thread inside the monitor - will allow
    another thread to enter the monitor.
  • Waiting threads effectively exit the monitor.
  • If this was not the case and a suspended thread
    retained the monitor lock then no other thread
    could enter the monitor - the waiting thread
    could never be resumed.

34
Synchronization and Mutual Exclusion
35
Condition Synchronization in Java
  • Java provides only one condition queue per
    monitor.
  • The following methods are provided by class
    Object (the base class).
  • public final void notify() - Wakes up a single
    thread that is waiting on this object's monitor
    queue.
  • public final void notifyAll() - Wakes up all
    threads that are waiting on this object's monitor
    queue.

36
Condition synchronization in Java
  • public final void wait() throws
    InterruptedException - Waits to be notified by
    another thread; when notified, the thread must
    wait to reacquire the monitor before resuming
    execution.
  • These operations fail if called by a thread which
    does not currently own the monitor, i.e. has not
    acquired the monitor lock by executing a
    synchronized method or statement.
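  • As a small illustration (not from the slides; the
    Gate class and its field are invented here), the
    usual idiom is to call wait() inside a loop in a
    synchronized method, re-testing the condition each
    time the thread reacquires the monitor:

class Gate {
  private boolean open_ = false;

  // Block the caller until the gate has been opened.
  public synchronized void pass() throws InterruptedException {
    while (!open_)      // re-test after every wake-up
      wait();           // releases the monitor lock while waiting
  }

  // Open the gate and wake up all waiting threads.
  public synchronized void open() {
    open_ = true;
    notifyAll();
  }
}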

37
Semaphores
  • Semaphores are the synchronization datatype
    defined by Dijkstra - Dijkstra, E.W. (1968),
    Co-operating Sequential Processes, in Programming
    Languages, Academic Press.
  • A semaphore is based on an integer variable that
    counts the wakeups saved for future use.
  • A semaphore may have the value 0, indicating that
    no wakeups are saved, or some positive value if
    one or more wakeups are pending.

38
Semaphores
  • Dijkstra proposed having two operations, DOWN and
    UP - generalisations of SLEEP and WAKEUP.
  • WAIT: if the value of the semaphore is > 0 then
    decrement it and allow the process to continue;
    otherwise suspend the process (noting that it is
    blocked on this semaphore).
  • SIGNAL: if there are no processes waiting on the
    semaphore then increment it; otherwise free one
    process, which continues at the instruction after
    its WAIT instruction.
  • Semaphore S;
  • S.down()  // when S > 0, decrement S
  • S.up()    // increment S
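  • The Semaphore class used in the demonstration on
    the next slide is not listed in this transcript; a
    plausible sketch, written as a Java monitor in the
    style of this lecture (assuming down() may
    propagate InterruptedException to its caller), is:

class Semaphore {
  private int value_;

  Semaphore(int initial) { value_ = initial; }

  // WAIT / down(): block while the value is 0, then decrement.
  public synchronized void down() throws InterruptedException {
    while (value_ == 0)
      wait();
    --value_;
  }

  // SIGNAL / up(): increment and wake one waiting thread.
  public synchronized void up() {
    ++value_;
    notify();
  }
}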

39
Signalling using Semaphores
  • The demonstration program has two threads: Thread
    A signals the semaphore using up() and Thread B
    waits for the semaphore using down().

class Signaller implements Runnable {
  Semaphore sema_;
  Signaller(Semaphore s) { sema_ = s; }
  public void run() {
    while (true) {
      for (int i = 0; i < 60; i++) DisplayThread.rotate();
      sema_.up();                        // signal the semaphore
    }
  }
}
class Waiter implements Runnable {
  Semaphore sema_;
  Waiter(Semaphore s) { sema_ = s; }
  public void run() {
    try {
      while (true) {
        sema_.down();                    // wait for the semaphore
        for (int i = 0; i < 10; i++) DisplayThread.rotate();
      }
    } catch (InterruptedException e) {}  // stop if interrupted
  }
}
40
Producer - Consumer Problem
  • As an example of how these primitives can be
    used, consider the Producer-Consumer Problem
    (also known as the bounded-buffer problem).
  • Two processes share a common, fixed-size buffer.
  • One of them, the producer, puts information into
    the buffer, and the other, the consumer, takes it
    out.
  • Trouble arises when the producer wants to put a
    new item in the buffer but it is already full.

41
Producer - Consumer Problem
  • The solution is to put the producer to sleep, to
    be awakened when the consumer has removed one or
    more items.
  • Similarly, if the consumer wants to remove an
    item from the buffer and sees that the buffer is
    empty, it goes to sleep until the producer puts
    something in the buffer and wakes it up.
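  • The slides do not list the buffer code; a minimal
    bounded-buffer sketch as a Java monitor (class and
    method names are illustrative) is shown below -
    put() suspends the producer while the buffer is
    full, and get() suspends the consumer while it is
    empty:

class BoundedBuffer {
  private final Object[] buf_;
  private int count_ = 0, in_ = 0, out_ = 0;

  BoundedBuffer(int size) { buf_ = new Object[size]; }

  // Producer: wait while the buffer is full, then insert an item.
  public synchronized void put(Object item) throws InterruptedException {
    while (count_ == buf_.length) wait();
    buf_[in_] = item;
    in_ = (in_ + 1) % buf_.length;
    ++count_;
    notifyAll();                 // wake a consumer waiting on "empty"
  }

  // Consumer: wait while the buffer is empty, then remove an item.
  public synchronized Object get() throws InterruptedException {
    while (count_ == 0) wait();
    Object item = buf_[out_];
    out_ = (out_ + 1) % buf_.length;
    --count_;
    notifyAll();                 // wake a producer waiting on "full"
    return item;
  }
}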

42
Producer - Consumer Problem
43
Correctness of Concurrent Programs
  • A concurrent program must satisfy two classes of
    property: safety and liveness.
  • Safety properties assert that nothing bad will
    ever happen during an execution (that is, that
    the program will never enter a bad state).
  • Liveness properties assert that something good
    will eventually happen during execution.

44
Correctness of Concurrent Programs
  • Liveness is concerned with making progress in a
    program - situations which prevent progress are
    livelock and starvation.
  • Deadlock can be considered to impact both safety
    and liveness.
  • Liveness properties are in general more difficult
    to prove than safety properties, since they
    depend on the scheduling policy in use and
    require reasoning about infinite sequences of
    actions.

45
Readers - Writers
  • Through synchronized, Java provides exclusive
    locking.
  • However, in many programs it will be correct for
    a number of threads which do not modify a shared
    resource to access that resource concurrently
    (Readers).
  • Threads which update the state of the resource
    (Writers) will still require exclusive access.
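  • The slides give no code for this; one hedged
    sketch of a readers-writers lock as a Java monitor
    (names are illustrative) is:

class ReadWriteLock {
  private int readers_ = 0;
  private boolean writing_ = false;

  // Readers may enter as long as no writer holds the lock.
  public synchronized void acquireRead() throws InterruptedException {
    while (writing_) wait();
    ++readers_;
  }

  public synchronized void releaseRead() {
    if (--readers_ == 0) notifyAll();   // last reader out wakes writers
  }

  // A writer needs exclusive access: no readers and no other writer.
  public synchronized void acquireWrite() throws InterruptedException {
    while (writing_ || readers_ > 0) wait();
    writing_ = true;
  }

  public synchronized void releaseWrite() {
    writing_ = false;
    notifyAll();
  }
}

  • Note that this simple version favours readers: a
    steady stream of readers can starve a writer -
    exactly the kind of liveness problem discussed
    earlier.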

46
Dining Philosophers
  • Dining Philosophers Problem: Five philosophers
    sit around a circular table.
  • Each philosopher spends his life alternately
    thinking and eating.
  • In the centre of the table is a large plate of
    noodles.
  • A philosopher needs two chopsticks to eat a
    helping of noodles.

47
Dining Philosophers
  • Unfortunately, as philosophy is not as well paid
    as computing, the philosophers can only afford
    five chopsticks.
  • One chopstick is placed between each pair of
    philosophers and they agree that each will only
    use the chopsticks to his immediate right and left.

48
Dining Philosophers
49
Dining Philosophers
50
Dining Philosophers - Specification
  • DINERS = PHIL0 || ... || PHIL4 || CHOP0 || ... || CHOP4
  • PHILi = ( i.think
  •     → i.get.i → i.get.(i⊖1)
  •     → i.eat
  •     → i.put.i → i.put.(i⊖1)
  •     → PHILi )
  • CHOPi = ( ( i.get.i → i.put.i
  •     | (i⊕1).get.i → (i⊕1).put.i ) → CHOPi )
  • where
  •     i⊖1 = (i + 5 - 1) mod 5
  •     i⊕1 = (i + 1) mod 5

51
Philosopher implementation
  • Each philosopher is a process with its own thread
    of control.
  • CHOP is implemented by the monitor class
    Chopstick - the choice in CHOP is provided by the
    get() operation, which allocates the chopstick to
    the first philosopher to request it and blocks a
    neighbour's request until the chopstick is
    released by put().
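  • The Chopstick class itself is not listed in this
    transcript; a minimal sketch consistent with the
    description above is:

class Chopstick {
  private boolean taken_ = false;

  // Allocate the chopstick to the first philosopher to request it;
  // block a neighbour's request until it is released by put().
  public synchronized void get() throws InterruptedException {
    while (taken_) wait();
    taken_ = true;
  }

  public synchronized void put() {
    taken_ = false;
    notify();
  }
}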

52
Philosopher implementation
  • The notion of "first" is realised by the
    synchronization lock, which ensures that only one
    thread can be executing in get() at any one time.
  • What happens when eating and thinking time is
    reduced in the demonstration program?

53
Deadlock
  • The Dining Philosophers program deadlocks when
    every philosopher has obtained his/her right
    chopstick.
  • No philosopher can obtain his/her left chopstick
    and so no progress can be made - the system is
    deadlocked.
  • A thread/process is said to be in a deadlock
    state if it is waiting for a condition that will
    never become true (e.g. left chopstick becoming
    free).

54
Deadlock
  • Note that the potential for deadlock exists
    independently of the thinking and eating times.
  • However, the probability of deadlock occurring
    increases as these times are reduced.

55
Deadlock: necessary and sufficient conditions
  • Serially reusable resources: the processes
    involved share resources which they use under
    mutual exclusion.
  • Incremental acquisition: processes hold on to
    resources already allocated to them while waiting
    to acquire additional resources.
  • No pre-emption: once acquired by a process,
    resources cannot be pre-empted (forcibly
    withdrawn) but are only released voluntarily.
  • Wait-for cycle: a circular chain (or cycle) of
    processes exists such that each process holds a
    resource which its successor in the cycle is
    waiting to acquire.

56
Wait-for cycle
(Diagram: a wait-for cycle of five processes.
A holds a resource and awaits B; B awaits C;
C awaits D; D awaits E; E awaits A.)
57
Deadlock
  • Deadlock can be avoided in the Dining
    Philosophers system by making one of the
    philosophers pick up his/her chopsticks in the
    reverse order (i.e. left before right).
  • Can you suggest alternative strategies for
    avoiding deadlock in the dining philosophers
    program?

PHIL0 = (0.think → 0.get.4 → 0.get.0 →
         0.eat → 0.put.4 → 0.put.0 → PHIL0)
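  • In Java terms, using the Chopstick sketch given
    earlier, the fix amounts to constructing one
    philosopher with its two chopsticks in the
    opposite order (a hedged sketch; the class layout
    is invented for illustration):

class Philosopher implements Runnable {
  private final Chopstick first_, second_;

  // Pass the chopsticks in the order they should be picked up;
  // philosopher 0 is constructed with the reverse order, which
  // breaks the wait-for cycle.
  Philosopher(Chopstick first, Chopstick second) {
    first_ = first;
    second_ = second;
  }

  public void run() {
    try {
      while (true) {
        // think() ...
        first_.get();
        second_.get();
        // eat() ...
        second_.put();
        first_.put();
      }
    } catch (InterruptedException e) {}  // stop if interrupted
  }
}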
58
Distributed Concurrency Control
  • We have already covered the simple case of
    single-system mutual exclusion.
  • Now consider concurrency control in a distributed
    system...
  • Techniques Used
  • Locking.
  • Optimistic Concurrency Control.
  • Timestamps.

59
Locking
  • The oldest and most widely used concurrency
    control algorithm is locking.
  • In its simplest form, when a process needs to
    read or write data as part of a transaction, it
    first locks the data.
  • Locking can be done using a single centralised
    lock manager, or with a local lock manager on
    each machine.

60
Locking
  • In both cases the lock manager maintains a list
    of locked files and rejects all attempts to lock
    files that are already locked by another process.
  • Since well-behaved processes do not attempt to
    access data before it has been locked, setting a
    lock on the data keeps everyone else away from it
    and ensures that it will not change during the
    lifetime of the transaction.

61
Locking
  • Locks are normally acquired and released by the
    transaction system and do not require the
    intervention of the programmer.
  • This system is very restrictive and can be
    improved greatly by distinguishing between read
    and write locks.
  • Read locks make sure the data does not change,
    but do not exclude other processes from reading
    the data.
  • Write locks are exclusive!

62
Locking
  • One issue with locking is the granularity of the
    lock - is it an entire file, a record, a memory
    page, an object, or a single data item?
  • Obviously the finer the granularity, the more
    precise the lock and the greater the potential
    for parallelism.
  • On the other hand, fine granularity requires more
    locks, is more expensive to support and is more
    likely to lead to deadlocks.

63
Deadlocks
  • Locking techniques can lead to deadlocks.
  • For example, if two processes acquire locks and
    then each needs to acquire the other's lock
    before it can release its own, a state of
    deadlock will occur.
  • Deadlock can be overcome by some type of queuing
    scheme or by having timeouts and aborts on locks.

64
Conditions for Deadlock
  • A resource request can be refused:
  • The system's concurrency control policy is such
    that objects can be acquired for exclusive use or
    for some specific use.
  • It is possible for a process to be refused access
    to an object on the grounds that some other
    process has acquired it for exclusive use.
  • An example is a process that requests exclusive
    access to an object in order to write to it but
    is refused because the object is currently locked
    for shared access.

65
Conditions for Deadlock
  • Hold while waiting:
  • A process is allowed to hold objects while
    requesting further objects. The assumption is
    that the process will wait for the resource until
    it becomes available.
  • No pre-emption:
  • Objects cannot be recovered from processes. A
    process may acquire an object, use it, and
    release it.
  • Circular wait:
  • A cycle of processes exists such that each
    process holds an object that is being requested
    by the next process in the cycle and that request
    has been refused.

66
Optimistic Concurrency Control
  • The idea behind this technique is simple - just
    go ahead and do whatever you want to, without
    paying attention to what anybody else is doing.
    If there is a problem, worry about it later.
  • In practice conflicts are rare, so most of the
    time it works OK.

67
Optimistic Concurrency Control
  • This technique manages conflicts by keeping track
    of which files have been read and written.
  • At the point of committing, the transaction
    checks whether any of its files have been changed
    by other transactions since it started.
  • If they have, the transaction is aborted; if they
    have not, it is committed.
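  • A hedged sketch of this commit-time validation
    (the class names and the VersionedStore interface
    are invented for illustration; real systems track
    versions per file or per record):

import java.util.HashMap;
import java.util.Map;

interface VersionedStore {
  long versionOf(String file);
  String contents(String file);
  void update(String file, String data);   // also advances the version
}

class OptimisticTransaction {
  private final Map<String, Long> readVersions_ = new HashMap<>();
  private final Map<String, String> writes_ = new HashMap<>();
  private final VersionedStore store_;

  OptimisticTransaction(VersionedStore store) { store_ = store; }

  String read(String file) {
    readVersions_.put(file, store_.versionOf(file));  // remember what we saw
    return store_.contents(file);
  }

  void write(String file, String data) {
    writes_.put(file, data);                // buffer writes until commit
  }

  // Validate: abort if any file we read has changed since we read it.
  boolean commit() {
    synchronized (store_) {
      for (Map.Entry<String, Long> e : readVersions_.entrySet())
        if (store_.versionOf(e.getKey()) != e.getValue())
          return false;                     // conflict - caller must re-run
      for (Map.Entry<String, String> e : writes_.entrySet())
        store_.update(e.getKey(), e.getValue());
      return true;
    }
  }
}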

68
Optimistic Concurrency Control
  • The advantages of this technique are:
  • deadlock free.
  • maximizes parallelism, as there are no locks.
  • The disadvantages are:
  • The technique sometimes fails, in which case the
    transaction needs to be re-run.
  • Under conditions of heavy load, the probability
    of failure may go up substantially, making this
    technique a poor choice.

69
Timestamps
  • A completely different approach to concurrency
    control is to assign each transaction a timestamp
    at the moment that it begins.
  • Every file in the system has a read and write
    timestamp.
  • It will normally be the case that when a process
    tries to access a file, the file's read and write
    timestamps will be lower (older) than the
    transaction's timestamp.

70
Timestamps
  • This ordering means that the transactions are
    being processed in the proper order, so
    everything is OK.
  • When the ordering is incorrect, this means that a
    transaction that started later has managed to
    access, update, and commit.
  • This situation means that the current transaction
    is late (working on an older copy of the data)
    and must be aborted.
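  • A hedged sketch of this check in Java (the names
    and the per-file timestamp fields are invented for
    illustration; a transaction that is refused must
    abort and restart with a new timestamp):

class TimestampedFile {
  private long readTS_ = 0;    // timestamp of the youngest transaction that read
  private long writeTS_ = 0;   // timestamp of the youngest transaction that wrote
  private String data_;

  // A transaction with timestamp ts may read only if no younger
  // transaction has already written this file.
  synchronized String read(long ts) throws AbortException {
    if (ts < writeTS_) throw new AbortException();   // we are "late" - abort
    readTS_ = Math.max(readTS_, ts);
    return data_;
  }

  // A transaction with timestamp ts may write only if no younger
  // transaction has already read or written this file.
  synchronized void write(long ts, String newData) throws AbortException {
    if (ts < readTS_ || ts < writeTS_) throw new AbortException();
    writeTS_ = ts;
    data_ = newData;
  }
}

class AbortException extends Exception {}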

71
Summary
  • We have been looking at synchronisation in
    distributed systems.
  • We have looked at some of the methods of
    providing exclusive access to data.
  • Hopefully you now have some idea of the problems
    associated with distributed processes
    concurrently trying to access a shared resource.