Chapter 7: Concurrent Programming

1
Chapter 7 Concurrent Programming
2
Definition of Concurrent programming
  • Concurrent programming is the name given to
    programming notation and techniques for
    expressing potential parallelism and solving the
    resulting synchronization and communication
    problems. Implementation of parallelism is a
    topic in computer systems (hardware and software)
    that is essentially independent of concurrent
    programming. Concurrent programming is important
    because it provides an abstract setting in which
    to study parallelism without getting bogged down
    in the implementation details.
  • (Ben-Ari 1982).

3
The notion of process
  • Many languages, such as C, Pascal, FORTRAN and
    COBOL, are sequential. This is not adequate for
    real-time applications (remember process control
    example)
  • A parallel program consists of a number of
    autonomous sequential processes executing
    (logically) in parallel.
  • Each process has its own thread of control.

4
Actual execution of processes
  • Multiplexed on a single processor
  • Multiplexed on a multiprocessor system where
    there is access to shared memory
  • Multiplexed on several processors that do not
    share memory (distributed system)
  • Distributed systems (usually) require a message
    based programming approach.

5
Process states
  • Non-existing
  • Created
  • Initializing
  • Executable (running, ready)
  • (blocked)
  • Terminated

6
Run-Time Support System (RTSS)
  • Process management and scheduling are handled by the RTSS.
  • The RTSS can be either
    • a part of the application,
    • a part of the programming language (Ada, Java),
    • a part of the operating system (POSIX; N.B. there is a difference between threads and processes), or
    • implemented in hardware.
  • The scheduling algorithms in the RTSS will affect the real-time behavior of the system.
  • The functional behavior should not be affected by the scheduling decisions.

7
Concurrent programming constructs
  • There are three fundamental facilities that must be supported by any parallel programming language or operating system:
    • the expression of concurrent execution through the notion of processes,
    • process synchronization, and
    • interprocess communication.
  • One can distinguish between three types of behavior:
    • independent processes,
    • cooperating processes, and
    • competing processes.

8
Concurrency models
  • Structure (static vs. dynamic processes)
  • Level (nested or flat process structure)
  • Granularity (fine grained vs. coarse grained,
    most languages support primarily coarse grained
    parallelism)
  • Initialization (send parameters or communicate
    after creation)
  • Termination (completion of process body, suicide,
    abortion, untrapped error, when no longer needed,
    never).
  • Process representation (coroutines, fork-join,
    cobegin-coend, explicit process declaration)

9
Concurrent execution in Ada - 1
  with Ada.Text_IO; use Ada.Text_IO;
  procedure Example1 is
     task A;
     task B;
     task body A is
     begin Put_Line("A"); end A;
     task body B is
     begin Put_Line("B"); end B;
  begin
     null;
  end Example1;

10
Concurrent execution in Ada - 2
  with Ada.Text_IO; use Ada.Text_IO;
  procedure Example1 is
     task type A_Type;
     task B;
     A1, A2 : A_Type;
     task body A_Type is
     begin Put_Line("A"); end A_Type;
     task body B is
     begin Put_Line("B"); end B;
  begin
     null;
  end Example1;

11
Concurrent execution in Ada - 3
  with Ada.Text_IO; use Ada.Text_IO;
  procedure Example1 is
     task type A_Type;
     type A_Point is access A_Type;
     task B;
     A : A_Point;
     task body A_Type is
     begin Put_Line("A"); end A_Type;
     task body B is
     begin Put_Line("B"); end B;
  begin
     A := new A_Type;
     A := new A_Type;
     A := new A_Type;
  end Example1;

12
Concurrent execution in Java - 1
  public class Threads1 {
      public static void main(String[] args) {
          ThreadA a = new ThreadA();
          ThreadB b = new ThreadB();
          a.start();
          b.start();
      }
  }
  class ThreadA extends Thread {
      public void run() { System.out.println("A"); }
  }
  class ThreadB extends Thread {
      public void run() { System.out.println("B"); }
  }

13
Concurrent execution in Java - 2
  public class Threads2 {
      public static void main(String[] args) {
          ThreadA a1 = new ThreadA();
          ThreadA a2 = new ThreadA();
          ThreadB b = new ThreadB();
          a1.start();
          a2.start();
          b.start();
      }
  }
  class ThreadA extends Thread {
      public void run() { System.out.println("A"); }
  }
  class ThreadB extends Thread {
      public void run() { System.out.println("B"); }
  }

14
Concurrent execution in C/Posix - 1
  /* cc Threads1.c -lpthread -o Threads1 */
  #include <pthread.h>
  #include <stdio.h>

  void *threadA(void *dummy) {
      printf("A\n");
      return 0;
  }

  void *threadB(void *dummy) {
      printf("B\n");
      return 0;
  }

  int main() {
      pthread_t a, b;
      pthread_create(&a, 0, threadA, 0);
      pthread_create(&b, 0, threadB, 0);
      pthread_join(a, 0);
      pthread_join(b, 0);
      return 0;
  }

15
Concurrent execution in C/Posix - 2
  /* cc Threads2.c -lpthread -o Threads2 */
  #include <pthread.h>
  #include <stdio.h>

  void *threadA(void *dummy) {
      printf("A\n");
      return 0;
  }

  void *threadB(void *dummy) {
      printf("B\n");
      return 0;
  }

  int main() {
      pthread_t a1, a2, b;
      pthread_create(&a1, 0, threadA, 0);
      pthread_create(&a2, 0, threadA, 0);
      pthread_create(&b, 0, threadB, 0);
      pthread_join(a1, 0);
      pthread_join(a2, 0);
      pthread_join(b, 0);
      return 0;
  }

16
Context Switch - 1
  Process A:  10 Store 10, R1   11 Add 1, R1   12 Store R1, R2
  Process B:  20 Store 20, R1   21 Sub 1, R1   22 Store R1, R2

  Real-time kernel, Switch A-to-B (triggered by a timer interrupt):
    Push R1;  Push R2
    Store SP, Save_SPA
    Load SP, Save_SPB
    Pop R2;  Pop R1
    RET
17
Context Switch - 2
  Process A:  10 Store 10, R1   11 Add 1, R1   12 Store R1, R2
  Process B:  20 Store 20, R1   21 Sub 1, R1   22 Store R1, R2

  Real-time kernel, Switch B-to-A (triggered by a timer interrupt):
    Push R1;  Push R2
    Store SP, Save_SPB
    Load SP, Save_SPA
    Pop R2;  Pop R1
    RET
18
Process Control Block (PCB)
[Figure: the ready queue as a linked list of Process Control Blocks; each PCB (one for Process A, one for Process B, ...) holds a Next pointer to the following PCB and the saved Stack Pointer of its process.]
This makes it possible to handle an arbitrary number of processes.
19
Chapter 8 Shared variable-based synchronization
and communication
20
Process Synchronization
  • Processes often need to synchronize, e.g. one
    process may need to wait until another process
    has reached a certain point.
  • Process synchronization can be either based on
    message passing or based on shared variables.

21
Build House
  Program House
  Begin
    Start_process(Floor); Start_process(Wall); Start_process(Roof)
  End
  Process Floor
  Begin
    Build_floor; Signal(Floor_finished); Put_on_carpet
  End
  Process Wall
  Begin
    Wait_until(Floor_finished); Build_walls; Signal(Walls_finished); Put_up_wallpaper
  End
  Process Roof
  Begin
    Wait_until(Walls_finished); Build_Roof
  End
22
Race Condition
  • Process A:
      X := X + 1
      -- start critical section
      --   Load  X, R1
      --   Add   1, R1
      --   Store R1, X
      -- end critical section
  • Process B:
      X := X + 1
      -- start critical section
      --   Load  X, R1
      --   Add   1, R1
      --   Store R1, X
      -- end critical section
  • If A and B are interleaved between the load and the store, one of the increments is lost; a Java sketch of this race follows below.
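  • A minimal Java sketch of the same lost update (class and variable names are illustrative, not from the slides): two threads increment a shared variable without any synchronization, so interleaved load/add/store sequences can overwrite each other.

      public class RaceDemo {
          static int x = 0;                       // shared variable, no protection

          public static void main(String[] args) throws InterruptedException {
              Runnable inc = () -> {
                  for (int i = 0; i < 100000; i++) {
                      x = x + 1;                  // load X, add 1, store X: not atomic
                  }
              };
              Thread a = new Thread(inc);
              Thread b = new Thread(inc);
              a.start(); b.start();
              a.join();  b.join();
              System.out.println(x);              // frequently less than 200000
          }
      }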

23
Mutual exclusion 1
  • Process A:
      loop
        flag1 := 1
        while flag2 = 1 do null end
        X := X + 1
        flag1 := 0
      end loop
  • Process B:
      loop
        flag2 := 1
        while flag1 = 1 do null end
        X := X + 1
        flag2 := 0
      end loop

24
Mutual exclusion 2
  • Process A:
      loop
        while flag2 = 1 do null end
        flag1 := 1
        X := X + 1
        flag1 := 0
      end loop
  • Process B:
      loop
        while flag1 = 1 do null end
        flag2 := 1
        X := X + 1
        flag2 := 0
      end loop

25
Mutual exclusion 3
  • Process A:
      loop
        while flag /= 1 do null end   -- wait until it is A's turn
        X := X + 1
        flag := 2                     -- hand the turn over to B
      end loop
  • Process B:
      loop
        while flag /= 2 do null end   -- wait until it is B's turn
        X := X + 1
        flag := 1                     -- hand the turn over to A
      end loop

26
Test-and-set
  • Process A:
      loop
        while Test_and_Set(flag) do null end
        X := X + 1
        flag := 0
      end loop
  • Process B:
      loop
        while Test_and_Set(flag) do null end
        X := X + 1
        flag := 0
      end loop
  • A Java sketch of this spin lock follows below.
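  • A sketch under the assumption that java.util.concurrent.atomic.AtomicBoolean stands in for the hardware test-and-set instruction (class and names are illustrative, not from the slides); the same two incrementing threads as in the race example can drive it:

      import java.util.concurrent.atomic.AtomicBoolean;

      class SpinLockCounter {
          private final AtomicBoolean flag = new AtomicBoolean(false);  // false = lock is free
          private int x = 0;

          void increment() {
              while (flag.getAndSet(true)) { /* busy wait: flag was already set */ }
              x = x + 1;                  // critical section
              flag.set(false);            // flag := 0
          }

          int value() { return x; }
      }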

27
Suspend and resume
  • Process A:
      loop
        while Test_and_Set(flag) do
          suspend
        end
        X := X + 1
        flag := 0
        resume(B)
      end loop
  • Process B:
      loop
        while Test_and_Set(flag) do
          suspend
        end
        X := X + 1
        flag := 0
        resume(A)
      end loop

28
Process states
[State diagram: the states Non-existing, Ready, Running, Blocked and Terminated; Suspend takes the running process to Blocked, and Resume moves a blocked process back to Ready.]
29
Semaphores
  • A semaphore is a non-negative integer (together with a hidden queue of processes) that can only be operated on by two (three) primitives:
    • Wait(S)
    • Signal(S)
    • (Initialize(S))
  • A semaphore S thus has two fields:
    • Counter (the integer value)
    • Queue (the queue of blocked processes)

30
Mutual exclusion with Semaphores
  • var mutex : semaphore
    Init(mutex, 1)
  • Process A:
      loop
        Wait(mutex)
        X := X + 1
        Signal(mutex)
      end loop
  • Process B:
      loop
        Wait(mutex)
        X := X + 1
        Signal(mutex)
      end loop
  • A Java version using java.util.concurrent.Semaphore is sketched below.
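  • The same pattern in Java (class name is illustrative; java.util.concurrent.Semaphore assumed, where acquire corresponds to Wait and release to Signal):

      import java.util.concurrent.Semaphore;

      class SemaphoreCounter {
          private final Semaphore mutex = new Semaphore(1);   // Init(mutex, 1)
          private int x = 0;

          void increment() throws InterruptedException {
              mutex.acquire();           // Wait(mutex)
              x = x + 1;                 // critical section
              mutex.release();           // Signal(mutex)
          }
      }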

31
Wait and Signal - 1
  procedure Wait(S)
  begin
    disable_interrupts
    if S.counter = 0 then
      S.queue := MyPCB              -- block the calling process on S
      MyPCB   := First(ReadyQ)      -- pick the next process from the ready queue
      Switch(S.queue, MyPCB)        -- context switch to it
    else
      S.counter := S.counter - 1
    end if
    enable_interrupts
  end

  procedure Signal(S)
  begin
    disable_interrupts
    if S.queue <> null then
      Into_ReadyQ(First(S.queue))   -- move one blocked process to the ready queue
    else
      S.counter := S.counter + 1
    end if
    enable_interrupts
  end

32
Wait and Signal - 2
[Figure: the ready queue and a semaphore S. The ready queue links the PCBs of the runnable processes (PCB A, PCB C), while the semaphore record S holds a counter field and a queue field; the PCBs of processes blocked on S (PCB B, PCB D) are linked into S.queue.]
33
Deadlock, livelock and indefinite postponement
  • Deadlock means that processes are blocked waiting for each other.
  • Livelock is almost the same thing, but in this case the processes are not blocked; they are waiting in a busy-wait (spinning) loop.
  • Indefinite postponement (lockout, starvation) means that a process is never allowed to access a certain resource because other processes are always using it.

34
Deadlock example
  • var M1, M2 : Semaphore
    Init(M1, 1); Init(M2, 1)
  • Process A:
      loop
        Wait(M1)
        Wait(M2)
        <critical region>
        Signal(M1)
        Signal(M2)
      end loop
  • Process B:
      loop
        Wait(M2)
        Wait(M1)
        <critical region>
        Signal(M1)
        Signal(M2)
      end loop
  • If A takes M1 while B takes M2, each then waits forever for the semaphore held by the other; a Java sketch of the same situation follows below.
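  • A Java sketch of the same deadlock (lock objects are illustrative; plain object locks are used here instead of semaphores): thread A takes lockM1 then lockM2, thread B takes them in the opposite order.

      public class DeadlockDemo {
          static final Object lockM1 = new Object();
          static final Object lockM2 = new Object();

          public static void main(String[] args) {
              new Thread(() -> {
                  synchronized (lockM1) {
                      pause();                       // make the bad interleaving likely
                      synchronized (lockM2) { System.out.println("A in its critical region"); }
                  }
              }).start();
              new Thread(() -> {
                  synchronized (lockM2) {
                      pause();
                      synchronized (lockM1) { System.out.println("B in its critical region"); }
                  }
              }).start();
          }

          static void pause() {
              try { Thread.sleep(100); } catch (InterruptedException e) { }
          }
      }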

35
Buffer example
  Program Buffer
    Size : integer := 32
    Buf  : array (0 .. Size - 1) of integer
    Top, Base : integer := 0
    Mutex, Space_available, Item_available : semaphore
    Init(Mutex, 1); Init(Space_available, Size); Init(Item_available, 0)

  procedure Append(I : in integer)
  begin
    Wait(Space_available); Wait(Mutex)
    Buf(Top) := I
    Top := (Top + 1) mod Size
    Signal(Mutex)
    Signal(Item_available)
  end Append

  procedure Take(I : out integer)
  begin
    Wait(Item_available); Wait(Mutex)
    I := Buf(Base)
    Base := (Base + 1) mod Size
    Signal(Mutex)
    Signal(Space_available)
  end Take

36
Problems with semaphores
  • Semaphores are error prone because we only need
    to misplace one wait or signal to corrupt the
    behavior of the entire program.
  • We therefore need better primitives, such as
    conditional critical regions, monitors, protected
    objects and synchronized methods.
  • All these techniques are equivalent from a
    functional perspective, but programs may become
    more or less error prone.

37
Conditional Critical regions (CCR)
  • Matches waits and signals automatically.
  • The synchronization code is still spread out over the entire program.
  • resource Buf;

    region Buf when buffer.size < N do
      -- place item in Buf
    end region;

    region Buf when buffer.size > 0 do
      -- take item from Buf
    end region;

38
Monitors
  • Isolates all synchronization code to one place in the program.
  • monitor Buffer
      procedure Append(...) ...
      procedure Take(...) ...
    begin
      -- initialization
    end Buffer

39
Monitors and PCBs
40
Conditions in monitors
  • A process will sometimes need to be blocked inside a monitor, e.g. when a buffer is full. This is handled by condition variables.
  • There is a wait and a signal operation for each condition variable.
  • With monitors, if the condition queue is empty a signal operation has no effect. This is a difference compared to the semaphore case, where the signal is remembered by incrementing the counter.
  • A signal operation may cause two processes to become active within the monitor at the same time. To prevent this, four different approaches can be used:
    • A signal is allowed only as the last action of a monitor procedure.
    • A signal acts as a return statement for the process executing the signal.
    • The process executing the signal becomes blocked until the monitor is free.
    • The freed process becomes active only when the monitor is free.
  • A Java sketch of a condition variable is given below.
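  • A sketch of a monitor condition in Java using java.util.concurrent.locks (class and names are illustrative; ReentrantLock plays the role of the monitor lock and Condition of the condition variable):

      import java.util.concurrent.locks.Condition;
      import java.util.concurrent.locks.ReentrantLock;

      class Gate {
          private final ReentrantLock monitor = new ReentrantLock();
          private final Condition opened = monitor.newCondition();
          private boolean open = false;

          void pass() throws InterruptedException {
              monitor.lock();
              try {
                  while (!open) opened.await();   // blocked inside the monitor until signalled
              } finally { monitor.unlock(); }
          }

          void openGate() {
              monitor.lock();
              try {
                  open = true;
                  opened.signal();                // no effect if no process is waiting
              } finally { monitor.unlock(); }
          }
      }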

41
Protected objects
  • The condition variables can make monitor code difficult to read and write.
  • A protected object is a monitor where the condition variables have been replaced by guards.
  • The effect is that a process may be refused entry to the protected object (monitor) for two reasons:
    • There is already a process inside the protected object (monitor); this is the same as for monitors.
    • The guard evaluates to false, e.g. the buffer is empty and we try to do a take.

42
Synchronized methods
  • A thread may execute a method labeled synchronized only after obtaining exclusive access to the lock associated with the object, as sketched below.
  • An object can have both synchronized and unsynchronized (ordinary) methods.
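  • A small sketch (the class is illustrative, not from the slides):

      class Counter {
          private int x = 0;

          // at most one thread at a time can be inside the synchronized methods,
          // because both need the lock of this Counter object
          public synchronized void increment() { x = x + 1; }
          public synchronized int  get()       { return x; }

          // ordinary (unsynchronized) method: does not take the object's lock
          public String describe() { return "a shared counter"; }
      }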

43
Synchronized blocks
  • It is possible to define a block as synchronized anywhere in the program code.
  • An object is used as a parameter for the synchronized block (see the sketch below).
  • The semantics of the synchronized block construct guarantee that, at any given time, only one thread can be in a block defined by a certain object.
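  • A sketch of the block form (names are illustrative): the block takes the lock of the object given as parameter, here a dedicated lock object.

      class BlockCounter {
          private final Object lock = new Object();
          private int x = 0;

          void increment() {
              synchronized (lock) {       // only one thread at a time inside this block
                  x = x + 1;
              }
          }
      }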

44
Waiting and notifying in Java
  • There are three primitives:
    • wait()
    • notify()
    • notifyAll()
  • These are used to build monitors, and they may only be called while holding the object's lock, i.e. from synchronized methods or blocks.
  • A bounded-buffer monitor built with these primitives is sketched below.
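  • A minimal bounded buffer as a Java monitor (the class is illustrative, corresponding to the semaphore-based buffer earlier): producers block while the buffer is full and consumers block while it is empty.

      class BoundedBuffer {
          private final int[] buf;
          private int top = 0, base = 0, count = 0;

          BoundedBuffer(int size) { buf = new int[size]; }

          public synchronized void append(int item) throws InterruptedException {
              while (count == buf.length) wait();     // buffer full: block
              buf[top] = item;
              top = (top + 1) % buf.length;
              count++;
              notifyAll();                            // wake up waiting consumers
          }

          public synchronized int take() throws InterruptedException {
              while (count == 0) wait();              // buffer empty: block
              int item = buf[base];
              base = (base + 1) % buf.length;
              count--;
              notifyAll();                            // wake up waiting producers
              return item;
          }
      }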

45
Chapter 9 Message-based synchronization and
communication
46
Process Synchronization
  • Asynchronous (no-wait): the sender proceeds immediately (like sending a postcard). A Java sketch is given below.
  • Synchronous: the sender proceeds only when the message has been received (like a telephone call).
  • Remote invocation (extended rendezvous): the sender proceeds only when a reply has been returned by the receiver.
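  • A sketch of asynchronous (no-wait) message passing in Java (the Mailbox class is illustrative; java.util.concurrent.LinkedBlockingQueue assumed as the unbounded message buffer): the sender deposits the message and continues immediately, while the receiver blocks until a message arrives.

      import java.util.concurrent.LinkedBlockingQueue;

      class Mailbox {
          private final LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

          void send(String msg) { queue.add(msg); }     // no-wait: never blocks (unbounded queue)

          String receive() throws InterruptedException {
              return queue.take();                      // blocks until a message is available
          }
      }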

47
Asynchronous communication
  • The other two forms of communication can be implemented with asynchronous primitives, i.e. it is a flexible model (the reverse is, however, also possible by introducing a buffer process)
  • There are, however, a number of drawbacks
  • Potentially infinite buffers are needed to store
    messages
  • Most programs will expect an acknowledgement
    anyway (i.e. they expect a synchronous model)
  • More communications are needed. This results in
    more complex programs.
  • Difficult to prove correctness

48
Process naming
  • There are two issues concerning process naming:
    • direct vs. indirect naming
    • symmetry
  • One can use either direct naming, e.g. send the message to process X, or indirect naming, e.g. send the message to mailbox Y.
  • If the receiver and the sender use the same kind of naming, we say that the scheme is symmetric.
  • Client-server schemes are asymmetric.

49
Message passing in Ada
  • In order for a task (process) to receive a
    message it must define an entry.
  • Receiving a message involves accepting a call to the appropriate entry.

50
Ada example 1
  with Ada.Text_IO; use Ada.Text_IO;
  procedure Test is
     task T1 is
        entry X (I : Integer; J : out Integer);
     end T1;
     task T2;

     task body T1 is
     begin
        for K in 1 .. 10 loop
           accept X (I : Integer; J : out Integer) do
              J := I * K;   -- operator assumed; it is not legible in the transcript
           end X;
        end loop;
     end T1;

     task body T2 is
        M : Integer;
     begin
        for L in 1 .. 10 loop
           T1.X(L, M);
           Put_Line(Integer'Image(M));
        end loop;
     end T2;
  begin
     null;
  end Test;

51
Ada example 2
  with Ada.Text_IO; use Ada.Text_IO;
  procedure Counting is
     task T1 is
        entry Inc;
        entry Print;
     end T1;
     task type T2;
     Count1, Count2 : T2;

     task body T1 is
        X : Integer := 0;
     begin
        loop
           select
              accept Inc do X := X + 1; end Inc;
           or
              accept Print do Put_Line(Integer'Image(X)); end Print;
           or
              terminate;
           end select;
        end loop;
     end T1;

     task body T2 is
     begin
        for L in 1 .. 10 loop
           T1.Inc;
        end loop;
        T1.Print;
     end T2;
  begin
     null;
  end Counting;

52
Remote Procedure Call (RPC) and Remote Method Invocation (RMI)
  • These are techniques for communicating between different computers.
  • The receiving process must be waiting in a receive statement.
  • The sending process is blocked until the receiving process provides an answer.
  • RPC and RMI are usually based on socket communication in Unix systems.
  • A minimal Java RMI interface is sketched below.
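  • A minimal sketch of a Java RMI remote interface and a client call (the service name is illustrative; a remote interface extends java.rmi.Remote and every remote method can raise RemoteException). The client call blocks until the server returns its reply.

      import java.rmi.Remote;
      import java.rmi.RemoteException;

      public interface TemperatureService extends Remote {
          double readTemperature(String sensorId) throws RemoteException;
      }

      // client side (checked exceptions omitted for brevity):
      //   TemperatureService svc =
      //       (TemperatureService) java.rmi.Naming.lookup("rmi://server.example.org/temperature");
      //   double t = svc.readTemperature("sensor1");   // blocks until the reply arrives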