Title: Concurrency
1. Concurrency
- Concurrency can occur at four levels:
- 1. Machine instruction level: the machine may have both an adder and a multiplier that are used at the same time.
- 2. High-level language statement level: a loop for (a; b; c) might execute c from one iteration and a from the next at the same time.
- 3. Unit level: several methods execute together.
- 4. Program level: several programs execute together.
2. Suppose we have two methods
- Populate marsh
- for (i = 0; i < 1000; i++)
-   create frog   // high-level language statement
-   create carp
-   create mosquitos
- Populate prehistoric world
- for (i = 0; i < 10; i++)
-   create dinosaur(i)
3. Concurrency can occur at four levels (termed granularity)
- 1. Machine instruction level: "create frog" is decomposed into basic parts. If one basic instruction is to fold both sides into the center, perhaps one processor folds the left side and one folds the right.
- 2. High-level language statement level: different parts of "make frog" happen together.
- 3. Unit level: populate marsh occurs along with populate prehistoric world.
- 4. Program level: several programs (to do other things not shown here) execute together.
4. What would be the advantages/disadvantages of each type of parallelism?
5. The Evolution of Multiprocessor Architectures
- 1. Late 1950s - One general-purpose processor and one or more special-purpose processors for input and output operations
- 2. Early 1960s - Multiple complete processors, used for program-level concurrency
- 3. Mid-1960s - Multiple partial processors, used for instruction-level concurrency
- 4. Single-Instruction Multiple-Data (SIMD) machines - the same instruction goes to all processors, each with different data, e.g., vector processors
- 5. Multiple-Instruction Multiple-Data (MIMD) machines - independent processors that can be synchronized (unit-level concurrency)
6. Making a Frog
Fold in sides
7. Take lower corner and fold up to top. Repeat with other side.
Fold into middle
Repeat
8. Examples
- SIMD - all do the same things at the same time: all fold, all open, all fold again
- Pipelined - one person does a fold and then passes it on. Problems?
- MIMD - all do different things
10. Def: A thread of control in a program is the sequence of program points reached as control flows through the program
- Categories of Concurrency:
- 1. Physical concurrency - multiple independent processors (multiple threads of control)
- 2. Logical concurrency - the appearance of physical concurrency is presented by time-sharing one processor (software can be designed as if there were multiple threads of control)
11. What would be the advantage of logical concurrency?
- Consider the TV remote as performing a context switch.
- Why does one switch between multiple programs?
- What is the downside to switching?
12. Example: Smart Remote - ads play when you are not watching; assume the program doesn't continue when you aren't watching it
- You might be an e-mail junkie...
- You might be a computer science major
- An attraction for computer scientists
13. Concerns?
- Is switching between tasks confusing? What would need to be retained?
- Is switching between tasks expensive? Would there be a minimal size at which you spawn more tasks?
- What is the gain?
14. What is the gain?
- Models the actual situation better
- Better response time
- Makes use of delays in processing
15. Why do we want parallelism?
- Price-performance curves
- It used to be that you paid more for a computer and got more (a linear relationship between price and performance).
- Now, for little money, you get a lot of power. As you add more money, the performance curve levels off - not an efficient way to get more performance.
- Parallelism is the answer: string cheap computers together to do more work.
16. What is a Thread?
- Just as multitasking OSs can run more than one process concurrently, a process can do the same by running more than a single thread.
- Each thread is a different stream of control that can execute its instructions independently.
- Compared to a process, a thread is inexpensive to create, terminate, schedule, or synchronize.
17. What is a Thread?
- A process is a HEAVY-WEIGHT kernel-level entity (process struct).
- A thread is a LIGHT-WEIGHT entity comprising the registers, stack, and some other data.
- The rest of the process struct is shared by all threads (address space, file descriptors, etc.).
- Most of the thread structure is in user space, allowing very fast access.
18. So for our example
- If we had two processes to populate the marsh and to populate the prehistoric world, each process would be able to stand alone.
- If we had two threads to populate the marsh and to populate the prehistoric world, they would have some shared resources (like the table or paper supply), as sketched below.
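A minimal Java sketch of this thread view (the class and field names here are illustrative, not from the slides): both threads run inside one process and draw from the same shared paper supply.

public class Populate {
    // Shared resource: both threads draw from the same paper supply.
    static int paperSupply = 2000;

    static synchronized boolean takePaper() {
        if (paperSupply == 0) return false;
        paperSupply--;
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread marsh = new Thread(() -> {
            for (int i = 0; i < 1000 && takePaper(); i++) {
                // create frog, carp, mosquitos ...
            }
        });
        Thread prehistoric = new Thread(() -> {
            for (int i = 0; i < 10 && takePaper(); i++) {
                // create dinosaur(i) ...
            }
        });
        marsh.start();
        prehistoric.start();
        marsh.join();
        prehistoric.join();
        System.out.println("Paper left: " + paperSupply);
    }
}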
19. Concurrency vs. Parallelism
- Concurrency means that two or more threads can be in the middle of executing code.
- Only one can be on the CPU, though, at any given time.
- Parallelism actually involves multiple CPUs running threads at the same time.
- Concurrency is the illusion of parallelism.
20. What can threads do that can't be done by processes sharing memory?
- Answer: Nothing!... if you have
- plenty of time to kill programming,
- more time to kill processing,
- and are willing to burn money buying RAM.
- Debugging cross-process programs is tough.
- In Solaris, creating a thread is 30 TIMES FASTER than forking a process.
- Synchronization is 10 times faster with threads.
- Context switching is 5 times faster.
21. What Applications to Thread?
- Multiplexing (communicating two or more signals over a common channel)
- Servers
- Synchronous waiting (definition?)
- Clients
- I/O
- Event notification
- Simulations
- Parallelizable algorithms
- Shared-memory multiprocessing
- Distributed multiprocessing
22. Which Programs NOT to thread?
- Compute-bound threads on a uniprocessor
- Very small threads (threads are not free)
- Old code
- Parallel execution of threads can interfere with each other.
- WARNING: Multithreaded applications are more difficult to design and debug than single-threaded apps. Threaded program design requires careful preparation!
23. Synchronization
- The problem:
- Data race - occurs when more than one thread is trying to update the same piece of data.
- Critical section - any piece of code to which access needs to be controlled.
- The solution:
- Mutex
- Condition variables
- Operations: init, lock, unlock
24. MUTEX
- A MUTual EXclusion lock allows exactly one thread access to a variable or critical section of code.
- Access attempts by other threads are blocked until the lock is released (see the sketch below).
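A small sketch (not from the slides) of the difference a mutex makes, using Java's ReentrantLock for the init/lock/unlock operations mentioned earlier; the counter names are hypothetical.

import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    static int unsafeCount = 0;                    // updated with no lock: data race
    static int safeCount = 0;                      // updated only inside the critical section
    static final ReentrantLock mutex = new ReentrantLock();   // "init"

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100000; i++) {
                unsafeCount++;                     // read-modify-write, not atomic

                mutex.lock();                      // block until exclusive access is granted
                try {
                    safeCount++;                   // critical section
                } finally {
                    mutex.unlock();                // always release the lock
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // unsafeCount is usually less than 200000; safeCount is always 200000.
        System.out.println("unsafe=" + unsafeCount + " safe=" + safeCount);
    }
}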
25. Kinds of synchronization
- 1. Cooperation
- Task A must wait for task B to complete some specific activity before task A can continue its execution. E.g., you cut the paper and then I fold it.
- 2. Competition
- When two or more tasks must use some resource that cannot be simultaneously used. E.g., we both want the scissors.
26. Liveness means the unit will eventually complete its execution. "I'm currently blocked from finishing my frog, but I will eventually get to finish."
- In a concurrent environment, a task can easily lose its liveness. "You were supposed to wake me up when the scissors became available, but you forgot."
- If all tasks in a concurrent environment lose their liveness, it is called deadlock. "I take the paper and wait for the scissors. You take the scissors and wait for the paper." Circular wait is deadlock (sketched below).
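A minimal sketch of that circular wait in Java (the lock names are illustrative): each thread holds one lock and waits forever for the other.

public class DeadlockDemo {
    static final Object paper = new Object();
    static final Object scissors = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (paper) {              // I take the paper...
                pause();
                synchronized (scissors) { }     // ...and wait for the scissors.
            }
        }).start();

        new Thread(() -> {
            synchronized (scissors) {           // You take the scissors...
                pause();
                synchronized (paper) { }        // ...and wait for the paper.
            }
        }).start();
        // With the pauses, each thread acquires its first lock and then blocks
        // waiting for the other: circular wait, i.e., deadlock.
    }

    static void pause() {
        try { Thread.sleep(100); } catch (InterruptedException e) { }
    }
}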
27. Livelock: the task theoretically can finish, but never gets the resources it needs to finish.
- How do you prevent deadlock?
- How do you prevent livelock?
28. Questions?
29. Methods of Providing Synchronization
- 1. Semaphores
- 2. Monitors
- 3. Message Passing
30. Semaphores
- Dijkstra, 1965
- A semaphore is a data structure consisting of a counter and a queue for storing task descriptors.
- Semaphores can be used to implement guards on the code that accesses shared data structures (controlling access).
- Semaphores have only two operations, wait and signal (originally called P and V by Dijkstra).
- Semaphores can be used to provide both competition and cooperation synchronization.
31. Example
- Suppose I was in a frog-renting business.
- I have a collection of frogs.
- I keep track of my frogs via a semaphore (a Java sketch follows).
- When you come to rent a frog, if I have some, I just adjust my semaphore (count).
- If you come and I don't have one, I place you in a queue.
- frogAvail = 4
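A hedged sketch of the frog-renting counter using the library class java.util.concurrent.Semaphore (acquire and release play the roles of wait and signal; the class and method names around it are illustrative):

import java.util.concurrent.Semaphore;

public class FrogRental {
    // Counter starts at 4: four frogs are available to rent.
    static final Semaphore frogAvail = new Semaphore(4);

    static void rentFrog(String customer) throws InterruptedException {
        frogAvail.acquire();   // wait(): take a frog, or join the queue if none are left
        System.out.println(customer + " rented a frog");
    }

    static void returnFrog(String customer) {
        System.out.println(customer + " returned a frog");
        frogAvail.release();   // signal(): wake a waiting customer, or bump the count
    }

    public static void main(String[] args) throws InterruptedException {
        rentFrog("Ann");
        rentFrog("Bob");
        returnFrog("Ann");
    }
}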
32. Cooperation Synchronization with Semaphores
- Example: a shared buffer, e.g., a holding area for frogs
- The buffer is implemented as an ADT with the operations DEPOSIT and FETCH as the only ways to access the buffer
- Use two semaphores for cooperation: emptyspots (number of empty spots) and fullspots (number of full spots)
33. DEPOSIT must first check emptyspots to see if there is room in the buffer (for a new frog)
- If there is room, the counter of emptyspots is decremented and the value is inserted
- If there is no room, the caller is stored in the queue of emptyspots (to wait for room)
- When DEPOSIT is finished, it must increment the counter of fullspots
34. FETCH must first check fullspots to see if there is an item
- If there is a full spot, the counter of fullspots is decremented and the value is removed
- If there are no values in the buffer, the caller must be placed in the queue of fullspots
- When FETCH is finished, it increments the counter of emptyspots
- The operations of FETCH and DEPOSIT on the semaphores are accomplished through two semaphore operations named wait and signal.
35. Semaphores
- wait(aSemaphore)
-   if aSemaphore's counter > 0 then
-     decrement aSemaphore's counter
-   else
-     put the caller in aSemaphore's queue
-     attempt to transfer control to some ready task
-   end
36. Semaphores
- signal(aSemaphore)
-   if aSemaphore's queue is empty then
-     increment aSemaphore's counter
-   else
-     put the calling task in the task-ready queue
-     transfer control to a task from aSemaphore's queue
-   end
37. Producer Code
- semaphore fullspots, emptyspots;
- fullspots.count = 0;
- emptyspots.count = BUFLEN;
- task producer;
-   loop
-     -- produce VALUE --
-     wait(emptyspots);    // wait for space
-     DEPOSIT(VALUE);
-     signal(fullspots);   // increase filled
-   end loop;
- end producer;
38. Consumer Code
- task consumer;
-   loop
-     wait(fullspots);     // wait till not empty
-     FETCH(VALUE);
-     signal(emptyspots);  // increase empty
-     -- consume VALUE --
-   end loop;
- end consumer;
39. Competition Synchronization with Semaphores
- A third semaphore, named access, is used to control access to the buffer itself, since trying to produce and consume at the same time may be a problem (competition synchronization)
- The counter of access will only have the values 0 and 1
- Such a semaphore is called a binary semaphore
- Note that wait and signal must be atomic!
40. Producer Code
- semaphore access, fullspots, emptyspots;
- access.count = 1;        // binary semaphore, initially available (starting it at 0 would block the first wait(access) forever)
- fullspots.count = 0;
- emptyspots.count = BUFLEN;
- task producer;
-   loop
-     -- produce VALUE --
-     wait(emptyspots);    // wait for space
-     wait(access);        // wait for access
-     DEPOSIT(VALUE);
-     signal(access);      // relinquish access
-     signal(fullspots);   // increase filled
-   end loop;
- end producer;
41. Consumer Code
- task consumer;
-   loop
-     wait(fullspots);     // wait till not empty
-     wait(access);        // wait for access
-     FETCH(VALUE);
-     signal(access);      // relinquish access
-     signal(emptyspots);  // increase empty
-     -- consume VALUE --
-   end loop;
- end consumer;
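The producer/consumer pseudocode above can be sketched as runnable Java using java.util.concurrent.Semaphore for emptyspots, fullspots, and access; BUFLEN, the buffer type, and the printed output are illustrative choices, not part of the slides.

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    static final int BUFLEN = 10;
    static final Queue<Integer> buffer = new LinkedList<>();

    static final Semaphore emptyspots = new Semaphore(BUFLEN); // empty slots
    static final Semaphore fullspots  = new Semaphore(0);      // filled slots
    static final Semaphore access     = new Semaphore(1);      // binary semaphore guarding the buffer

    public static void main(String[] args) {
        new Thread(BoundedBuffer::producer).start();
        new Thread(BoundedBuffer::consumer).start();
    }

    static void producer() {
        try {
            for (int value = 0; ; value++) {        // -- produce VALUE --
                emptyspots.acquire();               // wait for space
                access.acquire();                   // wait for access
                buffer.add(value);                  // DEPOSIT(VALUE)
                access.release();                   // relinquish access
                fullspots.release();                // increase filled
            }
        } catch (InterruptedException e) { }
    }

    static void consumer() {
        try {
            while (true) {
                fullspots.acquire();                // wait till not empty
                access.acquire();                   // wait for access
                int value = buffer.remove();        // FETCH(VALUE)
                access.release();                   // relinquish access
                emptyspots.release();               // increase empty
                System.out.println("consumed " + value);   // -- consume VALUE --
            }
        } catch (InterruptedException e) { }
    }
}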
42. Semaphores
- Evaluation of Semaphores
- 1. Misuse of semaphores can cause failures in cooperation synchronization, e.g., the buffer will overflow if the wait of emptyspots is left out
- 2. Misuse of semaphores can cause failures in competition synchronization, e.g., the program will deadlock if the release of access is left out
43. Monitors
- Concurrent Pascal, Modula, Mesa, Java
- The idea: encapsulate the shared data and its operations to restrict access
- A monitor is an abstract data type for shared data
44. Monitor Buffer Operation
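The slide shows the monitor's buffer operations pictorially. A minimal Java sketch of the same idea (illustrative, not the slide's own code): the shared buffer is encapsulated in one class, every operation is synchronized, and callers wait while the buffer is full or empty, so the monitor itself supplies the competition synchronization.

import java.util.LinkedList;
import java.util.Queue;

class MonitorBuffer {
    private final Queue<Integer> slots = new LinkedList<>();
    private final int capacity;

    MonitorBuffer(int capacity) { this.capacity = capacity; }

    // DEPOSIT: callers block while the buffer is full.
    public synchronized void deposit(int value) throws InterruptedException {
        while (slots.size() == capacity) wait();
        slots.add(value);
        notifyAll();                 // wake any waiting fetchers
    }

    // FETCH: callers block while the buffer is empty.
    public synchronized int fetch() throws InterruptedException {
        while (slots.isEmpty()) wait();
        int value = slots.remove();
        notifyAll();                 // wake any waiting depositors
        return value;
    }
}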
45. Monitors
- Evaluation of monitors:
- Support for competition synchronization is great. There is less chance for errors, since the system controls access.
- Support for cooperation synchronization is very similar to that with semaphores, so it has the same problems.
46. Message Passing
- Message passing is a general model for concurrency
- It can model both semaphores and monitors
- It is not just for competition synchronization
- Central idea: task communication is like seeing a doctor - most of the time he waits for you or you wait for him, but when you are both ready, you get together, or rendezvous (don't let tasks interrupt each other)
47. Message Passing
- In terms of tasks, we need:
- a. A mechanism to allow a task to indicate when it is willing to accept messages
- b. A way for a task to remember who is waiting to have its message accepted and some fair way of choosing the next message
- Def: When a sender task's message is accepted by a receiver task, the actual message transmission is called a rendezvous
48. Thank You!
49. Java Threads
- Competition Synchronization with Java Threads
- A method that includes the synchronized modifier disallows any other synchronized method from running on the object while it is in execution
- If only a part of a method must be run without interference, that part can be synchronized (see the example below)
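A small illustrative example of both forms (the class and its fields are hypothetical):

class SharedCell {
    private int value;

    // Whole method synchronized: no other synchronized method or block
    // of this object can run while deposit is executing.
    public synchronized void deposit(int v) {
        value = v;
    }

    // Only part of the method must run without interference,
    // so only that part is synchronized on the object.
    public int readAndLog() {
        int copy;
        synchronized (this) {
            copy = value;
        }
        System.out.println("read " + copy);   // outside the critical section
        return copy;
    }
}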
50. Java Threads
- Cooperation Synchronization with Java Threads
- The wait and notify methods are defined in Object, which is the root class in Java, so all objects inherit them
- The wait method must be called in a loop (as shown below)
51. Basic thread operations
- A thread is created by creating a Thread object or a Runnable object
- Creating a thread does not start its concurrent execution; it must be requested through the start method
- A thread can be made to wait for another thread to finish with join
- A thread can be suspended with sleep (see the sketch below)
52. C# Threads
- Synchronizing threads:
- The Interlocked class
- The lock statement
- The Monitor class
- Evaluation:
- An advance over Java threads, e.g., any method can run in its own thread
- Thread termination is cleaner than in Java - Abort
- Synchronization is more sophisticated
53. Message Passing
Concepts:
- synchronous message passing - channel
- asynchronous message passing - port (send and receive / selective receive)
- rendezvous (bidirectional communications) - entry (call and accept ... reply)
Models:
- channel: relabelling, choice and guards
- port: message queue, choice and guards
- entry: port and channel
Practice:
- distributed computing (disjoint memory)
- threads and monitors (shared memory)
54. Synchronous Message Passing - channel
- Sender: send(e,c)
- Receiver: v = receive(c)
- Channel c (one-to-one)
- send(e,c) - send the value of the expression e to channel c. The process calling the send operation is blocked until the message is received from the channel.
- v = receive(c) - receive a value into local variable v from channel c. The process calling the receive operation is blocked waiting until a message is sent to the channel.
- cf. distributed assignment: v = e
55. Demonstration of Channel
- Try to pass all objects to the final destination
- To send: hold out the object, but it must be taken before you can do anything else.
- Advantages? Disadvantages?
56. Synchronous message passing - applet
A sender communicates with a receiver using a single channel. The sender sends a sequence of integer values from 0 to 9 and then restarts at 0 again.

Channel chan = new Channel();
tx.start(new Sender(chan, senddisp));
rx.start(new Receiver(chan, recvdisp));

(tx and rx are instances of ThreadPanel; senddisp and recvdisp are instances of SlotCanvas.)
57. Java implementation - channel

class Channel extends Selectable {
  Object chann = null;

  public synchronized void send(Object v)
      throws InterruptedException {
    chann = v;
    signal();
    while (chann != null) wait();
  }

  public synchronized Object receive()
      throws InterruptedException {
    block(); clearReady();   // part of Selectable
    Object tmp = chann;
    chann = null;
    notifyAll();             // could be notify()
    return tmp;
  }
}
58. Java implementation - sender

class Sender implements Runnable {
  private Channel chan;
  private SlotCanvas display;

  Sender(Channel c, SlotCanvas d) { chan = c; display = d; }

  public void run() {
    try {
      int ei = 0;
      while (true) {
        display.enter(String.valueOf(ei));
        ThreadPanel.rotate(12);
        chan.send(new Integer(ei));
        display.leave(String.valueOf(ei));
        ei = (ei + 1) % 10;
        ThreadPanel.rotate(348);
      }
    } catch (InterruptedException e) {}
  }
}
59. Java implementation - receiver

class Receiver implements Runnable {
  private Channel chan;
  private SlotCanvas display;

  Receiver(Channel c, SlotCanvas d) { chan = c; display = d; }

  public void run() {
    try {
      Integer v = null;
      while (true) {
        ThreadPanel.rotate(180);
        if (v != null) display.leave(v.toString());
        v = (Integer) chan.receive();
        display.enter(v.toString());
        ThreadPanel.rotate(180);
      }
    } catch (InterruptedException e) {}
  }
}
60. Selective receive
How should we deal with multiple channels?
(Figure: Sender[1] send(e1,c1), Sender[2] send(e2,c2), ..., Sender[n] send(en,cn), each sending on its own channel c1, c2, ..., cn to a single receiver.)
61. Selective receive
62. Asynchronous Message Passing - port
- Sender[1]: send(e1,p) ... Sender[n]: send(en,p)
- Receiver: v = receive(p)
- Port p (many-to-one)
- send(e,p) - send the value of the expression e to port p. The process calling the send operation is not blocked. The message is queued at the port if the receiver is not waiting.
- v = receive(p) - receive a value into local variable v from port p. The process calling the receive operation is blocked if there are no messages queued to the port.
63. Asynchronous message passing - applet
Two senders communicate with a receiver via an unbounded port. Each sender sends a sequence of integer values from 0 to 9 and then restarts at 0 again.

Port port = new Port();
tx1.start(new Asender(port, send1disp));
tx2.start(new Asender(port, send2disp));
rx.start(new Areceiver(port, recvdisp));

(tx1, tx2, and rx are instances of ThreadPanel; send1disp, send2disp, and recvdisp are instances of SlotCanvas.)
64. Java implementation - port

class Port extends Selectable {
  Vector queue = new Vector();

  public synchronized void send(Object v) {
    queue.addElement(v);
    signal();
  }

  public synchronized Object receive()
      throws InterruptedException {
    block(); clearReady();
    Object tmp = queue.elementAt(0);
    queue.removeElementAt(0);
    return tmp;
  }
}

The implementation of Port is a monitor that has synchronized access methods for send and receive.
65. Rendezvous - entry
Rendezvous is a form of request-reply to support client-server communication. Many clients may request service, but only one is serviced at a time.
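One way to sketch a request-reply rendezvous in plain Java (this is not the book's Entry class; it assumes a single client and uses SynchronousQueue for the blocking hand-off):

import java.util.concurrent.SynchronousQueue;

public class RendezvousDemo {
    // call carries the request, reply carries the answer; both queues hand
    // off synchronously, so client and server meet at the rendezvous.
    static final SynchronousQueue<Integer> call  = new SynchronousQueue<>();
    static final SynchronousQueue<Integer> reply = new SynchronousQueue<>();

    public static void main(String[] args) {
        new Thread(() -> {                        // server: accept ... reply
            try {
                while (true) {
                    int request = call.take();    // accept one call at a time
                    reply.put(request * 2);       // perform the service, send the reply
                }
            } catch (InterruptedException e) { }
        }).start();

        try {                                     // client: entry call
            call.put(21);                         // blocks until the server accepts
            System.out.println("reply = " + reply.take());
        } catch (InterruptedException e) { }
    }
}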