More Shared Memory Programming And Intro to Message Passing
1
More Shared Memory Programming
And Intro to Message Passing
  • Laxmikant Kale
  • CS433

2
Posix Threads on Origin 2000
  • Shared memory programming on Origin 2000:
    important calls
  • Thread creation and joining
  • pthread_create(pthread_t *threadID,
    pthread_attr_t *attr, functionName, (void *) arg)
  • pthread_join(pthread_t threadID, void **result)
  • Locks
  • pthread_mutex_t lock
  • pthread_mutex_lock(&lock)
  • pthread_mutex_unlock(&lock)
  • Condition variables (see the sketch after this
    list)
  • pthread_cond_t cv
  • pthread_cond_init(&cv, (pthread_condattr_t *) 0)
  • pthread_cond_wait(&cv, &cv_mutex)
  • pthread_cond_broadcast(&cv)
  • Semaphores, and other calls
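
Not part of the original slides: a minimal sketch of how the condition-variable calls above combine with a mutex. One thread waits until a shared ready flag is set; the main thread sets the flag and broadcasts. The names (waiter, ready, mtx, cv) are illustrative.

/* sketch: wait on a condition variable until a shared flag is set */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cv;
static int ready = 0;

static void *waiter(void *arg)
{
  (void) arg;
  pthread_mutex_lock(&mtx);
  while (!ready)                   /* always re-check the predicate */
    pthread_cond_wait(&cv, &mtx);  /* releases mtx while blocked */
  pthread_mutex_unlock(&mtx);
  printf("waiter: saw the ready flag\n");
  return 0;
}

int main(void)
{
  pthread_t tid;
  pthread_cond_init(&cv, (pthread_condattr_t *) 0);
  pthread_create(&tid, NULL, waiter, NULL);
  pthread_mutex_lock(&mtx);
  ready = 1;                       /* change shared state under the lock */
  pthread_cond_broadcast(&cv);     /* then wake any waiting threads */
  pthread_mutex_unlock(&mtx);
  pthread_join(tid, NULL);
  return 0;
}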

3
Declarations
/* pgm.c */
#include <pthread.h>
#include <stdlib.h>
#include <stdio.h>
#define nThreads 4
#define nSamples 1000000
typedef struct _shared_value {
  pthread_mutex_t lock; int value;
} shared_value;
shared_value sval;
4
Function in each thread
void *doWork(void *id)
{
  size_t tid = (size_t) id;
  int nsucc, ntrials, i;
  ntrials = nSamples / nThreads;
  nsucc = 0;
  srand48((long) tid);
  for (i = 0; i < ntrials; i++) {
    double x = drand48();
    double y = drand48();
    if ((x*x + y*y) < 1.0) nsucc++;
  }
  pthread_mutex_lock(&(sval.lock));
  sval.value += nsucc;
  pthread_mutex_unlock(&(sval.lock));
  return 0;
}
5
Main function
int main(int argc, char *argv[])
{
  pthread_t tids[nThreads];
  size_t i;
  double est;
  pthread_mutex_init(&(sval.lock), NULL);
  sval.value = 0;
  printf("Creating Threads\n");
  for (i = 0; i < nThreads; i++)
    pthread_create(&tids[i], NULL, doWork, (void *) i);
  printf("Created Threads... waiting for them to complete\n");
  for (i = 0; i < nThreads; i++)
    pthread_join(tids[i], NULL);
  printf("Threads Completed...\n");
  est = 4.0 * ((double) sval.value / (double) nSamples);
  printf("Estimated Value of PI %lf\n", est);
  exit(0);
}
6
Compiling Makefile
# Makefile
# for solaris: FLAGS = -mt
# for Origin2000: FLAGS =
pgm: pgm.c
	cc -o pgm $(FLAGS) pgm.c -lpthread
clean:
	rm -f pgm *.o
7
Message Passing
  • Program consists of independent processes,
  • Each running in its own address space
  • Processors have direct access to only their
    memory
  • Each processor typically executes the same
    executable, but may be running a different part
    of the program at any given time
  • Special primitives exchange data: send/receive
  • Early theoretical systems
  • CSP: Communicating Sequential Processes
  • send and matching receive from another processor
    both wait.
  • OCCAM on Transputers used this model
  • Performance problems due to unnecessary(?) wait
  • Current systems
  • Send operations don't wait for receipt on the
    remote processor

8
Message Passing
[Diagram: on a send, the data is copied out of PE0's buffer, transferred, and copied into PE1's buffer by the matching receive]
9
Basic Message Passing
  • We will describe a hypothetical message passing
    system,
  • with just a few calls that define the model
  • Later, we will look at real message passing
    models (e.g. MPI), with a more complex set of
    calls
  • Basic calls (a small sketch follows this list)
  • send(int proc, int tag, int size, char *buf)
  • recv(int proc, int tag, int size, char *buf)
  • Recv may return the actual number of bytes
    received in some systems
  • tag and proc may be wildcarded in a recv
  • recv(ANY, ANY, 1000, buf)
  • broadcast
  • Other global operations (reductions)
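
Not part of the original slides: a minimal sketch of these basic calls, assuming the hypothetical model above. myProcessorNum() is the helper used in the pi example on the next slides; the tag value 7 and buffer size are arbitrary.

/* Sketch only: processor 1 sends 1000 bytes to processor 0, which
   accepts a message from any source and any tag via wildcards. */
char buf[1000];
if (myProcessorNum() == 1)
  send(0, 7, 1000, buf);       /* returns without waiting for the receive */
else if (myProcessorNum() == 0)
  recv(ANY, ANY, 1000, buf);   /* blocks until a matching message arrives */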

10
Pi with message passing
int count, c;
main() {
  Seed s = makeSeed(myProcessor);
  for (I = 0; I < 100000/P; I++) {
    x = random(s);
    y = random(s);
    if (x*x + y*y < 1.0) count++;
  }
  send(0, 1, 4, &count);
11
Pi with message passing
  if (myProcessorNum() == 0) {
    for (I = 0; I < maxProcessors(); I++) {
      recv(I, 1, 4, &c);
      count += c;
    }
    printf("pi=%f\n", 4.0*count/100000);
  }
} /* end function main */
12
Collective calls
  • Message passing is often, but not always, used
    for SPMD style of programming
  • SPMD: Single Program, Multiple Data
  • All processors execute essentially the same
    program, and same steps, but not in lockstep
  • All communication is almost in lockstep
  • Collective calls (an MPI sketch follows this
    list)
  • global reductions (such as max or sum)
  • syncBroadcast (often just called broadcast)
  • syncBroadcast(whoAmI, dataSize, dataBuffer)
  • whoAmI: sender or receiver
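
Not from the original slides: a sketch, using MPI (introduced on the next slide), of how the per-processor pi counts could be combined with a global sum reduction and then broadcast back. The sample count of 100000 matches the earlier example; the variable names are illustrative.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
  int rank, nprocs, local_count = 0, total_count = 0;
  double est = 0.0;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  /* ... each process computes local_count as in the earlier pi example ... */

  /* global reduction: sum all local counts onto rank 0 */
  MPI_Reduce(&local_count, &total_count, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

  if (rank == 0)
    est = 4.0 * total_count / 100000;

  /* broadcast: rank 0 is the sender, every other rank receives */
  MPI_Bcast(&est, 1, MPI_DOUBLE, 0, MPI_COMM_WORLD);

  printf("process %d: estimated pi = %f\n", rank, est);
  MPI_Finalize();
  return 0;
}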

13
Standardization of message passing
  • Historically
  • nxlib (on Intel hypercubes)
  • nCUBE variants
  • PVM
  • Everyone had their own variants
  • MPI standard
  • Vendors, ISVs, and academics got together
  • with the intent of standardizing current practice
  • Ended up with a large standard
  • Popular, due to vendor support
  • Support for
  • communicators: avoiding tag conflicts, .. (see
    the sketch after this list)
  • Data types
  • ..