1
Message Passing Computing
  • First steps in MPI programming

2
Message Passing Multicomputer
Programs running on separate computers are kept in step by passing messages to each other, e.g. the master-slave approach.
[Figure: computers, each with a processor and local memory, exchanging messages over an interconnection network]
3
Message passing computing using user-level
message passing libraries
  • Two primary mechanisms are needed
  • A method of creating separate processes for
    execution on different computers
  • A method of sending and receiving messages

4
Recall the two MIMD programming models
Multiple Program Multiple Data (MPMD): each processor has its own program to execute.
Single Program Multiple Data (SPMD): a single source program is written, and each processor executes its own copy of the program.
5
Multiple Program Multiple Data Model
Separate programs for each processor. One processor executes the master process; the other processes are started from within the master process - dynamic process creation.
[Figure: process 1 calls Spawn() to start execution of process 2; time runs downward]
6
Parallel Virtual Machine
PVM is a software library for parallel programming under the MPMD model, developed at Oak Ridge National Laboratory. The programmer decomposes the application into separate programs - usually a master program and a group of identical slave programs. Each program is compiled to execute on specific types of computers. The set of computers used on a problem must be defined prior to executing the program.
7
Message routing between computers is done by PVM daemon processes installed by PVM on the computers in the network.
This set of predefined computers forms the virtual machine.
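As an illustration of dynamic process creation, here is a minimal sketch of a PVM master, assuming a slave executable named "slave" installed on the virtual machine (the executable name and the count of 4 are illustrative, not from the slides):

/* pvm_master.c - sketch of dynamic process creation with PVM 3 */
#include <stdio.h>
#include "pvm3.h"

int main(void)
{
    int tids[4];                     /* task ids of the spawned slaves */
    int mytid = pvm_mytid();         /* enroll this process in PVM */
    /* dynamic process creation: start 4 copies of the (hypothetical)
       "slave" program anywhere in the virtual machine */
    int n = pvm_spawn("slave", (char **)0, PvmTaskDefault, "", 4, tids);
    printf("master %x spawned %d slaves\n", mytid, n);
    pvm_exit();                      /* leave the virtual machine */
    return 0;
}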
8
Single Program Multiple Data Model
The same program runs on all processors. Within the program, control statements select different parts for each processor to execute. All executables are started together - static process creation.
[Figure: one source file, compiled to suit each processor, producing one executable per processor]
9
MPI (Message Passing Interface)
A standard for communication across several processors, developed by a group of academic and industrial partners to foster widespread use and portability. MPI is a standard - it defines routines, not implementations. Several free implementations exist.
10
MPICH
  • MPI defines a standard interface of which there
    are several implementations
  • MPICH is the implementation we will use
  • Version 1.2.5 is default on the Sisters
  • Version 2-0.97 also available on the Sisters
  • Latest version 2-1.0 available from
  • http://www-unix.mcs.anl.gov/mpi/mpich
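A typical compile-and-run session with MPICH looks like the following (exact command names and options depend on the installation; first.c is the example program from slide 18):

mpicc first.c -o first
mpirun -np 2 ./first 10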

11
MPI Characteristics
  • Bindings to Fortran/Fortran90/C/C++
  • Routines start with MPI_
  • MPI errors
  • Routines return an int status
  • MPI aborts on error by default (see the sketch below)
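Because errors abort by default, a program that wants to inspect the returned status must first switch the error handler to MPI_ERRORS_RETURN. A minimal sketch (the deliberately invalid destination rank is only there to trigger an error):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int N = 42, numproc, rc;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &numproc);
    /* by default MPI aborts on error; request error codes instead */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);
    /* rank numproc does not exist, so this send must fail */
    rc = MPI_Send(&N, 1, MPI_INT, numproc, 0, MPI_COMM_WORLD);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len); /* readable text for the code */
        fprintf(stderr, "MPI_Send failed: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}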

12
MPI Process creation and Execution
  • Purposely not defined by the standard; depends upon the
    implementation
  • Only static process creation is supported in MPI
    version 1. All processes must be defined prior to
    execution and started together
  • Originally SPMD model but MPMD now possible

13
Communicators
  • A communicator defines the scope of a communication operation
  • Initially all processes are enrolled in a universe called
    MPI_COMM_WORLD
  • Each process is given a unique rank - a number from 0 to n-1
  • Other communicators can be established for groups of processes,
    as sketched below
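For example, here is a minimal sketch that splits MPI_COMM_WORLD into two new communicators, one holding the even-ranked and one the odd-ranked processes (the even/odd grouping is illustrative):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int world_rank, sub_rank;
    MPI_Comm subcomm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    /* split MPI_COMM_WORLD into two groups: even and odd ranks */
    MPI_Comm_split(MPI_COMM_WORLD, world_rank % 2, world_rank, &subcomm);
    MPI_Comm_rank(subcomm, &sub_rank);  /* rank within the new group */
    printf("world rank %d has rank %d in its subgroup\n",
           world_rank, sub_rank);
    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}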

14
MPI program structure in the SPMD model
int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    .
    .
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        // Master operations
    else
        // Slave operations
    .
    .
    MPI_Finalize();
}
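Filled in as a complete, compilable program (the printf bodies stand in for the real master and slave operations):

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myrank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
    if (myrank == 0)
        printf("I am the master\n");       /* master operations */
    else
        printf("I am slave %d\n", myrank); /* slave operations */
    MPI_Finalize();
    return 0;
}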
15
First MPI program
  • Task: calculate the sum of all integers from 0 to N
  • Do this by calculating the first and second partial sums in
    parallel

16
Master/slave approach
MASTER                              SLAVE
Input N
Send N                  ------->    (receives N)
Compute sum1 = 0+...+(N/2-1)        Compute sum2 = N/2+...+(N-1)
Receive sum2            <-------    Send sum2
Compute sum = sum1 + sum2
Output sum
17
Pseudo-code
if (this is the master)
    get N from user
    send N to the slave
    calculate lower partial sum
    receive upper partial sum from the slave
    add lower and upper partial sums
    output result
else
    receive N from the master
    calculate upper partial sum
    send this partial sum to the master
18
//first.c Adding 10 numbers using two nodes
#include "mpi.h"
#include <stdio.h>
#include <stdlib.h>
//always use argc and argv, as mpirun will pass the appropriate parms.
int main(int argc, char *argv[])
{
    int N, i, result, sum0, sum1, myid;
    MPI_Status Stat; //status variable, so operations can be checked
    MPI_Init(&argc, &argv); //INITIALIZE
    MPI_Comm_rank(MPI_COMM_WORLD, &myid); //which node is this?
    // Segment to be executed by master
    if (myid == 0) {
        N = atoi(argv[1]); //get the number the user wants
        //Master sends 'N' to slave
        MPI_Send(&N, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        sum0 = 0; //partial result for node 0
        for (i = 1; i <= N/2; i++)
            sum0 = sum0 + i;
        result = sum0;
        //Master waits to receive 'sum1' from slave
        MPI_Recv(&sum1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &Stat);
        result = sum0 + sum1; //adds two partial results
        fprintf(stdout, "The final result is %d \n", result);
        fprintf(stdout, "%d\n", (int)sizeof(MPI_Status));
    }
    // Segment to be executed by slave
    else if (myid == 1) {
        //slave waits to receive 'N' from master
        MPI_Recv(&N, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &Stat);
        sum1 = 0;
        for (i = N/2 + 1; i <= N; i++)
            sum1 = sum1 + i;
        //slave sends 'sum1' to master
        MPI_Send(&sum1, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }
    MPI_Finalize();
    return 0;
}
C code
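For example, running first.c as ./first 10 on two nodes: the master computes sum0 = 1+2+3+4+5 = 15, the slave computes sum1 = 6+7+8+9+10 = 40, and the printed result is 55.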
19
#include <iostream>
#include <cstdlib>
#include "mpi.h"
int main(int argc, char *argv[])
{
    MPI::Init(argc, argv);
    int myid = MPI::COMM_WORLD.Get_rank();
    if (myid == 0) {
        int N = atoi(argv[1]); //get the number the user wants
        //Master sends 'N' to slave
        MPI::COMM_WORLD.Send(&N, 1, MPI::INT, 1, 0);
        int sum0 = 0; //partial result for node 0
        for (int i = 1; i <= N/2; i++)
            sum0 = sum0 + i;
        int result = sum0;
        //Master waits to receive 'sum1' from slave
        int sum1;
        MPI::COMM_WORLD.Recv(&sum1, 1, MPI::INT, 1, 0);
        result = sum0 + sum1; //adds two partial results
        std::cout << "The final result is " << result << std::endl;
    }
    else if (myid == 1) {
        //slave waits to receive 'N' from master
        int N;
        MPI::COMM_WORLD.Recv(&N, 1, MPI::INT, 0, 0);
        int sum1 = 0;
        for (int i = N/2 + 1; i <= N; i++)
            sum1 = sum1 + i;
        //slave sends 'sum1' to master
        MPI::COMM_WORLD.Send(&sum1, 1, MPI::INT, 0, 0);
    }
    MPI::Finalize();
    return 0;
}
C++ code
20
Who am I? Who are they?
A process finds out who it is (its rank) by
int myid;
MPI_Comm_rank(MPI_COMM_WORLD, &myid);
To find the total number of processes
int numproc;
MPI_Comm_size(MPI_COMM_WORLD, &numproc);
To get the host name of the node
int namelen;
char name[MPI_MAX_PROCESSOR_NAME];
MPI_Get_processor_name(name, &namelen);
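Put together as a complete program, these three calls give each process its full identity:

#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int myid, numproc, namelen;
    char name[MPI_MAX_PROCESSOR_NAME];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);    /* who am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &numproc); /* how many of us? */
    MPI_Get_processor_name(name, &namelen);  /* where am I running? */
    printf("process %d of %d on %s\n", myid, numproc, name);
    MPI_Finalize();
    return 0;
}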
21
MPI Send Routine
int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm)

buf      - address of send buffer
count    - number of items to send
datatype - datatype of each item
dest     - rank of destination process
tag      - message tag, used to distinguish between different
           messages a process may send/receive
comm     - communicator
Returns an error status.
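For example, the master in first.c sends a single integer to process 1 with tag 0:

MPI_Send(&N, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);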
22
MPI Receive Routine
int MPI_Recv(void *buf, int count, MPI_Datatype datatype,
             int source, int tag, MPI_Comm comm, MPI_Status *status)

buf      - address of receive buffer
count    - number of items to receive
datatype - datatype of each item
source   - rank of source process
tag      - message tag
comm     - communicator
status   - status after the operation
Returns an error status.
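The status argument can be queried after the receive. A short sketch, reusing the variables from first.c (this fragment belongs inside an initialized MPI program):

MPI_Status Stat;
int count;
MPI_Recv(&sum1, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, &Stat);
/* how many items of the given datatype actually arrived */
MPI_Get_count(&Stat, MPI_INT, &count);
/* Stat.MPI_SOURCE and Stat.MPI_TAG identify the sender and the tag,
   useful when receiving with MPI_ANY_SOURCE or MPI_ANY_TAG */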
23
Second MPI program
This time, use more than two nodes.
[Figure: the master sends N to every slave and computes sum0 itself; each slave receives N, computes its partial sum1, and sends it back to the master, which receives all the sum1s]
24
include "mpi.h" include ltstdio.hgt int
main(argc,argv)int argcchar argv int
numproc, myid, namelen int
N,i,result,sum0,sum1 char processor_nameMPI_MA
X_PROCESSOR_NAME MPI_Status Stat//status
variable, so operations can be checked
MPI_Init(argc,argv)//INITIALIZE
MPI_Comm_size(MPI_COMM_WORLD, numproc) //how
many processors?? MPI_Comm_rank(MPI_COMM_WORLD,
myid) //what is THIS processor-ID? //what
is THIS processor name (hostname)?
MPI_Get_processor_name(processor_name,namelen)
printf("IDd s\n", myid, processor_name)
if (myid 0) //this is the master
Natoi(argv1)//get the number the user wants
for (i1iltnumproci) //send to all nodes
MPI_Send(N, 1, MPI_INT, i,0,
MPI_COMM_WORLD) sum00//partial
result for node 0 for(i1iltN/numproci)
sum0sum0i resultsum0 for
(i1iltnumproci) //receive from all nodes
MPI_Recv(sum1, 1, MPI_INT, i,0,
MPI_COMM_WORLD, Stat) resultresultsum1/
/adds the various sums
fprintf(stdout,"The sum from 1 to d is d
\n",N,result) else //this is not the
master MPI_Recv(N, 1, MPI_INT, 0, 0,
MPI_COMM_WORLD, Stat) sum10
for(i(N/numprocmyid)1ilt(N/numproc(myid1))i
) sum1sum1i
MPI_Send(sum1, 1, MPI_INT, 0, 0,
MPI_COMM_WORLD) MPI_Finalize()
C code second.c
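Note that second.c divides the work using the integer division N/numproc. If numproc does not divide N evenly, the integers from (N/numproc)*numproc + 1 up to N are summed by no process, so the result is only correct when numproc divides N.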
25
(Some) MPI Datatypes
User-defined types are also possible, in addition to the predefined ones listed below.
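For reference, the predefined C datatypes in the MPI standard include (a partial list):

MPI datatype   C equivalent
MPI_CHAR       char
MPI_SHORT      short
MPI_INT        int
MPI_LONG       long
MPI_FLOAT      float
MPI_DOUBLE     double
MPI_BYTE       (untyped bytes)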