Title: MPI IN ONE DAY
1MPI IN ONE DAY
- Stefano Leonardi
- Gustavo Gutierrez
6Communication through shared memory is possible
only if the processes are on the same computer;
otherwise it goes over the network.
7What is MPI?
- MPI stands for Message Passing Interface
- MPI-1 released in May 1994 (about 60 contributors)
- Objectives
- Portable
- Efficient
- Easy (everything can be done with 6 functions)
8- The standard defines the names of the calls, the
order of their arguments, and how they are called
from C and Fortran
- Machine vendors are free to implement and
optimize the subroutines for their own hardware
- Is it large or small? MPI-1 contains 128
functions, but most problems can be solved by
calling just 6
9- Each parallel application is made of N autonomous
processes; each one is independent and can
exchange data with the others
- There is no constraint that the processes run on
different CPUs, but it is important for high
performance
10The model used is SPMD, Single Program Multiple
Data
Each process executes the same program but can
work on different data and execute different
operations
call MPI_XXXX(parameter, ..., err)
MPI_ is the prefix of all MPI subroutines. Even
if Fortran is case insensitive, MPI subroutines
and constants are written in uppercase. The last
parameter is the error flag (INTEGER)
11Structure of a program
- header file: the standard mpi.h for C, mpif.h
for Fortran
12Program hello world
      program main
      include 'mpif.h'
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, my_node, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      write(*,*) 'Hello world from process', my_node, ' of ', numprocs
      if (my_node.eq.3) then
         c = 66
         write(*,*) c, 'Look, I am node', my_node
      endif
      call MPI_FINALIZE(rc)
      stop
      end
13MPI_COMM_WORLD
Each communicator is identified by a number and
represents a group of processes which can
communicate. It can be used at any time after
MPI_INIT has been called.
14A process can determine the size of a
communicator by calling MPI_Comm_size.
A process can determine its id (rank) within the
communicator with MPI_Comm_rank. The ranks of
the processes are integers, the smallest being 0.
15First Program
16Communication
Sender send a message, receiver receives it
(point to point)
It contains the addresses and the content
17- buffer: the data of the message
- datatype: the type of the data
- count: the number of data items
- source: the id of the sender
- destination: the id of the receiver
- communicator: the id of the group of processes
- tag: an id to classify the message
18- OUT buf: array where the data have to be stored
- IN count: number of data items to receive
- IN dtype: type of data to receive
- IN src: the id of the source the message comes from
- IN tag: id of the message
- IN comm: communicator
- OUT status: information on the received message
- OUT err: error code
MPI compares the envelope of the receive call
with the pending messages: if a matching message
is present it is received; otherwise the call
cannot complete until one of the pending
messages matches.
19Program send rec
      program main
      include 'mpif.h'
      INTEGER STATUS(MPI_STATUS_SIZE)
      call MPI_INIT( ierr )
      call MPI_COMM_RANK( MPI_COMM_WORLD, my_node, ierr )
      call MPI_COMM_SIZE( MPI_COMM_WORLD, numprocs, ierr )
      csum = 0.
      n2do = 100/numprocs
      icheck = n2do*numprocs
      if (icheck.ne.100) then
         write(*,*) 'Mistake'
         go to 100
      endif
      ...
20    ...
      csum = 0.
      do j = 1, n2do
         jj = my_node*n2do + j
         csum = csum + jj
      enddo
      write(*,*) 'My_node', my_node, csum, n2do
      csum2 = csum
      if (my_node.eq.0) then
         do l = 1, numprocs-1
            call MPI_Recv(sumrec, 1, MPI_REAL, l, 0,
     1                    MPI_COMM_WORLD, status, ierr)
            csum2 = csum2 + sumrec
         enddo
         write(*,*) 'total is', csum2
      else
         call MPI_Send(csum, 1, MPI_REAL, 0, 0,
     1                 MPI_COMM_WORLD, ierr)
      endif
 100  call MPI_FINALIZE(rc)
      stop
      end
21Assignment
- 1. Process 0 reads A from standard input
- 2. At t=t1 process 0 sends A to process 1
- 3. At t=t2 process 1 sends A to process 2
- ...
- At t=tn process N-1 sends A to process 0
- A=A-1
- If A=0 print, otherwise go to 1