1
Basic MPI
  • Le Yan
  • Jan 25, 2007

2
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

3
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

4
Message passing
  • Each process has its own exclusive address space
    and data to be shared must be explicitly
    transferred from one to another.
  • Message passing involves two processes: one
    sending the data and one receiving it.
  • Most message-passing programs use the single
    program multiple data (SPMD) model.

5
MPI: Message Passing Interface
  • MPI defines a standard library for
    message-passing
  • MPI supports both C and Fortran languages.
  • The MPI standard defines both the syntax as well
    as the semantics of a core set of library
    routines.
  • A set of 125 functions, 6 of which form the core
    set (listed below)
  • Vendors may provide more functions for enhanced
    flexibility
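The six core routines are conventionally taken to be (Fortran names):
  mpi_init / mpi_finalize          start up and shut down the MPI environment
  mpi_comm_size / mpi_comm_rank    query the number of processes and one's own rank
  mpi_send / mpi_recv              blocking point-to-point send and receive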

6
Why MPI?
  • One of the oldest libraries
  • Vendor implementations are available on almost
    all commercial parallel computers.
  • Minimal requirement on the underlying hardware
  • Explicit parallelization
  • Efficient
  • Scales to a large number of processors

7
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

8
MPI Subroutines
  • Environment and communicator management
    subroutines: initialization and termination,
    communicator setup
  • Collective communication subroutines: message
    transfer involving all processes in a communicator
  • Point-to-point communication subroutines: message
    transfer from one process to another

9
Communicators
  • A communicator is an identifier associated with a
    group of processes.
  • The communicator MPI_COMM_WORLD defined in the
    MPI header file is the group that contains all
    processes.
  • Different communicators can coexist.
  • A process can belong to different communicators,
    but has a unique rank (ID) in each of the
    communicators it belongs to (see the sketch below).
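A minimal sketch (not from the original slides) of one process holding different ranks in two communicators; splitting MPI_COMM_WORLD into even and odd world ranks with mpi_comm_split is an illustrative choice:

PROGRAM comms
INCLUDE 'mpif.h'
INTEGER newcomm, ierr
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
icolor = MOD(myrank, 2)                              ! even ranks in one group, odd in the other
CALL MPI_COMM_SPLIT(MPI_COMM_WORLD, icolor, myrank, newcomm, ierr)
CALL MPI_COMM_RANK(newcomm, newrank, ierr)           ! same process, different rank
PRINT *, 'world rank', myrank, 'has rank', newrank, 'in its sub-communicator'
CALL MPI_FINALIZE(ierr)
END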

10
Outline
  • Introduction: what is MPI and why MPI?
  • MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

11
A sample MPI program
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
12
Header file
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
13
Initialization
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
14
Termination
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
15
Communicator size
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
16
Process rank
include 'mpif.h'
...
call mpi_init(ierr)
...
call mpi_comm_size(comm, size, ierr)
call mpi_comm_rank(comm, rank, ierr)
...
call mpi_finalize(ierr)
17
Example
include 'mpif.h'
integer rank, size, ierr
call mpi_init(ierr)
call mpi_comm_size(MPI_COMM_WORLD, size, ierr)
call mpi_comm_rank(MPI_COMM_WORLD, rank, ierr)
if (rank .eq. 0) then
  print *, 'I am the root'
  print *, 'My rank is', rank
else
  print *, 'I am not the root'
  print *, 'My rank is', rank
endif
call mpi_finalize(ierr)
Output (assuming 3 processes):
I am not the root
My rank is 2
I am the root
My rank is 0
I am not the root
My rank is 1
18
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

19
Collective communication
  • Collective communications allow us to exchange
    data between the processes that belong to the
    specified communicator.
  • There are three types of collective
    communications:
  • Data movement, e.g. mpi_bcast, mpi_gather,
    mpi_allgather
  • Reduction (computation), e.g. mpi_reduce,
    mpi_allreduce
  • Synchronization, e.g. mpi_barrier

20
Collective communication
  • Collective communications allow us to exchange
    data between the processes that belong to the
    specified communicator.
  • There are three types of collective
    communications:
  • Data movement, e.g. mpi_bcast, mpi_gather,
    mpi_allgather
  • Reduction (computation), e.g. mpi_reduce,
    mpi_allreduce
  • Synchronization, e.g. mpi_barrier

21
Broadcast
  • Send a message from a process (called root) to
    all other processes in the same communicator.
  • Syntax:
  • Fortran: mpi_bcast
  • C: MPI_Bcast
  • C++: MPI::Comm::Bcast

22
Example
PROGRAM bcast
INCLUDE 'mpif.h'
INTEGER imsg(4)
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
IF (myrank == 0) THEN
  DO i = 1, 4
    imsg(i) = i
  ENDDO
ELSE
  DO i = 1, 4
    imsg(i) = 0
  ENDDO
ENDIF
PRINT *, 'Before', imsg
CALL MP_FLUSH(1)
CALL MPI_BCAST(imsg, 4, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
PRINT *, 'After ', imsg
CALL MPI_FINALIZE(ierr)
END
Output (3 processes):
0: Before 1 2 3 4
1: Before 0 0 0 0
2: Before 0 0 0 0
0: After  1 2 3 4
1: After  1 2 3 4
2: After  1 2 3 4
23
Gather
  • Collects individual messages from processes in
    the communicator to the root process.
  • Syntax:
  • Fortran: mpi_gather
  • C: MPI_Gather
  • C++: MPI::Comm::Gather

24
Example
PROGRAM gather
INCLUDE 'mpif.h'
INTEGER irecv(3)
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
isend = myrank + 1
CALL MPI_GATHER(isend, 1, MPI_INTEGER, irecv, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr)
IF (myrank == 0) THEN
  PRINT *, 'irecv =', irecv
ENDIF
CALL MPI_FINALIZE(ierr)
END
Output (3 processes):
0: irecv = 1 2 3
25
Reduction
  • Applies a reduction operation to the vector
    sendbuf over the set of processes specified by
    comm and places the result in recvbuf on root.
  • Syntax:
  • Fortran: mpi_reduce
  • C: MPI_Reduce
  • C++: MPI::Comm::Reduce

26
Reduction operations
  • Summation and product
  • Maximum and minimum
  • Max and min location
  • Logical (AND, OR, XOR)
  • Bitwise (AND, OR, XOR)
  • User defined, created with the subroutine
    mpi_op_create (sketched below)
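A sketch (not from the original slides) of registering a user-defined operation with mpi_op_create; the routine name myprod and the variables are illustrative, and the operation simply multiplies elements, so it behaves like the built-in product:

SUBROUTINE myprod(invec, inoutvec, len, itype)
INTEGER len, itype, i
REAL invec(len), inoutvec(len)
DO i = 1, len
  inoutvec(i) = invec(i) * inoutvec(i)           ! combine element-wise
ENDDO
END

PROGRAM userop
INCLUDE 'mpif.h'
EXTERNAL myprod
INTEGER myop, ierr
REAL x, res
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
x = myrank + 1.0
CALL MPI_OP_CREATE(myprod, .TRUE., myop, ierr)   ! .TRUE. marks the operation as commutative
CALL MPI_REDUCE(x, res, 1, MPI_REAL, myop, 0, MPI_COMM_WORLD, ierr)
IF (myrank == 0) PRINT *, 'product =', res
CALL MPI_OP_FREE(myop, ierr)
CALL MPI_FINALIZE(ierr)
END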

27
Examples
PROGRAM reduce
INCLUDE 'mpif.h'
REAL a(9)
CALL MPI_INIT(ierr)
CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nprocs, ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
ista = myrank * 3 + 1
iend = ista + 2
DO i = ista, iend
  a(i) = i
ENDDO
sum = 0.0
DO i = ista, iend
  sum = sum + a(i)
ENDDO
CALL MPI_REDUCE(sum, tmp, 1, MPI_REAL, MPI_SUM, 0, MPI_COMM_WORLD, ierr)
sum = tmp
IF (myrank == 0) PRINT *, 'sum =', sum
CALL MPI_FINALIZE(ierr)
END
Output (3 processes):
0: sum = 45.00000000
28
Some other collective communication
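One commonly used example among the other collectives is mpi_allreduce, which combines a reduction with delivery of the result to every process. A minimal sketch (not reconstructed from this slide):

PROGRAM allred
INCLUDE 'mpif.h'
REAL x, total
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
x = myrank + 1.0
CALL MPI_ALLREDUCE(x, total, 1, MPI_REAL, MPI_SUM, MPI_COMM_WORLD, ierr)   ! every rank receives the sum
PRINT *, myrank, 'total =', total
CALL MPI_FINALIZE(ierr)
END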
29
Synchronization
  • Blocks each process in the communicator until all
    processes have called it.
  • Fortran: mpi_barrier
  • C: MPI_Barrier
  • C++: MPI::Comm::Barrier
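A minimal sketch (not from the slides) of a common use of mpi_barrier: lining all processes up before timing a section with mpi_wtime; the loop below merely stands in for real work:

PROGRAM timing
INCLUDE 'mpif.h'
DOUBLE PRECISION t0, t1, s
CALL MPI_INIT(ierr)
CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)   ! all processes leave the barrier together
t0 = MPI_WTIME()
s = 0.0d0
DO i = 1, 1000000                        ! stand-in for the real work
  s = s + i
ENDDO
t1 = MPI_WTIME()
PRINT *, 'elapsed seconds:', t1 - t0
CALL MPI_FINALIZE(ierr)
END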

30
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

31
Point-to-point communication
  • Process to process communication
  • More flexible compared to collective
    communications, but less efficient
  • There are two types of point-to-point
    communication:
  • Blocking
  • Non-blocking
  • All collective communications are blocking.

32
Basic concept (buffered)
(Diagram: the message is copied in three steps - from the send buffer to the system buffer, transferred between the processes, and finally copied from the system buffer into the receive buffer.)
33
Blocking
  • The call does not return until the data transfer
    is actually complete.
  • The sending process waits until all data have been
    transferred from the send buffer to the system
    buffer.
  • The receiving process waits until all data have
    been transferred from the system buffer to the
    receive buffer.

34
Non-blocking
  • Returns immediately after the data transfer is
    initiated.
  • Faster than blocking procedures.
  • Can cause problems if the send or receive buffer
    is updated before the data transfer is over.

35
Subroutines
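Presumably this slide tabulated the point-to-point subroutines used in the following examples; the usual pairing is:
  Blocking        mpi_send, mpi_recv
  Non-blocking    mpi_isend, mpi_irecv (completed with mpi_wait or mpi_test)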
36
Examples
  • Blocking send and receive

IF (myrank == 0) THEN
  CALL MPI_SEND(sendbuf, count, datatype, destination, tag, comm, ierror)
ELSEIF (myrank == 1) THEN
  CALL MPI_RECV(recvbuf, count, datatype, source, tag, comm, status, ierror)
ENDIF
  • Non-blocking send and receive

IF (myrank == 0) THEN
  CALL MPI_ISEND(sendbuf, count, datatype, destination, tag, comm, ireq, ierror)
ELSEIF (myrank == 1) THEN
  CALL MPI_IRECV(recvbuf, count, datatype, source, tag, comm, ireq, ierror)
ENDIF
CALL MPI_WAIT(ireq, istatus, ierror)
37
Data exchange between 2 procs
IF (myrank == 0) THEN
  CALL MPI_SEND(sendbuf, ...)
  CALL MPI_RECV(recvbuf, ...)
ELSEIF (myrank == 1) THEN
  CALL MPI_SEND(sendbuf, ...)
  CALL MPI_RECV(recvbuf, ...)
ENDIF
  • Deadlock
  • If the system buffer is smaller than the data to
    be transferred, neither process can reach its
    MPI_RECV: each MPI_SEND cannot complete because
    the system buffer fills up before all of the data
    has been transferred.

38
Data exchange between 2 procs
IF (myrank == 0) THEN
  CALL MPI_ISEND(sendbuf, ...)
  CALL MPI_RECV(recvbuf, ...)
  CALL MPI_WAIT(ireq, ...)
ELSEIF (myrank == 1) THEN
  CALL MPI_ISEND(sendbuf, ...)
  CALL MPI_RECV(recvbuf, ...)
  CALL MPI_WAIT(ireq, ...)
ENDIF
  • Deadlock-free code
  • Both processes return immediately from MPI_ISEND
    and start receiving data.
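Another deadlock-free alternative, not covered in these slides, is the combined mpi_sendrecv call, which lets the MPI library order the send and the receive internally. A sketch; the buffer size and the two-process assumption are illustrative:

PROGRAM exchange
INCLUDE 'mpif.h'
INTEGER istatus(MPI_STATUS_SIZE)
REAL sendbuf(10), recvbuf(10)
CALL MPI_INIT(ierr)
CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
sendbuf = myrank                        ! something recognisable to exchange
iother = 1 - myrank                     ! partner rank, assuming exactly 2 processes
CALL MPI_SENDRECV(sendbuf, 10, MPI_REAL, iother, 0, &
                  recvbuf, 10, MPI_REAL, iother, 0, &
                  MPI_COMM_WORLD, istatus, ierr)
PRINT *, myrank, 'received data from', iother
CALL MPI_FINALIZE(ierr)
END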

39
Outline
  • Introduction: what is MPI and why MPI?
  • Basic MPI subroutines
  • Environment and communicator management
  • Collective communication
  • Point to point communication
  • Compile and run MPI codes

40
Compiling and running MPI codes
  • Linux systems (Supermike)
  • Compile: mpif90 test.f -o test
  • Run: through PBS
  • See http://www.hpc.lsu.edu/help/linuxguide.php
  • AIX systems (Pelican and LONI machines)
  • Compile: mpxlf test.f -o test
  • Run:
  • Interactive: poe test -nodes 1 -tasks_per_node 2
    -rmpool 1
  • Through LoadLeveler
  • See http://www.hpc.lsu.edu/help/pelicanguide.php

41
References
  • Internet
  • http://www.mpi-forum.org
  • http://www.mcs.anl.gov/mpi
  • HPC website (IBM documentation):
    http://appl006.lsu.edu/ocsweb/hpchome.nsf/Content/document?OpenDocument
  • Books
  • Using MPI, by W. Gropp, E. Lusk and A. Skjellum
  • Using MPI-2, by W. Gropp, E. Lusk and A. Skjellum
  • Parallel programming with MPI, by P. Pacheco