1
An Introduction to Parallel Programming with MPI
  • March 22, 24, 29, 31, 2005
  • David Adams
  • daadams3@vt.edu
  • http://research.cs.vt.edu/lasca/schedule

2
Outline
  • Disclaimers
  • Overview of basic parallel programming on a
    cluster with the goals of MPI
  • Batch system interaction
  • Startup procedures
  • Quick review
  • Blocking message passing
  • Non-blocking message passing
  • Collective communications

3
Review
  • Functions we have covered in detail
  • MPI_INIT, MPI_FINALIZE
  • MPI_COMM_SIZE, MPI_COMM_RANK
  • MPI_SEND, MPI_RECV
  • MPI_ISEND, MPI_IRECV
  • MPI_WAIT, MPI_TEST
  • Useful constants
  • MPI_COMM_WORLD, MPI_ANY_SOURCE
  • MPI_ANY_TAG, MPI_SUCCESS
  • MPI_REQUEST_NULL, MPI_TAG_UB

4
Collective Communications
  • Transmit data to all processes within a
    communicator domain (all processes in
    MPI_COMM_WORLD, for example).
  • Called by every member of a communicator, but
    cannot be relied on to synchronize the processes
    (except MPI_BARRIER).
  • Come only in blocking versions and standard-mode
    semantics.
  • Collective communications are SLOW, but they are
    a convenient way of handing the optimization of
    data transfer to the vendor instead of the end
    user.
  • Everything accomplished with collective
    communications could also be done using the
    point-to-point functions we have already gone
    over. They are simply shortcuts and implementer
    optimizations for communication patterns that
    parallel programmers use often (see the sketch
    after this list).
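  • A minimal C sketch of that last point (the slides list
    Fortran-style bindings; the standard C API is assumed here):
    a broadcast hand-rolled from the MPI_SEND/MPI_RECV calls
    covered earlier, next to the single MPI_BCAST call that
    replaces it.

    #include <mpi.h>
    #include <stdio.h>

    /* Hand-rolled "broadcast" built from point-to-point calls:
       the root sends to every other rank, and every other rank
       posts a matching receive. */
    static void naive_bcast(int *buf, int count, int root, MPI_Comm comm)
    {
        int rank, size;
        MPI_Comm_rank(comm, &rank);
        MPI_Comm_size(comm, &size);
        if (rank == root) {
            for (int dest = 0; dest < size; dest++)
                if (dest != root)
                    MPI_Send(buf, count, MPI_INT, dest, 0, comm);
        } else {
            MPI_Recv(buf, count, MPI_INT, root, 0, comm, MPI_STATUS_IGNORE);
        }
    }

    int main(int argc, char **argv)
    {
        int rank, value = 0;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) value = 42;      /* only the root knows the value */
        naive_bcast(&value, 1, 0, MPI_COMM_WORLD);

        /* The collective shortcut: one call replaces the loop above and
           lets the implementation pick a better algorithm (e.g. a tree). */
        MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

        printf("rank %d has value %d\n", rank, value);
        MPI_Finalize();
        return 0;
    }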

5
BARRIER
  • MPI_BARRIER(COMM, IERROR)
  • IN INTEGER COMM
  • OUT IERROR
  • Blocks the caller until all processes in the
    group have entered the call to MPI_BARRIER.
  • Allows for process synchronization and is the
    only collective operation that guarantees
    synchronization at the call, even though others
    may synchronize as a side effect (see the sketch
    below).
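  • A minimal C sketch, assuming the standard C bindings, of a
    common use of MPI_BARRIER: lining processes up around a timed
    phase so the measurement covers the slowest rank.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Make sure every rank has finished its setup before timing starts. */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        /* ... some computation/communication phase would go here ... */

        /* Wait for the slowest rank so the measured time covers everyone. */
        MPI_Barrier(MPI_COMM_WORLD);
        if (rank == 0)
            printf("phase took %f seconds\n", MPI_Wtime() - t0);

        MPI_Finalize();
        return 0;
    }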

6
Broadcast
  • MPI_BCAST(BUFFER, COUNT, DATATYPE, ROOT, COMM,
    IERROR)
  • INOUT <type> BUFFER(*)
  • IN INTEGER COUNT, DATATYPE, ROOT, COMM
  • OUT IERROR
  • Broadcasts a message from the process with rank
    root to all processes of the communicator group.
  • Serves as both the blocking send and blocking
    receive for message completion and must be called
    by every processor in the communicator group.
  • Conceptually, this can be viewed as sending a
    single message from root to every processor in
    the group but MPI implementations are free to
    make this more efficient.
  • On return, the contents of the root process's
    BUFFER have been copied to all processes (see the
    sketch below).
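  • A minimal C sketch of MPI_BCAST, assuming the standard C
    bindings; the parameter values are made up for illustration.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank;
        double params[3];               /* e.g. tolerance, step size, max time */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {                /* only the root fills the buffer ... */
            params[0] = 1.0e-6;
            params[1] = 0.01;
            params[2] = 3600.0;
        }

        /* ... and on return every rank's params[] holds the same values.
           Every process in the communicator must make this call. */
        MPI_Bcast(params, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        printf("rank %d: tol=%g dt=%g tmax=%g\n",
               rank, params[0], params[1], params[2]);
        MPI_Finalize();
        return 0;
    }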

7
Broadcast
[Diagram: data movement for MPI_BCAST. Rows are
processes, columns are data items. Before the call
only the root holds A0; after the call every process
holds a copy of A0.]
8
Gather
  • MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF,
    RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  • OUT <type> RECVBUF(*)
  • IN <type> SENDBUF(*)
  • IN INTEGER SENDCOUNT, RECVCOUNT, SENDTYPE,
    RECVTYPE, ROOT, COMM
  • OUT IERROR
  • Each process (including the root) sends the
    contents of its send buffer to the root process.
  • The root process collects the messages in rank
    order and stores them in RECVBUF.
  • If there are n processes in the communicator
    group, then the root's RECVBUF must be n times
    larger than each SENDBUF.
  • RECVCOUNT = SENDCOUNT, meaning that the function
    expects RECVCOUNT objects of type RECVTYPE from
    each individual process, not the total (see the
    sketch below).
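  • A minimal C sketch of MPI_GATHER, assuming the standard C
    bindings: each rank contributes one integer and only the root
    allocates the full receive buffer.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = rank * rank;          /* each process contributes one value */
        int *all = NULL;
        if (rank == 0)                   /* only the root needs the big buffer */
            all = malloc(size * sizeof(int));

        /* RECVCOUNT (here 1) is the count received from EACH process,
           not the total; the root ends up with `size` entries in rank order. */
        MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("value from rank %d: %d\n", i, all[i]);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }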

9
Gather
[Diagram: data movement for MPI_GATHER. Before the
call process i holds item Ai; after the call the root
holds A0 A1 A2 A3 A4 A5 in rank order.]
10
Scatter
  • MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE,
    RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
  • OUT <type> RECVBUF(*)
  • IN <type> SENDBUF(*)
  • IN INTEGER SENDCOUNT, RECVCOUNT, SENDTYPE,
    RECVTYPE, ROOT, COMM
  • OUT IERROR
  • MPI_SCATTER is the inverse of MPI_GATHER.
  • The root takes its SENDBUF and splits it into n
    equal segments, 0 through (n-1), where the ith
    segment is delivered to the ith process in the
    group (see the sketch below).
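  • A minimal C sketch of MPI_SCATTER, assuming the standard C
    bindings; the chunk size is an arbitrary illustrative value.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK 4                      /* elements delivered to each process */

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *work = NULL;
        if (rank == 0) {                 /* the root owns the full array */
            work = malloc(size * CHUNK * sizeof(double));
            for (int i = 0; i < size * CHUNK; i++)
                work[i] = (double)i;
        }

        double piece[CHUNK];
        /* Segment i of the root's SENDBUF lands in rank i's RECVBUF. */
        MPI_Scatter(work, CHUNK, MPI_DOUBLE,
                    piece, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        double local_sum = 0.0;
        for (int i = 0; i < CHUNK; i++)
            local_sum += piece[i];
        printf("rank %d received a chunk with sum %g\n", rank, local_sum);

        if (rank == 0) free(work);
        MPI_Finalize();
        return 0;
    }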

11
Scatter
[Diagram: data movement for MPI_SCATTER. Before the
call the root holds A0 A1 A2 A3 A4 A5; after the call
process i holds segment Ai.]
12
ALLGATHER
[Diagram: data movement for MPI_ALLGATHER. Before the
call each process holds one item (A0, B0, C0, D0, E0,
F0 on processes 0 through 5); after the call every
process holds the full set A0 B0 C0 D0 E0 F0. A usage
sketch follows.]
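  • A minimal C sketch of MPI_ALLGATHER, assuming the standard C
    bindings.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int mine = 100 + rank;           /* the single item this rank contributes */
        int *everyone = malloc(size * sizeof(int));

        /* Like MPI_GATHER, but every rank (not just a root) ends up with
           the complete, rank-ordered result, so there is no ROOT argument. */
        MPI_Allgather(&mine, 1, MPI_INT, everyone, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d sees: ", rank);
        for (int i = 0; i < size; i++)
            printf("%d ", everyone[i]);
        printf("\n");

        free(everyone);
        MPI_Finalize();
        return 0;
    }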
13
ALLTOALL
[Diagram: data movement for MPI_ALLTOALL. Before the
call process 0 holds A0-A5, process 1 holds B0-B5,
and so on; after the call process j holds Aj Bj Cj Dj
Ej Fj, i.e. the jth item from every process, so the
data is transposed across processes. A usage sketch
follows.]
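  • A minimal C sketch of MPI_ALLTOALL, assuming the standard C
    bindings.

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* sendbuf[j] is the item this rank wants delivered to rank j. */
        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int j = 0; j < size; j++)
            sendbuf[j] = rank * 100 + j;

        /* On return, recvbuf[i] holds the item that rank i addressed to
           this rank: the data matrix is transposed across processes. */
        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d received: ", rank);
        for (int i = 0; i < size; i++)
            printf("%d ", recvbuf[i]);
        printf("\n");

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }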
14
Global Reductions
  • MPI can perform a global reduction operation
    across all members of a communicator group.
  • Reduction operations include
  • Maximum
  • Minimum
  • Sum
  • Product
  • ANDs and ORs

15
MPI_REDUCE
  • MPI_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP,
    ROOT, COMM, IERROR)
  • OUT <type> RECVBUF(*)
  • IN <type> SENDBUF(*)
  • IN INTEGER COUNT, DATATYPE, OP, ROOT, COMM
  • OUT IERROR
  • Combines the elements provided in the input
    buffer of each process in the group, using the
    operation OP, and returns the combined value in
    the output buffer of the process with rank ROOT
    (see the sketch below).
  • Predefined operations include
  • MPI_MAX, MPI_MIN, MPI_SUM
  • MPI_PROD, MPI_LAND, MPI_BAND
  • MPI_LOR, MPI_BOR, MPI_LXOR
  • MPI_BXOR
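  • A minimal C sketch of MPI_REDUCE, assuming the standard C
    bindings; the work split is made up for illustration.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank computes a partial sum of 1..N over its own slice. */
        const int N = 1000;
        long local = 0, global = 0;
        for (int i = rank + 1; i <= N; i += size)
            local += i;

        /* Combine every rank's `local` with MPI_SUM; only rank 0 (ROOT)
           receives the result in `global`. */
        MPI_Reduce(&local, &global, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum 1..%d = %ld (expected %d)\n", N, global, N * (N + 1) / 2);

        MPI_Finalize();
        return 0;
    }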

16
Helpful Online Information
  • Man pages for MPI
  • http://www-unix.mcs.anl.gov/mpi/www/
  • MPI homepage at Argonne National Lab
  • http://www-unix.mcs.anl.gov/mpi/
  • Some more sample programs
  • http://www-unix.mcs.anl.gov/mpi/usingmpi/examples/main.htm
  • Other helpful books
  • http://fawlty.cs.usfca.edu/mpi/
  • http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3614
  • Some helpful UNIX commands
  • http://www.ee.surrey.ac.uk/Teaching/Unix/