1
Advanced MPI-1 and MPI-2
  • John Urbanic
  • urbanic@psc.edu

2
  • In the MPI Basics talk we only touched upon the
    120 MPI-1 routines and didn't use any MPI-2.
    We won't cover everything here, but after this
    course you should be able to peruse the man pages
    quite easily. However, let us at least discuss
    the general areas of functionality that these
    routines bring to us.

3
MPI-1 Advanced Routines
  • Communicators
  • User Defined Data Types

4
Communicators
  • This example from the MPI Standard illustrates
    how a group consisting of all but the zeroth
    process of the "all" group is created, and then
    how a communicator (commslave) is formed for
    that new group. The new communicator is used in a
    collective call, and all processes execute a
    collective call in the MPI_COMM_WORLD context.
    This example illustrates how the two
    communicators (which inherently possess distinct
    contexts) protect communication. That is,
    communication in MPI_COMM_WORLD is insulated from
    communication in commslave, and vice versa.

5
Communicators
#include <mpi.h>

int main(int argc, char **argv)
{
    int me, count, count2;
    void *send_buf, *recv_buf, *send_buf2, *recv_buf2;
    MPI_Group MPI_GROUP_WORLD, grprem;
    MPI_Comm commslave;
    static int ranks[] = { 0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_group(MPI_COMM_WORLD, &MPI_GROUP_WORLD);
    MPI_Comm_rank(MPI_COMM_WORLD, &me);
    MPI_Group_excl(MPI_GROUP_WORLD, 1, ranks, &grprem);
    MPI_Comm_create(MPI_COMM_WORLD, grprem, &commslave);
    if (me != 0) {
        /* compute on slave */
        MPI_Reduce(send_buf, recv_buf, count,
                   MPI_INT, MPI_SUM, 1, commslave);
    }
    /* zero falls through immediately to this
       reduce, others do later... */
    MPI_Reduce(send_buf2, recv_buf2, count2,
               MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    MPI_Finalize();
    return 0;
}

6
User Defined Data Types
User-defined data types can be organizationally
useful, and they are almost a necessity when
dealing with certain data structures. For example,
by now you know how to send a specific byte or
number from an array. You can also use the count
parameter to send a consecutive row of such
numbers. But to send a column (the terms "column"
and "row" here actually vary between C and Fortran
because of row-major versus column-major ordering,
a detail that we will deal with in our examples)
we would have to resort to skipping around the
array manually. Or, we can define a column type
that has a stride built in, as sketched below.
Likewise, we can define array sub-blocks or more
esoteric structures. User-defined types are fairly
flexible because a number of MPI routines allow
these definitions. However, the first time through
you may have to pay close attention to the exact
procedure.
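A minimal C sketch of such a column type (the
100x100 float array, ranks, and tag are
illustrative, not from the original slides):
MPI_Type_vector builds the stride into the type,
so a single send call transfers a whole column.

#include <mpi.h>

int main(int argc, char **argv)
{
    float a[100][100];
    MPI_Datatype column;
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* 100 blocks of 1 float, 100 floats apart:
       one column of the row-major C array a */
    MPI_Type_vector(100, 1, 100, MPI_FLOAT, &column);
    MPI_Type_commit(&column);

    /* rank 0 sends column 5 to rank 1
       (run with at least two tasks) */
    if (rank == 0) {
        for (int i = 0; i < 100; i++)
            a[i][5] = (float)i;          /* fill the column */
        MPI_Send(&a[0][5], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&a[0][5], 1, column, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}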
7
User Defined Data Types
  • Example - Transpose a matrix

      REAL a(100,100), b(100,100)
      INTEGER row, xpose, sizeofreal, myrank, ierr
      INTEGER status(MPI_STATUS_SIZE)

      CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)
      CALL MPI_TYPE_EXTENT(MPI_REAL, sizeofreal, ierr)
C     create datatype for one row
      CALL MPI_TYPE_VECTOR(100, 1, 100, MPI_REAL, row, ierr)
C     create datatype for matrix in row-major order
      CALL MPI_TYPE_HVECTOR(100, 1, sizeofreal, row, xpose, ierr)
      CALL MPI_TYPE_COMMIT(xpose, ierr)
C     send matrix in row-major order and receive in column-major order
      CALL MPI_SENDRECV(a, 1, xpose, myrank, 0, b, 100*100, MPI_REAL,
     &                  myrank, 0, MPI_COMM_WORLD, status, ierr)
8
Contents of MPI-2
  • One-sided communication (put / get)
  • Dynamic process management
  • Parallel I/O (MPI-IO)
  • Miscellaneous
  • Extended collective communication operations
  • C++ interface
  • Limited F90 support
  • Language interoperability
  • Thread support

9
Not Really Implemented (Yet)
  • Few full MPI-2 implementations
  • Fujitsu
  • Hitachi
  • NEC
  • Even those claims are contested.
  • Most just have pieces (Put/Get, MPI-IO)

10
One-sided Communication
  • Define a "window", which can be accessed by
    remote tasks, then use:
  • MPI_Put( origin_addr, origin_count,
    origin_datatype, target_addr, target_count,
    target_datatype, window )
  • MPI_Get( ... )
  • MPI_Accumulate( ..., op, ... )
  • op is as in MPI_Reduce, but no user-defined
    operations are allowed.

11
One-sided Communication
  • These can be very useful and efficient routines.
  • They are similar to the SHMEM message-passing
    interface offered on several high-performance
    platforms. You may even optimize an MPI-1 code
    to use these in a fairly painless manner.
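A minimal C sketch of a put through a window,
using fence synchronization (the window contents,
ranks, and value are illustrative, not from the
original slides):

#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, local = 0, value = 42;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* expose one int per task as a remotely accessible window */
    MPI_Win_create(&local, (MPI_Aint)sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);            /* open the access epoch */
    if (rank == 0)                    /* run with at least two tasks */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);            /* complete the transfer */

    /* rank 1's local now holds 42 */
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}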

12
Dynamic Process Management
  • MPI_Comm_spawn creates a new group of tasks and
    returns an intercommunicator
  • MPI_Comm_spawn(command, argv, numprocs, info,
    root, comm, intercomm, errcodes)
  • Tries to start numprocs processes running
    command, passing them command-line arguments
    argv.
  • The operation is collective over comm.
  • Spawnees are in the remote group of intercomm.
  • Errors are reported on a per-process basis in
    errcodes.
  • info can optionally specify hostname, archname,
    wdir, path, and file.
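A minimal C sketch of the call (the executable
name "worker" and the process count are
illustrative, not from the original slides):

#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Comm intercomm;
    int errcodes[4];

    MPI_Init(&argc, &argv);

    /* collectively start 4 copies of "worker"; the spawnees
       end up in the remote group of intercomm */
    MPI_Comm_spawn("worker", MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                   0, MPI_COMM_WORLD, &intercomm, errcodes);

    MPI_Finalize();
    return 0;
}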

13
Dynamic Process Management
  • This can seem tempting, but as most MPPs require
    you to run on a specific number of PEs, adding
    tasks can often cause load-balance problems,
    if it is permitted at all. For optimized
    scientific codes, this is rarely a useful
    approach.

14
MPI-IO
  • Portable interface to parallel I/O, part of the
    MPI-2 standard
  • Many tasks can access the same file in parallel
  • Each task may define its own view of a file
  • I/O in MPI can be considered as Unix I/O plus
    (lots of) other stuff
  • Basic operations: MPI_File_open, close, read,
    write, seek
  • Parameters to these operations match Unix,
    aiding a straightforward port from Unix I/O to
    MPI-IO.
  • However, to get performance and portability, the
    more advanced features MUST be used.
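A minimal C sketch of the basic operations (the
file name, block size, and seek-based layout are
illustrative, not from the original slides): each
task writes its own block of a shared file at a
rank-dependent offset.

#include <mpi.h>

#define N 100

int main(int argc, char **argv)
{
    int rank, buf[N];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    for (int i = 0; i < N; i++)
        buf[i] = rank;

    /* open is collective over the communicator */
    MPI_File_open(MPI_COMM_WORLD, "out.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Unix-like seek + write at a rank-dependent offset */
    MPI_File_seek(fh, (MPI_Offset)rank * N * sizeof(int), MPI_SEEK_SET);
    MPI_File_write(fh, buf, N, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}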