Virtual Topologies - PowerPoint PPT Presentation

Slides: 21
Provided by: ProjectA7
Transcript and Presenter's Notes



1
Virtual Topologies
  • Self Test with solution

2
Self Test
  • When using MPI_Cart_create, if the Cartesian grid
    size is smaller than the number of processes
    available in old_comm, then
  • A. an error results.
  • B. new_comm returns MPI_COMM_NULL for calling
    processes not used for the grid.
  • C. new_comm returns MPI_UNDEFINED for calling
    processes not used for the grid.

3
Self Test
  • When using MPI_Cart_create, if the Cartesian grid
    size is larger than the number of processes
    available in old_comm, then
  • A. an error results.
  • B. the Cartesian grid is automatically reduced to
    match the processes available in old_comm.
  • C. more processes are added to match the requested
    Cartesian grid size if possible; otherwise, an
    error results.

4
Self Test
  • After using MPI_Cart_create to generate a
    Cartesian grid whose size is smaller than the
    number of processes available in old_comm, an
    unconditional call to MPI_Cart_coords or
    MPI_Cart_rank (i.e., one made without regard to
    whether it is appropriate to call) ends in error
    because
  • A. calling processes not belonging to the group
    have been assigned the communicator MPI_UNDEFINED,
    which is not a valid communicator for
    MPI_Cart_coords or MPI_Cart_rank.
  • B. calling processes not belonging to the group
    have been assigned the communicator MPI_COMM_NULL,
    which is not a valid communicator for
    MPI_Cart_coords or MPI_Cart_rank.
  • C. the grid size does not match what is in
    old_comm.

5
Self Test
  • When using MPI_Cart_rank to translate Cartesian
    coordinates into the equivalent rank, if some or
    all of the indices of the coordinates are outside
    of the defined range, then
  • A. an error results.
  • B. an error results unless periodicity is imposed
    in all dimensions.
  • C. an error results unless each of the
    out-of-range indices is periodic.

6
Self Test
  • With MPI_Cart_shift(comm, direction, displ,
    source, dest), if the calling process is the
    first or the last entry along the shift direction
    and displ is greater than 0, then
  • A. an error results.
  • B. MPI_Cart_shift returns source and dest if
    periodicity is imposed along the shift direction;
    otherwise, source and/or dest return
    MPI_UNDEFINED.
  • C. an error results unless periodicity is imposed
    along the shift direction.

7
Self Test
  • MPI_Cart_sub can be used to subdivide a Cartesian
    grid into subgrids of lower dimensions. These
    subgrids
  • A. have dimensions one lower than the original
    grid.
  • B. must have attributes such as periodicity
    reimposed.
  • C. possess the appropriate attributes of the
    original Cartesian grid.

8
Answers
  • 1. B
  • 2. A
  • 3. B
  • 4. C
  • 5. B
  • 6. C

9
Course Problem
  • Description
  • The new problem still implements a parallel
    search of an integer array. The program should
    find all occurrences of a certain integer, which
    will be called the target. When a processor of a
    certain rank finds a target location, it should
    then calculate the average of
  • The target value
  • An element from the processor with rank one
    higher (the "right" processor). The right
    processor should send the first element from its
    local array.
  • An element from the processor with rank one less
    (the "left" processor). The left processor should
    send the first element from its local array.

10
Course Problem
  • For example, if processor 1 finds the target at
    index 33 in its local array, it should get from
    processors 0 (left) and 2 (right) the first
    element of their local arrays. These three
    numbers should then be averaged.
  • In terms of right and left neighbors, you should
    visualize the four processors connected in a
    ring. That is, the left neighbor for P0 should be
    P3, and the right neighbor for P3 should be P0.
  • Both the target location and the average should
    be written to an output file. As usual, the
    program should read both the target value and all
    the array elements from an input file.

11
Course Problem
  • Exercise
  • Modify your code from Chapter 7 to solve this
    latest version of the Course Problem using a
    virtual topology. First, create the topology
    (which should be called MPI_RING) in which the
    four processors are connected in a ring. Then,
    use the utility routines to determine which
    neighbors a given processor has.

12
Solution
  • Note: The sections of code shown in red are new
    code in which the MPI_RING virtual topology is
    created. The section of code in blue is where the
    new topology is used by each processor to
    determine its left and right neighbors.

13
Solution
  #include <stdio.h>
  #include <mpi.h>
  #define N 300

  int main(int argc, char *argv[])
  {
    int i, target;            /* local variables */
    int b[N], a[N/4];         /* a is the name of the array each slave searches */
    int rank, size, err;
    MPI_Status status;
    int end_cnt;
    FILE *sourceFile;
    FILE *destinationFile;
    int left, right;          /* the left and right processes */
    int lx, rx;               /* store the left and right elements */
    int gi;                   /* global index */
    float ave;                /* the average; used below, so declared here */

14
Solution
    int blocklengths[2] = {1, 1};                  /* initialize blocklengths array */
    MPI_Datatype types[2] = {MPI_INT, MPI_FLOAT};  /* initialize types array */
    MPI_Datatype MPI_Pair;
    MPI_Aint displacements[2];
    MPI_Comm MPI_RING;        /* Name of the new cartesian topology */
    int dim[1];               /* Number of dimensions */
    int period[1], reorder;   /* Logical array to control if the dimension should "wrap-around" */
    int coord[1];             /* Coordinate of the processor in the new ring topology */

    err = MPI_Init(&argc, &argv);
    err = MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    err = MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Initialize displacements array with memory addresses */
    err = MPI_Address(&gi, &displacements[0]);
    err = MPI_Address(&ave, &displacements[1]);

    /* This routine creates the new data type MPI_Pair */
    err = MPI_Type_struct(2, blocklengths, displacements, types, &MPI_Pair);
    err = MPI_Type_commit(&MPI_Pair);  /* the type must be committed before use */

15
Solution
    if (size != 4) {
      printf("Error: You must use 4 processes to run this program.\n");
      return 1;
    }

    dim[0] = 4;       /* Four processors in the one row */
    period[0] = 1;    /* Have the row "wrap-around" to make a ring */
    reorder = 1;

    /* Create the new ring cartesian topology with a call to the following routine */
    err = MPI_Cart_create(MPI_COMM_WORLD, 1, dim, period, reorder, &MPI_RING);

    if (rank == 0) {
      /* File b.data has the target value on the first line */
      /* The remaining 300 lines of b.data have the values for the b array */
      sourceFile = fopen("b.data", "r");

      /* File found.data will contain the indices of b where the target is */
      destinationFile = fopen("found.data", "w");

16
Solution
      if (sourceFile == NULL) {
        printf("Error: can't access the input file.\n");
        return 1;
      }
      else if (destinationFile == NULL) {
        printf("Error: can't create file for writing.\n");
        return 1;
      }
      else {
        /* Read in the target */
        fscanf(sourceFile, "%d", &target);
      }
    }

    /* Notice the broadcast is outside of the if; all processors must call it */
    err = MPI_Bcast(&target, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
      /* Read in the b array */
      for (i = 0; i < N; i++)
        fscanf(sourceFile, "%d", &b[i]);
    }

17
Solution
    /* Again, the scatter is after the if; all processors must call it */
    err = MPI_Scatter(b, N/size, MPI_INT, a, N/size, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each processor easily determines its left and right neighbors */
    /* with the call to the following utility routine */
    err = MPI_Cart_shift(MPI_RING, 0, 1, &left, &right);

    if (rank == 0) {
      /* P0 sends the first element of its subarray a to its neighbors */
      err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
      err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

      /* P0 gets the first elements of its left and right processors' arrays */
      err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
      err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

      /* The master now searches the first fourth of the array for the target */
      /* and records its own finds directly in the output file */
      for (i = 0; i < N/size; i++) {
        if (a[i] == target) {
          gi = rank*N/size + i + 1;         /* convert local index to global index */
          ave = (target + lx + rx)/3.0;
          fprintf(destinationFile, "P %d, %d, %f\n", rank, gi, ave);
        }
      }

18
Solution
      end_cnt = 0;
      while (end_cnt != 3) {
        err = MPI_Recv(MPI_BOTTOM, 1, MPI_Pair, MPI_ANY_SOURCE,
                       MPI_ANY_TAG, MPI_COMM_WORLD, &status);
        if (status.MPI_TAG == 52)
          end_cnt++;                        /* See Comment */
        else
          fprintf(destinationFile, "P %d, %d, %f\n",
                  status.MPI_SOURCE, gi, ave);
      }
      fclose(sourceFile);
      fclose(destinationFile);
    }
    else {
      /* Each slave sends the first element of its subarray a to its neighbors */
      err = MPI_Send(&a[0], 1, MPI_INT, left, 33, MPI_COMM_WORLD);
      err = MPI_Send(&a[0], 1, MPI_INT, right, 33, MPI_COMM_WORLD);

      /* Each slave gets the first elements of its left and right processors' arrays */
      err = MPI_Recv(&lx, 1, MPI_INT, left, 33, MPI_COMM_WORLD, &status);
      err = MPI_Recv(&rx, 1, MPI_INT, right, 33, MPI_COMM_WORLD, &status);

19
Solution
      /* Search the b array and output the target locations */
      for (i = 0; i < N/size; i++) {
        if (a[i] == target) {
          gi = rank*N/size + i + 1;    /* Equation to convert local index to global index */
          ave = (target + lx + rx)/3.0;
          err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 19, MPI_COMM_WORLD);
        }
      }

      gi = target;    /* Both are fake values */
      ave = 3.45;     /* The point of this send is the "end" tag (see Chapter 4) */
      err = MPI_Send(MPI_BOTTOM, 1, MPI_Pair, 0, 52, MPI_COMM_WORLD);  /* See Comment */
    }

    err = MPI_Type_free(&MPI_Pair);
    err = MPI_Finalize();
    return 0;
  }

20
Solution
  • The results obtained from running this code are
    in the file "found.data", which contains the
    following:
  • P 0, 62, -7.666667
  • P 2, 183, -7.666667
  • P 3, 271, 19.666666
  • P 3, 291, 19.666666
  • P 3, 296, 19.666666
  • Notice that in this new version of the code we
    obtained the same results as the stencil version,
    which we should have.
  • If you want to confirm that these results are
    correct, run the parallel code shown above using
    the input file "b.data" from Chapter 2.