1
GridSuperscalar: a programming paradigm for GRID
applications
  • Rosa M. Badia, Jesús Labarta, Raül Sirvent,
    José M. Cela, and Rogeli Grima
    CEPBA-IBM Research Institute
  • rosab@ciri.upc.es

2
Outline
  • Motivation and basic idea
  • GridSuperscalar overview
  • GridSuperscalar features
  • Globus implementation
  • Code Example
  • Implemented applications and results

3
Motivation
  • Motivation: reduce the complexity of developing
    Grid applications to the minimum
  • Basic idea: superscalar processors
  • Simple programming language: machine language
  • Sequential control flow
  • Well-defined object name space, I/O arguments to
    operations
  • Automatic construction of precedence DAG
  • Renaming
  • Forwarding
  • DAG scheduling, locality management
  • Prediction
  • Speculation

4
GridSuperscalar basis
  • Code the sequential application in C with calls
    to the GridSuperscalar run-time (Execute)
  • The run-time performs:
  • Task identification (based on the Execute primitive)
  • Data dependency analysis: files are the objects
  • Data dependence graph creation
  • Task scheduling based on the graph
  • File renaming to increase graph concurrency
  • File forwarding

5
GridSuperscalar behavior overview
Application code
  initialization();
  for (i = 0; i < N; i++) {
    Execute(T1, "file1.txt", "file2.txt");
    Execute(T2, "file4.txt", "file5.txt");
    Execute(T3, "file2.txt", "file5.txt", "file6.txt");
    Execute(T4, "file7.txt", "file8.txt");
    Execute(T5, "file6.txt", "file8.txt", "file9.txt");
  }

6
GridSuperscalar user interface
  • Actions to take when developing an application:
  • Task definition: identify the subroutines/programs
    to be executed on the Grid
  • Task interface definition: input/output files
    and input/output generic scalars
  • Write the sequential program using calls to the
    GridSuperscalar primitives (Execute)
  • The application is clearly decomposed into two parts:
  • Main program with calls to Execute
  • Worker, which implements the tasks

Instruction set definition
7
GridSuperscalar run-time task graph generation
  range = initial_range();
  while (!goal_reached() && (j < MAX_ITERS)) {
    for (i = 0; i < ITERS; i++) {
      L[i]  = gen_rand_L_within_current_range(range);
      BW[i] = gen_rand_BW_within_current_range(range);
      Execute(FILTER, "bh.cfg", L[i], BW[i], "bh_tmp.cfg");
      Execute(DIMEMAS, "bh_tmp.cfg", "trace.trf", "dim_out.txt");
      Execute(EXTRACT, "dim_out.txt", "final_result.txt");
    }
    GS_Barrier();
    generate_new_range("final_result.txt", range);
    j++;
  }

[Figure: generated task graph with N parallel FILTER → DIMEMAS → EXTRACT chains joined by a BARRIER node]

8
GridSuperscalar task scheduling
[Figure: the FILTER → DIMEMAS → EXTRACT chains of the task graph being scheduled onto machines of the CIRI Grid]
9
GridSuperscalar task scheduling
[Figure: a later scheduling step; further FILTER → DIMEMAS → EXTRACT tasks and the BARRIER are mapped onto the CIRI Grid]
10
GridSuperscalar file renaming
[Figure: iterations of tasks T1 … TN all write file f1; renaming creates versions f1_1, f1_2, … so the iterations can proceed concurrently]
  • Types of data dependencies: RaW (read after write),
    WaW (write after write), WaR (write after read)
  • WaW and WaR dependencies can be avoided with
    renaming, as illustrated below
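For illustration only (the task and file names here are hypothetical, reusing the T1/T2 names of the overview slide), a minimal sketch of the false dependencies that renaming removes:

    /* Two iterations of user code reuse the same output name "f1".          */
    Execute(T1, "in_1.txt", "f1");    /* iteration 1 writes f1               */
    Execute(T2, "f1", "out_1.txt");   /* iteration 1 reads f1 (true RaW)     */
    Execute(T1, "in_2.txt", "f1");    /* iteration 2 rewrites f1: WaW with   */
                                      /* the first T1, WaR with the first T2 */
    Execute(T2, "f1", "out_2.txt");

    /* With renaming, the run-time maps the second write of "f1" to a fresh  */
    /* version (f1_2 in the figure above), so only the true RaW edges remain */
    /* and both iterations can run concurrently.                             */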

11
GridSuperscalar file forwarding
[Figure: T1 writes f1 and T2 reads it; with forwarding, f1 is streamed from T1 to T2 through a socket instead of a file, so both tasks can run in parallel]
  • File forwarding reduces the impact of RaW data
    dependencies

12
GridSuperscalar: current Globus implementation
  • Previous prototype over Condor and MW
  • Current prototype over Globus 2.x, using the API
  • File transfer, security, etc. provided by Globus
  • Primitives implemented by the run-time:
  • GS_on, GS_off
  • Execute
  • GS_open, GS_close
  • GS_Barrier
  • Worker side: GS_System (used in the sketch below)
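A minimal sketch of how these primitives fit together (argument conventions and task-code constants are assumptions rather than slide content; declarations come from the GridSuperscalar headers and generated stubs):

    /* --- master side: the sequential-looking main program --- */
    int main(void) {
      GS_on();                            /* start the run-time            */
      Execute(DIMEMAS, "bh_tmp.cfg", "trace.trf", "dim_out.txt");
      Execute(EXTRACT, "dim_out.txt", "final_result.txt");
      GS_Barrier();                       /* wait for all submitted tasks  */
      GS_off();                           /* shut the run-time down        */
      return 0;
    }

    /* --- worker side: a task implementation launches the real binary --- */
    /* (placeholder command line)                                          */
    GS_System("dimemas_wrapper bh_tmp.cfg dim_out.txt");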

13
GridSuperscalar: current Globus implementation
  • File management and task submission
  • A temporary directory is created for each task on
    the destination machine
  • All the files of a task are sent from the starting
    machine to the destination machine's temporary
    directory
  • Output files are sent back to the starting machine
  • The temporary directory and the input files are
    then erased
  • Handled with the stagein, stageout and
    scratch_directory features of RSL
  • Task submission
  • globus_gram_client_job_request
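As a rough sketch only (the gatekeeper contact, paths and RSL contents are placeholders; the staging attributes named above would also be written into the RSL string), the submission reduces to one GRAM client call:

    #include "globus_common.h"
    #include "globus_gram_client.h"

    int submit_task(char *gatekeeper_contact, char **job_contact)
    {
        /* minimal RSL; the stagein/stageout/scratch_directory attributes   */
        /* described above would be added to this string as well            */
        char rsl[] =
            "&(executable=/path/to/worker)"
            "(arguments=FILTER bh.cfg 10 5 bh_tmp.cfg)";

        globus_module_activate(GLOBUS_GRAM_CLIENT_MODULE);

        /* submit the job; NULL callback contact here, while the real       */
        /* run-time passes the contact obtained from                        */
        /* globus_gram_client_callback_allow (next slide)                   */
        return globus_gram_client_job_request(gatekeeper_contact, rsl,
                                              0, NULL, job_contact);
    }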

14
GridSuperscalar: current Globus implementation
  • Task scheduling
  • Tasks are submitted as soon as possible, once:
  • Their data dependencies are resolved
  • Hardware resources are available
  • A callback function marks the tasks that have
    finished
  • Data structures are updated inside the next
    Execute call
  • Asynchronous end-of-task synchronization
  • Uses the asynchronous state-change callback
    mechanism provided by Globus
  • globus_gram_client_callback_allow
  • callback_func function, provided by the run-time
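A sketch of that mechanism (the callback signature follows the Globus 2.x GRAM client API as we understand it; mark_task_finished is a hypothetical run-time helper):

    #include "globus_gram_client.h"

    void mark_task_finished(char *job_contact, int state, int errorcode);

    /* called asynchronously by Globus whenever a job changes state          */
    static void callback_func(void *user_arg, char *job_contact,
                              int state, int errorcode)
    {
        if (state == GLOBUS_GRAM_PROTOCOL_JOB_STATE_DONE ||
            state == GLOBUS_GRAM_PROTOCOL_JOB_STATE_FAILED)
            mark_task_finished(job_contact, state, errorcode);
    }

    /* registration: the returned contact URL is later passed to             */
    /* globus_gram_client_job_request so GRAM knows where to call back       */
    void register_callbacks(char **callback_contact)
    {
        globus_gram_client_callback_allow(callback_func, NULL,
                                          callback_contact);
    }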

15
GridSuperscalar: current Globus implementation
  • Broker
  • A simple broker has been implemented
  • The run-time requests a hardware resource on
    demand
  • The broker has a list of machines and a maximum
    number of tasks allowed on each machine, e.g.:
    khafre.cepba.upc.es  /home/ac/cela/NAS/GridNPB4.0/bin/  16
    kadesh.cepba.upc.es  /users1/upc/ac/cela/NAS/GridNPB4.0/bin/  16
    ...
  • Round-robin policy to assign machines to tasks
    (see the sketch below)
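A minimal sketch (not from the slides) of such a round-robin broker over a fixed machine list with a per-machine task limit:

    #include <stddef.h>

    typedef struct {
        const char *contact;    /* e.g. "khafre.cepba.upc.es"               */
        const char *bin_dir;    /* directory holding the worker binaries    */
        int         max_tasks;  /* maximum concurrent tasks allowed         */
        int         running;    /* tasks currently assigned                 */
    } machine_t;

    /* Returns the index of the next machine with a free slot, or -1 if all */
    /* are full; *next keeps the round-robin position between calls.        */
    int broker_get_resource(machine_t *machines, int n, int *next)
    {
        for (int tried = 0; tried < n; tried++) {
            int idx = (*next + tried) % n;
            if (machines[idx].running < machines[idx].max_tasks) {
                machines[idx].running++;
                *next = (idx + 1) % n;
                return idx;
            }
        }
        return -1;  /* no resource available: the task must wait */
    }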

16
GridSuperscalar: current Globus implementation
  • File forwarding
  • Implemented with Dyninst
  • Allows executing different tasks on different
    hardware resources
  • Initial tests
  • High overhead due to worker behavior mutation
  • Future tests planned with the new Dyninst version 4.0

17
GridSuperscalar code example
  • Simple optimization search example
  • perform N simulations
  • recalculate range of parameters
  • end when goal is reached
  • Tasks: FILTER, DIMEMAS, EXTRACT
  • Parameters (current syntax):
    OP_NAME   n_in_files  n_in_generic  n_out_files  n_out_generic
    FILTER    1           2             1            0
    DIMEMAS   2           0             1            0
    EXTRACT   1           0             1            0
  • Sequential code

18
GridSuperscalar code example
  range = initial_range();
  while (!goal_reached() && (j < MAX_ITERS)) {
    for (i = 0; i < ITERS; i++) {
      L[i]  = gen_rand_L_within_current_range(range);
      BW[i] = gen_rand_BW_within_current_range(range);
      Execute(FILTER, "bh.cfg", L[i], BW[i], "bh_tmp.cfg");
      Execute(DIMEM, "bh_tmp.cfg", "trace.trf", "dim_out.txt");
      Execute(EXTRACT, "dim_out.txt", "final_result.txt", "final_result.txt");
    }
    GS_Barrier();
    generate_new_range("final_result.txt", range);
    j++;
  }

19
GridSuperscalar code example
  • Worker
  switch (atoi(argv[2])) {
    case FILTER:  res = filter(argc, argv);         break;
    case DIMEM:   res = dimemas_funct(argc, argv);  break;
    case EXTRACT: res = extract(argc, argv);        break;
    default:      printf("Wrong operation code\n"); break;
  }

20
Implemented applications
  • Performance modeling (Dimemas, Paramedir)
  • NAS Grid Benchmarks
  • Each benchmark is a Data Flow Graph (DFG); each
    node is an NPB instance (BT, SP, LU, MG or FT)
  • Implemented with the GridSuperscalar prototype and
    run correctly on the CEPBA Grid
  • Client: a Parsytec 8 SMP dual-processor machine
  • Servers: an IBM xSeries 250 with 4 Intel Pentium
    III processors, and an IBM Power4 node with 4
    processors
  • Code is much simpler and clearer than the initial
    scripts
  • Bioinformatics application (in production)

21
Preliminary results
  • Optimization search
  • NGB

22
Open issues
  • GridSuperscalar over GridRPC (Ninf-G)
  • Worker: the Ninf-G remote library itself
  • Identical user application code
  • Change in the GridSuperscalar run-time code
  • grpc_call_async
  • grpc_wait_* calls for synchronization
  • Next steps
  • Improve/add new features, e.g., file transfer
    reduction
  • Improve the user interface: IDL, service-oriented
    interfaces

23
Summary
  • The application development process is
    simplified.
  • The application can be written as if it were a
    sequential application, without taking into
    account the underlying middleware and Grid
    resources.
  • The underlying run-time library extracts the
    existing concurrency of the application, generates
    the tasks that should be run on the Grid
    resources, interfaces with brokers to get a
    resource, takes care of the flow control, ...
  • It is very easy to use. The user only has to
    decide which subroutines/programs are going to be
    run on the Grid and make slight changes to the
    code. Finally, the code is linked with our
    library.
  • After reading a two-page example, the user is
    ready to develop their own application.

24
Summary
  • Our main problems have not been intrinsic to Grid
    technology, but rather programming issues. We
    detail some:
  • Use of Dyninst for the file forwarding mechanism.
    The idea is to forward the output file of one task
    directly to the next task that reads it, allowing
    both tasks to execute in parallel. However, the
    use of Dyninst has made the implementation of that
    idea difficult.
  • Programming of the file renaming mechanism
  • Some difficulties due to the lack of Globus API
    documentation
  • Standardization of middleware APIs
  • Debuggers for grid applications