VORTONICS: Vortex Dynamics on Transatlantic Federated Grids


1
VORTONICS: Vortex Dynamics on Transatlantic
Federated Grids
US-UK TG-NGS Joint Projects Supported by NSF,
EPSRC, and TeraGrid
2
Scientific Computational Challenges
  • Physical challenges: Reconnection and Dynamos
  • Vortical reconnection governs establishment of
    steady-state in Navier-Stokes turbulence
  • Magnetic reconnection governs heating of solar
    corona
  • The astrophysical dynamo problem
  • The exact mechanisms and space/time scales are
    unknown and represent important theoretical
    challenges
  • Mathematical challenges
  • Identification of vortex cores, and discovery of
    new topological invariants associated with them
  • Discovery of new and improved analytic solutions
    of Navier-Stokes equations for interacting
    vortices
  • Computational challenges: Enormous problem sizes,
    memory requirements, and long run times
  • Algorithmic complexity scales as the cube of the
    Reynolds number Re (see the estimate after this
    list)
  • Substantial postprocessing for vortex core
    identification
  • Largest present runs and most future runs will
    require geographically distributed domain
    decomposition (GD3)
  • Is GD3 on Grids a sensible approach?
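
The Re³ scaling is the standard direct numerical
simulation estimate (a sketch of the counting, not
taken from the slides): resolving eddies down to the
Kolmogorov scale needs about Re^{3/4} grid points per
dimension, and the CFL condition ties the number of
time steps to the same factor.

    N_points ~ (Re^{3/4})^3 = Re^{9/4}
    N_steps  ~ Re^{3/4}
    cost     ~ N_points x N_steps = Re^3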

Figure: homogeneous turbulence driven by a force of
Arnold-Beltrami-Childress (ABC) form (sketched below)
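
As a minimal illustration of the forcing term, the
sketch below samples the classical ABC field on a
periodic lattice; the unit amplitudes and the helper
name abc_force are illustrative assumptions, not
VORTONICS parameters.

    import numpy as np

    def abc_force(n, A=1.0, B=1.0, C=1.0):
        # Classical Arnold-Beltrami-Childress field sampled on an
        # n^3 periodic lattice; A = B = C = 1 are illustrative defaults.
        s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
        x, y, z = np.meshgrid(s, s, s, indexing="ij")
        fx = A * np.sin(z) + C * np.cos(y)
        fy = B * np.sin(x) + A * np.cos(z)
        fz = C * np.sin(y) + B * np.cos(x)
        return fx, fy, fz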
3
Lattice Remapping, Fourier Resizing, and
Computational Steering
  • At its lowest level, VORTONICS contains a general
    remapping library for dynamically changing the
    layout of the computational lattice across the
    processors (pencils, blocks, slabs) using MPI
  • All data on the computational lattice can be
    Fourier resized (FFT, augmentation or truncation
    in k space, inverse FFT) as it is remapped (see
    the sketch after this list)
  • All data layout features are dynamically
    steerable
  • VTK used for visualization (each rank computes
    polygons locally)
  • Grid-enabled with MPICH-G2 so that simulation,
    visualization, and steering can be run anywhere,
    or even across sites
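
VORTONICS does the remapping with a distributed MPI
library; purely as a single-process illustration of
the Fourier-resizing step, here is a numpy sketch
(even dimensions assumed, Nyquist handling
simplified; not the VORTONICS implementation):

    import numpy as np

    def fourier_resize(field, new_shape):
        # FFT, centered truncation or zero-padding in k space,
        # inverse FFT.
        spectrum = np.fft.fftshift(np.fft.fftn(field))
        out = np.zeros(new_shape, dtype=complex)
        src, dst = [], []
        for n_old, n_new in zip(field.shape, new_shape):
            n = min(n_old, n_new)  # shared centered band of modes
            src.append(slice((n_old - n) // 2, (n_old - n) // 2 + n))
            dst.append(slice((n_new - n) // 2, (n_new - n) // 2 + n))
        out[tuple(dst)] = spectrum[tuple(src)]
        # Rescale so the mean of the field is preserved under
        # numpy's unnormalized-forward FFT convention.
        out *= np.prod(new_shape) / np.prod(field.shape)
        return np.fft.ifftn(np.fft.ifftshift(out)).real

    field = np.random.rand(16, 16, 16)
    bigger = fourier_resize(field, (32, 32, 32))  # k-space augmentation
    assert np.isclose(field.mean(), bigger.mean())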

4
Vortex Core Identification and Visualization
  • Volumetric methods
  • Vorticity magnitude thresholding
  • Q criterion (sketched after this list)
  • Δ criterion
  • λ₂ criterion
  • Extremal Line Integral (ELI) method
  • Vortex cores are curves along which the line
    integral of vorticity is an extremum in the space
    of all curves
  • VORTONICS includes Ginzburg-Landau solver for
    finding and plotting ELI space curves
  • Addresses problems of reconnection
  • Reconnection of elliptic vortex rings
  • Aref and Zawadzki (1991)
  • Two publications available on ELI (Phil. Trans.
    Roy. Soc.; Physica A)
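
Of the volumetric methods above, the Q criterion is
the easiest to sketch. The numpy fragment below is a
single-process illustration (np.gradient and the
uniform spacing dx are assumptions, not the VORTONICS
implementation); it computes Q = 0.5*(||Omega||^2 -
||S||^2), and vortex cores are then visualized as
isosurfaces of Q > 0.

    import numpy as np

    def q_criterion(u, v, w, dx=1.0):
        # S and Omega are the symmetric and antisymmetric parts
        # of the velocity-gradient tensor.
        grads = [np.gradient(c, dx) for c in (u, v, w)]  # grads[i][j] = du_i/dx_j
        q = np.zeros_like(u)
        for i in range(3):
            for j in range(3):
                s = 0.5 * (grads[i][j] + grads[j][i])  # strain rate
                o = 0.5 * (grads[i][j] - grads[j][i])  # rotation rate
                q += 0.5 * (o * o - s * s)
        return q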

5
Run Sizes to Date / Performance
  • Multiple Relaxation Time Lattice Boltzmann
    (MRTLB) model for Navier-Stokes equations
  • 600,000 site updates per second (SUPS) per
    processor when run on one multiprocessor
  • Performance scales linearly with np when run on
    one multiprocessor
  • 3D lattice sizes up to 645³ run prior to SC05
    across six sites on TG/NGS
  • NCSA, SDSC, ANL, TACC, PSC, CSAR
  • 528 CPUs to date, and larger runs in progress as
    we speak!
  • Cross-site runs inject large amounts of data into
    the network and are strongly bandwidth limited
  • Effective SUPS/processor is reduced by a factor
    approximately equal to the number of sites
  • Since the processor count grows with the number
    of sites, total SUPS stays approximately constant
    as the problem grows in size (see the sketch
    after this list)
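
To make the metric concrete, a small sketch of the
SUPS bookkeeping; the time-step count and wall time
below are hypothetical, chosen only to illustrate a
per-processor rate reduced by roughly the six-site
factor (only the 645³ lattice and 528 CPUs come from
the slides):

    def effective_sups(lattice_sites, timesteps, wall_seconds, n_procs):
        # Site updates per second (SUPS), total and per processor.
        total = lattice_sites * timesteps / wall_seconds
        return total, total / n_procs

    total, per_proc = effective_sups(645**3, timesteps=1000,
                                     wall_seconds=5000.0, n_procs=528)
    print(f"total SUPS = {total:.2e}, SUPS/processor = {per_proc:.2e}")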

6
Discussion / Performance Metric
  • We are aiming for lattice sizes that cannot
    reside at one supercomputing center, but
  • Bell, Gray, Szalay, PetaScale Computational
    Systems: Balanced CyberInfrastructure in a
    Data-Centric World (September 2005)
  • If data can be regenerated locally, don't send it
    over the grid! (10⁵ ops/byte)
  • Higher disk-to-processing ratios: large disk
    farms
  • Thought experiment
  • Simulate an enormous lattice locally at one
    supercomputing center by swapping n portions to a
    disk farm
  • If we cannot exceed this performance, it is not
    worth using the Grid for GD3
  • Make the very optimistic assumption that disk
    access time is not limiting
  • Total SUPS is clearly constant, since it is one
    single multiprocessor
  • Therefore SUPS/processor degrades by 1/n, and
    total SUPS stays constant as the problem grows
  • We can do that now on the Grid. That is precisely
    the scaling that we see now. GD3 is a win!
  • And things are only going to get better
  • Improvements in store
  • UDP with added reliability (UDT) in MPICH-G2 will
    improve bandwidth
  • Multithreading in MPICH-G2 will overlap
    communication and computation, hiding the latency
    of bulk data transfers
  • Disk swapping moves data proportional to the
    subdomain volume; interprocessor communication
    moves only the surface. Keep data in the
    processors! (See the sketch below.)
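
A back-of-envelope sketch of that volume-versus-
surface argument; the per-site byte count is an
assumption modeled on a D3Q19 lattice Boltzmann state
(19 double-precision distributions), not a figure
from the slides:

    def bytes_moved_per_step(n, bytes_per_site=19 * 8, halo_width=1):
        # Disk swapping streams the whole n^3 volume each step; GD3
        # exchanges only the halo surface (6 faces) with remote sites.
        volume = n**3 * bytes_per_site
        surface = 6 * n**2 * halo_width * bytes_per_site
        return volume, surface

    vol, surf = bytes_moved_per_step(645)
    print(f"network traffic is {vol / surf:.1f}x smaller than disk traffic")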