1
Adoption and field tests of M.I.T. General
Circulation Model (MITgcm) with ESMF
  • Chris Hill
  • ESMF Community Meeting
  • MIT, July 2005

2
Outline
  • MITgcm: a very quick overview
  • algorithmic features
  • software characteristics
  • Adopting ESMF
  • strategy
  • steps
  • Field test applications
  • MITgcm coupling with everything (including
    itself!): interoperating with NCAR, GFDL and
    UCLA atmosphere models; intermediate-complexity
    coupled system.
  • high-end parameterization as a coupled problem.
  • Next steps

3
MITgcm algorithmic characteristics
  • General orthogonal curvilinear coordinate
    finite-volume dynamical kernel.
  • Flexible, scalable domain decomposition: 1 CPU to
    2000 CPUs.
  • Can apply to a wide range of scales, hydrostatic to
    non-hydrostatic.
  • Pressure-height isomorphism allows the kernel to
    apply to ocean or atmosphere.
  • Many optional packages spanning biogeochemistry,
    atmospheric physics, boundary layers, sea-ice,
    etc.
  • Adjoints of most parts for assimilation/state-estimation
    and sensitivity analysis.

and more; see http://mitgcm.org
[Figure panels: HYDROSTATIC, NON-HYDROSTATIC simulation examples]
4
MITgcm software characteristics
  • Fortran (what else?)
  • Approx. 170K executable statements
  • Generic driver code (superstructure), coupling
    code, computational kernel code, and parallelism,
    I/O, etc. support code (infrastructure) are
    modularized, which aligns with ESMF's sandwich
    architecture.
  • Target hardware: my laptop to the largest
    supercomputers (Columbia, Blue Gene); it tries
    to be portable!
  • OSes: Linux, HP-UX, Solaris, AIX, etc.
  • Parallelism: MPI binding; threads binding
    (dormant); platform-specific parallelism library
    support, e.g. active messages, SHMEM (dormant).
  • Distributed openly on the web. Supported through
    user/developer mailing list and website. Users all
    over the world.

5
Outline
  • MITgcm: a very quick overview
  • algorithmic features
  • software characteristics
  • Adopting ESMF
  • strategy
  • steps
  • Field test applications
  • MITgcm coupling with everything (including
    itself!): interoperating with NCAR, GFDL and
    UCLA atmosphere models; intermediate-complexity
    coupled system.
  • high-end parameterization as a coupled problem.
  • Next steps

6
Adoption strategy
  • Currently only in-house (i.e. the ESMF binding is
    not part of the default distribution). A practical
    consideration, as many MITgcm user systems do not
    have ESMF installed.
  • A set of ESMF experiments is maintained in the
    MITgcm CVS source repository and kept up to date
    with the latest ESMF (with a one-to-two-week lag).
  • These experiments use
  • the ESMF component model (init(), run(),
    finalize())
  • clocks, configuration attributes, field
    communications
  • primarily sequential-mode component execution
    (more on this later; a minimal driver loop is
    sketched below)
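A minimal sketch of such a sequential-mode driver loop, written
against the current ESMF Fortran API (the 2005-era interfaces
differed in detail); the component, state and routine names here
are hypothetical, not MITgcm's actual code:

  ! Sketch of a sequential-mode driver loop (hypothetical names).
  subroutine driver_run(ocnComp, atmComp, cpl, ocnExp, atmImp, clock, rc)
    use ESMF
    type(ESMF_GridComp)  :: ocnComp, atmComp
    type(ESMF_CplComp)   :: cpl
    type(ESMF_State)     :: ocnExp, atmImp
    type(ESMF_Clock)     :: clock
    integer, intent(out) :: rc

    do while (.not. ESMF_ClockIsStopTime(clock, rc=rc))
      ! Components run in turn on the same clock (sequential mode).
      call ESMF_GridCompRun(ocnComp, exportState=ocnExp, clock=clock, rc=rc)
      call ESMF_CplCompRun(cpl, importState=ocnExp, exportState=atmImp, &
                           clock=clock, rc=rc)
      call ESMF_GridCompRun(atmComp, importState=atmImp, clock=clock, rc=rc)
      call ESMF_ClockAdvance(clock, rc=rc)
    end do
  end subroutine driver_run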

7
Adoption steps: top level
  • Introduction of internal init(), run(),
    finalize() (entry-point registration is sketched
    below).
  • Development of couplers (and stub components to
    test against)
  • coupler_init(), coupler_run()
  • Development of drivers
  • driver_init(), driver_run()
  • Code can be seen in the CVS repository at
    mitgcm.org, under MITgcm_contrib/ESMF.
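The first step amounts to registering MITgcm's internal phases with
ESMF. A minimal sketch, assuming mitgcm_init/mitgcm_run/mitgcm_finalize
routines with the standard ESMF user-routine signature live in the same
module (routine names hypothetical; current ESMF Fortran API):

  ! Sketch of entry-point registration (hypothetical routine names).
  subroutine mitgcm_setservices(gcomp, rc)
    use ESMF
    type(ESMF_GridComp)  :: gcomp
    integer, intent(out) :: rc

    ! Register the internal phases so an ESMF driver can call them.
    call ESMF_GridCompSetEntryPoint(gcomp, ESMF_METHOD_INITIALIZE, &
                                    userRoutine=mitgcm_init, rc=rc)
    call ESMF_GridCompSetEntryPoint(gcomp, ESMF_METHOD_RUN, &
                                    userRoutine=mitgcm_run, rc=rc)
    call ESMF_GridCompSetEntryPoint(gcomp, ESMF_METHOD_FINALIZE, &
                                    userRoutine=mitgcm_finalize, rc=rc)
  end subroutine mitgcm_setservices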

8
Outline
  • MITgcm: a very quick overview
  • algorithmic features
  • software characteristics
  • Adopting ESMF
  • strategy
  • steps
  • Field test applications
  • MITgcm coupling with everything (including
    itself!): interoperating with NCAR, GFDL and
    UCLA atmosphere models.
  • high-end parameterization as a coupled problem.
  • Next steps

9
Field test: M.I.T. General Circulation Model
(MITgcm) to NCAR Community Atmosphere Model
(CAM).
  • Versions of CAM and MITgcm were adapted to
  • have init(), run(), finalize() interfaces
  • accept, encode and decode ESMF_State variables
  • A coupler component that maps the MITgcm grid to
    the CAM grid was written

Kluzek, Hill
[Diagram: 180x90 grid on 1x16 PEs; 128x64 grid on 1x16 PEs]
Runtime steps:
1. MITgcm prepares export state.
2. Export state passes through parent to coupler.
3. Coupler returns CAM-gridded SST array, which is
   passed as import state to the CAM gridded component.
  • Uses ESMF_GridComp, ESMF_CplComp and ESMF_Regrid
    sets of functions.
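The coupler's run phase is essentially a regrid of the SST field
between the two grids. A hedged sketch, not the actual MITgcm-CAM
coupler code (field/state names hypothetical; the route handle would
be precomputed in coupler_init(), as on the backup slide at the end):

  ! Sketch of an ocean-to-atmosphere coupler run phase.
  subroutine coupler_run(cpl, importState, exportState, clock, rc)
    use ESMF
    type(ESMF_CplComp)   :: cpl
    type(ESMF_State)     :: importState, exportState
    type(ESMF_Clock)     :: clock
    integer, intent(out) :: rc
    type(ESMF_Field) :: sstOcn, sstAtm
    ! In real code this lives at module scope, filled by coupler_init().
    type(ESMF_RouteHandle), save :: sstRouteHandle

    ! Ocean-grid SST from the import side; atmosphere-grid SST
    ! destination field on the export side.
    call ESMF_StateGet(importState, itemName="SST", field=sstOcn, rc=rc)
    call ESMF_StateGet(exportState, itemName="SST", field=sstAtm, rc=rc)

    ! Apply the interpolation precomputed at initialization.
    call ESMF_FieldRegrid(sstOcn, sstAtm, routehandle=sstRouteHandle, rc=rc)
  end subroutine coupler_run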

10
Field test: M.I.T. General Circulation Model
(MITgcm) to GFDL Atmosphere/Land/Ice (ALI).
  • Versions of MOM and MITgcm were adapted into
    components that
  • work within init(), run(), finalize() interfaces
  • accept, encode and decode ESMF_State variables
  • A coupler component that maps the MITgcm grid to
    the ALI grid was written
  • The MITgcm component is substituted for the MOM
    component, together with the MITgcm-ALI coupler

Smithline, Zhou, Hill
[Diagram: 144x90 grid on 16x1 PEs; 128x60 grid on 1x16 PEs]
Runtime steps:
1. MITgcm prepares export state.
2. Export state passes through parent to coupler.
3. Coupler returns ALI-gridded SST array, which is
   passed to ALI.
  • Uses ESMF_GridComp, ESMF_CplComp and ESMF_Regrid
    sets of functions.

11
SI experiment: M.I.T. General Circulation Model
(MITgcm) ECCO-assimilation ocean, and POP, to UCLA
atmosphere.
[Diagram: Obs. analysis feeding 3 mo. forecast A and 3 mo. forecast B]
  • Uses ESMF_GridComp, ESMF_CplComp and ESMF_Regrid
    sets of functions.

12
New app: high-end resolution embedding as a
coupled problem.
For a climate-related ocean simulation, domain
decomposition is limited in the number of
processors to which it can usefully scale; for a 1°
model there is perhaps no scaling beyond 64 CPUs.
This limit arises because parallelism costs
(communication overhead, overlap computations)
exceed parallelism benefits.
[Diagram: coarse model decomposed across processors 0-7]
Question: are there other things besides ensembles
of runs we can do with a thousand-processor
system? Increasing resolution is hard because
explicit-scheme timesteps drop with resolution,
which is not good for millennial simulations.
13
New app: high-end resolution embedding as a
coupled problem.
What about embedding local sub-models, running
concurrently on separate processors but coupled
to the coarse-resolution run?
[Diagram: coarse model on processors 0-7 with embedded sub-models on processors 65-319]
15
Implementation with ESMF
  • ESMF provides nice tools for developing this
    embedded system
  • the component-model abstraction for managing the
    different pieces
  • parallel regrid/redist provides a great tool for
    N-to-M coupling: regrid()/redist() precompute data
    flows at initialization, and at each timestep
    resolving data transport between 300-400
    components takes about 15 lines of user code (see
    the sketch below).
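A sketch of that per-timestep loop, assuming one pair of route
handles per sub-model was precomputed at initialization with
ESMF_FieldRegridStore() (array and field names hypothetical):

  ! Sketch: per-timestep transport between the coarse model and its
  ! embedded sub-models. Route handles rh(i)/rhBack(i) were built once
  ! at init, e.g. via
  !   call ESMF_FieldRegridStore(srcField=coarseFld, &
  !        dstField=subFld(i), routehandle=rh(i), rc=rc)
  do i = 1, nSubModels
    ! Coarse-grid state drives each embedded sub-model ...
    call ESMF_FieldRegrid(coarseFld, subFld(i), routehandle=rh(i), rc=rc)
    ! ... and each sub-model feeds its solution back.
    call ESMF_FieldRegrid(subFld(i), coarseFld, routehandle=rhBack(i), rc=rc)
  end do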

[Diagram: top component on processors 0-7; sub-components on processors 64-67; sub-sub-components on processors 316-319]
16
MITgcm with ESMF: next steps
  • Continued work in-house. Directions:
  • embedding with dynamic balancing
  • high-resolution ocean and coupled work
  • ESMF in the default MITgcm distribution
  • Most MITgcm user systems do not have ESMF
    installed yet. This will take time to change;
    how long?
  • Hopeful that in the next year this will evolve.

17
Summary
  • ESMF implementation functionality has grown
    significantly over the last year
  • optimized regrid/redist scaling
  • concurrent components
  • Performance is always within a factor of 2 of
    custom code at the infrastructure level; at the
    superstructure level (code driver, coupling), ESMF
    overhead is comparable to our own code.

18
  • coupler_init()
  • coupler_run()
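The transcript does not preserve this backup slide's body; a minimal
sketch of a coupler_init() that precomputes the regrid reused by the
coupler_run() sketched earlier (hypothetical names, current ESMF
Fortran API, not the original slide's code):

  ! Sketch of a coupler init phase; in real code the route handle
  ! would live at module scope, shared with coupler_run().
  subroutine coupler_init(cpl, importState, exportState, clock, rc)
    use ESMF
    type(ESMF_CplComp)   :: cpl
    type(ESMF_State)     :: importState, exportState
    type(ESMF_Clock)     :: clock
    integer, intent(out) :: rc
    type(ESMF_Field) :: sstOcn, sstAtm
    type(ESMF_RouteHandle), save :: sstRouteHandle

    call ESMF_StateGet(importState, itemName="SST", field=sstOcn, rc=rc)
    call ESMF_StateGet(exportState, itemName="SST", field=sstAtm, rc=rc)
    ! Precompute interpolation weights once; every later timestep
    ! reuses them via ESMF_FieldRegrid().
    call ESMF_FieldRegridStore(srcField=sstOcn, dstField=sstAtm, &
                               routehandle=sstRouteHandle, rc=rc)
  end subroutine coupler_init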

19
  • driver_init()
  • driver_run()
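Again, the backup slide's body is not in the transcript; a minimal
sketch of a driver_init() creating the components and states that the
driver loop sketched earlier would sequence (hypothetical names, not
the original slide's code):

  ! Sketch of a driver init phase (hypothetical names).
  subroutine driver_init(ocnComp, atmComp, cpl, ocnExp, atmImp, rc)
    use ESMF
    type(ESMF_GridComp), intent(out) :: ocnComp, atmComp
    type(ESMF_CplComp),  intent(out) :: cpl
    type(ESMF_State),    intent(out) :: ocnExp, atmImp
    integer,             intent(out) :: rc

    ! Create the gridded components, the coupler, and the states
    ! they exchange.
    ocnComp = ESMF_GridCompCreate(name="MITgcm ocean", rc=rc)
    atmComp = ESMF_GridCompCreate(name="atmosphere", rc=rc)
    cpl     = ESMF_CplCompCreate(name="ocn2atm coupler", rc=rc)
    ocnExp  = ESMF_StateCreate(name="ocean export", rc=rc)
    atmImp  = ESMF_StateCreate(name="atmos import", rc=rc)

    ! Register entry points, then run the Initialize phases.
    call ESMF_GridCompSetServices(ocnComp, userRoutine=mitgcm_setservices, rc=rc)
    ! ... SetServices for atmComp and cpl, then the
    ! ESMF_GridCompInitialize()/ESMF_CplCompInitialize() calls ...
  end subroutine driver_init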
