Title: CitcomS Tutorial
1. CitcomS Tutorial
- Eh Tan
- March 27, 2007
- CIG Training Session, EarthScope
2. Installation
- Build dependencies
  - MPI library
  - Python 2.3 or higher (2.4 if using a 64-bit machine), including header files
- Visualization packages
  - OpenDX
  - GMT
3. Networkless Installation
- In the CD-ROM/USB-drive Earthscope07/ directory (see the command sketch after this list):
  - untar CitcomS-2.2.0.tar.gz to your disk
  - cp merlin-1.1.egg CitcomS-2.2.0
  - cp -r deps/ CitcomS-2.2.0
- Source packages for Python and OpenDX are also included
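As a minimal sketch of those steps, assuming the media is mounted and you have copied the files out of Earthscope07/:

  tar xzf CitcomS-2.2.0.tar.gz
  cp merlin-1.1.egg CitcomS-2.2.0
  cp -r deps/ CitcomS-2.2.0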
4. Configuring CitcomS
- ./configure [options]
- Options include
  - --prefix=...: where to install CitcomS (see notes about make install on a later slide)
  - --without-pyre: if you don't have Python or don't like to use the Pyre framework (not recommended)
  - CC=your_c_compiler
  - CFLAGS=compiler_options (e.g. CFLAGS=-g for a debugging build)
  - LDFLAGS=linker_options (e.g. LDFLAGS=-L/opt/mpich/lib for a non-standard MPI library location)
- Try to configure without any options first (a combined example follows this list)
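If the plain ./configure fails, the options can be combined. A sketch, assuming an MPICH installation under /opt/mpich and an install prefix of $HOME/citcoms (both paths are illustrative):

  ./configure --prefix=$HOME/citcoms CC=mpicc CFLAGS="-g -O2" LDFLAGS=-L/opt/mpich/lib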
5. Troubleshooting 1
- Common pitfall 1: Python is too old
- Solution: install the newest Python from your OS vendor (or compile it from the source package)
  - the new Python will be in /usr/bin/python2.x (or /usr/local/bin/python2.x)
  - create a sym-link to python2.x in your directory:
    ln -s /usr/bin/python2.x $HOME/bin/python
  - prepend $HOME/bin to your PATH in your .bashrc or .cshrc (see the sketch below)
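A sketch of the PATH change, in bash and csh syntax respectively:

  # in ~/.bashrc:
  export PATH=$HOME/bin:$PATH

  # in ~/.cshrc:
  set path = ($HOME/bin $path)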
6. Troubleshooting 2
- Common pitfall 2: Python.h not found
- Solution: install the python-dev package from your OS vendor (or compile Python from the source package)
7. Compiling Python from Source (Optional)
- Download the Python source package from http://www.python.org
- Build and install the package (the full sequence is sketched below)
  - ./configure --prefix=/path/
  - make; make install
- Using this Python by default
  - create a sym-link to python2.x in your directory:
    ln -s /path/bin/python2.x $HOME/bin/python
  - prepend $HOME/bin to your PATH in your .bashrc or .cshrc
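Putting the steps together, a sketch assuming an install prefix of $HOME/python and Python 2.4 (both illustrative):

  ./configure --prefix=$HOME/python
  make
  make install
  ln -s $HOME/python/bin/python2.4 $HOME/bin/python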
8. Your Configuration
- If ./configure finished without problems, you should see a summary
- If ./configure finished with errors, see config.log for the detailed error information
  Configuration Summary
  PYTHON:      /sw/bin/python
  PYTHONPATH:
  CC:          mpicc
  CFLAGS:      -g -O2
  CPPFLAGS:
  LDFLAGS:
  with-pyre:   yes
  with-hdf5:   no
9. Building CitcomS
- Build: make
- Install: make install
  - this step is optional (and is not recommended for most users)
  - only needed when you are building CitcomS for all users on the system
- export PATH=$PREFIX/visual:$PATH
  - add this line to your .bashrc or .cshrc
  - required for the post-processing scripts
10. Visualization Packages
- OpenDX (Data Explorer)
  - install the dx package from your OS vendor
  - or build from the source package (http://www.opendx.org)
- GMT (Generic Mapping Tools)
  - isn't it already on your system?
  - http://gmt.soest.hawaii.edu
11. Meshes
12. Regional Mesh
- Grid lines are parallel to longitude and latitude
- The whole domain can be partitioned into N x M x L processors
[Figure: a regional mesh spanning 10ºN-40ºN and 0ºE-40ºE]
13. Global Mesh
- 12 caps
- Each cap extends from the surface to the CMB
- Each cap can be partitioned into N x N x M processors
- 12 x N x N x M processors in total (N = 4 in this figure)
14. Topology of 12 Caps
[Figure: unfolded map of the 12 caps, numbered 0-11, arranged in four columns between the poles]
- N: North Pole, S: South Pole
15. Topology of 12 Caps
[Figure: the same 12-cap map, with the 0ºE meridian marked]
- N: North Pole, S: South Pole
16. Parallel Partitioning
- Within each cap, the nodes are logically Cartesian
- Number of nodes within each cap is specified by (nodex, nodey, nodez)
- Number of processors within each cap is specified by (nprocx, nprocy, nprocz)
- Each processor has the same number of nodes (a worked example follows)
  - x-direction: ((nodex - 1) / nprocx) + 1 nodes per processor
[Figure: nodes 1-9 in the x-direction split between proc1 (nodes 1-5) and proc2 (nodes 5-9)]
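Worked example: with nodex = 9 and nprocx = 2, as in the figure, each processor holds ((9 - 1) / 2) + 1 = 5 nodes in the x-direction, and the two processors share node 5. Likewise, nodex = 17 with nprocx = 2 gives 9 nodes per processor.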
17. Coordinate System
- θ: colatitude (from the North Pole to the south)
- φ: longitude (from west to east)
- r: radius (from the CMB to the surface)
18. Ordering of Nodes
- In a regional mesh, the nodes are ordered by φ-θ-r
  - the r index increases first, then the θ index, then the φ index
- In a global mesh, the nodes are ordered in a similar way
  - the grid lines are not parallel to longitude or latitude, so θ and φ both change along a grid line
19. Solver
20. Physics
- Solves the viscous flow within a bounded, rigid mesh
  - a regional mesh is bounded on all sides
  - a global mesh is bounded at top and bottom
- The mesh is Eulerian: static and non-deforming
21. Solver
- Primary variables: V, P, T, C
- Velocity and pressure solver: requires η and ρ
  - ρ is a function of T and C (P-dependency will be included in a later version)
  - η can be a function of T, P, C, and V
- Temperature solver: requires ρ, Cp, and κ
  - Cp and κ are constant in the code
- Composition solver: ratio tracer method
22. Timestepping
- Given V(i-1) at t(i-1), advect T(i-1) and C(i-1) to T(i) and C(i)
- Compute ρ(i) according to T(i) and C(i)
- Compute η(i) according to T(i) and C(i)
- Compute V(i) according to ρ(i) and η(i)
23. Running CitcomS
24. Running serial jobs
- examples/example0.cfg
  - a simple 1-processor job
  - confirms your installation is working
25. Basic Format of the Input File
- [CitcomS.section.subsection]
  parameter = value
- Contents of examples/example0.cfg:

  [CitcomS]
  steps = 5

  [CitcomS.controller]
  monitoringFrequency = 1

  [CitcomS.solver]
  datafile = example0

  [CitcomS.solver.mesher]
  nprocx = 1
  nprocy = 1
  nodex = 9
  nodey = 9
  nodez = 9
26. Basic Format of the Input File (annotated)
- In example0.cfg above:
  - steps: # of time steps
  - monitoringFrequency: interval of output
  - datafile: prefix of the output filename
  - nprocx, nprocy: # of processors
  - nodex, nodey, nodez: # of nodes
27. Changing Parameters
- from the system default file
  - $HOME/.pyre/CitcomS/CitcomS.cfg
  - useful for system-wide configuration, e.g. configuration for your cluster
- from the input files
  - bin/citcoms common.cfg case1.cfg
  - bin/citcoms common.cfg case2.cfg
- from the command line (see the example below)
  - bin/citcoms --section.subsection.parameter=value
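For example, a one-off override of the mesh size from example0.cfg could look like this (nodez is used here purely as an illustration):

  bin/citcoms examples/example0.cfg --solver.mesher.nodez=17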
28. Screen Output

  tan2@ifox:tracer$ bin/citcoms examples/example0.cfg
  Problem has 9 x 9 x 9 nodes
  initialization time = 0.284152
  initial residue of momentum equation F = 1.236970360e-01 2187
  AhatP (000) after 0.24873 seconds with div/v=3.020e-03 dv/v=0.000e+00 and dp/p=0.000e+00 for step 0
  AhatP (001) after 0.480835 seconds with div/v=9.016e-04 dv/v=3.544e-01 and dp/p=1.000e+00 for step 0
  AhatP (002) after 0.686722 seconds with div/v=2.487e-04 dv/v=7.652e-02 and dp/p=2.893e-01 for step 0
  ...
29. Screen Output (annotated)
- In the output above:
  - div/v: accuracy of the continuity eqn
  - dv/v: relative size of the velocity correction
  - dp/p: relative size of the pressure correction
30. Accuracy Settings
- Parameters affecting the accuracy of the velocity solver
  - under CitcomS.solver.vsolver (a sample block follows this list)
  - tole_compressibility: if any one of (div/v, dv/v, dp/p) is smaller than tole_compressibility, the iterations finish
  - piterations: max. number of iterations
  - accuracy: accuracy of the linear equation solver (cgrad or multigrid)
- The default values are conservative
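A sketch of such a block; the values here are illustrative, not the shipped defaults:

  [CitcomS.solver.vsolver]
  tole_compressibility = 1e-7
  piterations = 1000
  accuracy = 1e-6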
31. Running parallel jobs
- examples/example1.cfg
  - a simple 4-processor job
  - confirms your parallel setup is correct
32. Contents of examples/example1.cfg

  [CitcomS]
  steps = 71

  [CitcomS.controller]
  monitoringFrequency = 10

  [CitcomS.solver]
  datafile = example1

  [CitcomS.solver.mesher]
  nprocx = 2
  nprocy = 2
  nodex = 17
  nodey = 17
  nodez = 9

- nprocx x nprocy = 2 x 2 = 4: # of processors
33. Launching Parallel Jobs
- Most common case: a cluster without a scheduler
- under CitcomS.launcher (shown as a cfg block after this list)
  - nodegen: printf-style string to generate the machine hostnames
  - nodelist: comma-separated list of machine names
  - Ex: nodegen=m%03d, nodelist=[1,3-5] -> m001, m003, m004, m005
- Put nodegen in a system default file
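The example above, written as a cfg block (hostnames and node numbers depend on your cluster):

  [CitcomS.launcher]
  nodegen = m%03d
  nodelist = [1,3-5]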
34. Launching Parallel Jobs
- Less common case: a cluster with a scheduler
  - supported schedulers: LSF, PBS, Globus
  - handled case by case
- Less common case: a single workstation
  - similar to serial jobs
35. Basic Troubleshooting
- The error message is usually helpful, but can be hard to find
- Several places to look for error messages
  - the screen output
  - the end of the log file: datafile.log
  - if using tracers, the end of the tracer log files: datafile.tracer_log.rank
36. Understanding the Output
- Output format can be either ASCII or HDF5 (we will always use ASCII in this tutorial)
- Output directory specified by datadir (see the sketch after this list)
- Special strings in datadir
  - HOSTNAME: replaced by the hostname of the computer
  - RANK: replaced by the MPI rank of the process
  - DATADIR: replaced by the string returned by a script
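An illustrative sketch; the /scratch path is hypothetical, and the %-prefixed spelling of the special string is an assumption (check the manual for the exact syntax):

  [CitcomS.solver]
  datadir = /scratch/citcoms/%HOSTNAME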
37. Output Files
- Plenty of output files
- Each has a name like datafile.xxxx.rank.step
  - datafile: prefix
  - rank: MPI rank
  - step: time step; the interval is controlled by the parameter controller.monitoringFrequency
- Coordinate output: datafile.coord.rank
- Timing output: datafile.time
- Formats of the output files can be found in the manual (Appendix C)
38. Output Files
- datafile.xxxx.rank.step, where xxxx can be:
  - velo: velocity and temperature
  - visc: viscosity
  - surf/botm: topography and heat flux
  - stress: deviatoric stress tensor
  - pressure: pressure
  - geoid: geoid (in spherical harmonic coefficients)
  - tracer: tracer information
  - comp_el/comp_nd: composition (on elements or nodes)
39. Output Optional Data
- Except for coord, velo, and visc, the other data outputs can be turned on/off
- surf and botm data are output by default (but can be turned off)
- E.g. this turns on pressure and stress, but turns off surf and botm:

  [CitcomS.solver.output]
  output_optional = pressure,stress
40. Combining the Data
- Plenty of data files, scattered over the cluster nodes
- Combine the data into one file per cap per time step with autocombine.py
- See the usage by running autocombine.py without any options
41. autocombine.py
- usage: autocombine.py machinefile inputfile step1 [step2 ...] (a sample invocation follows this list)
- machinefile: filename of the MPI machinefile, or localhost if the data are on a local file system
- inputfile: the original input file, or the pidfile (pid12345.cfg)
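For example, combining the last output of example1 (step 70, given steps = 71 and monitoringFrequency = 10) on a local file system might look like:

  autocombine.py localhost example1.cfg 70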
42. Combined Results
- Each cap has two files, e.g.
  - prefix.cap05.100: the data file
  - prefix.cap05.100.general: the header file for OpenDX
43. Visualization in GMT
- plot_layer.py
  - plots a horizontal cross-section, for both regional and global versions
- plot_annulus.py
  - plots a radial cross-section, for the global version only
- sample data files in visual/samples/
44. plot_layer.py
45. plot_annulus.py
46. Visualization in OpenDX
47. Multigrid Parameters
- Solver=multigrid: turn on the multigrid solver (a sample block follows this list)
- levels=1: levels of nested multigrid, must be compatible with the mesh size
- mg_cycle=1: V-cycle or W-cycle
- down_heavy=3: number of smoothing iterations during downward cycles
- up_heavy=3: number of smoothing iterations during upward cycles
- vlowstep=1000: number of smoothing iterations at the lowest level
- vhighstep=3: number of smoothing iterations at the highest level
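A sketch for the 17 x 17 x 9 mesh of example1; levels = 3 is an assumption that keeps each coarsened mesh at 2^n + 1 nodes (17 -> 9 -> 5 and 9 -> 5 -> 3), and the other values simply echo the defaults listed above:

  [CitcomS.solver.vsolver]
  Solver = multigrid
  levels = 3
  mg_cycle = 1
  down_heavy = 3
  up_heavy = 3
  vlowstep = 1000
  vhighstep = 3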
48. Cookbooks
49. Cookbook 1 - Global Model
50. Cookbook 2 - Uniform Velocity Boundary Conditions
51. Cookbook 3 - Temperature-Dependent Viscosity
52. Cookbook 4 - Mesh Refinement
53. Cookbook 5 - Time-Dependent Velocity Boundary Conditions
54. Cookbook 6 - Pseudo-Free-Surface Topography
55. Cookbook 7 - Thermo-Chemical Convection
56. Getting Help
- Contact me: tan2@geodynamics.org
- Post on the mailing list: cig-mc@geodynamics.org
- Report a bug: http://www.geodynamics.org/bugs