1
Kickstart tutorial on using sciblade cluster
for potential and new users
8 October, 2011
High Performance Cluster Computing Centre (HPCCC)
Faculty of Science
Hong Kong Baptist University
2
Outline
  • Hardware configurations
  • Recent Software Installed
  • Basic Login and job submission procedure
  • Parallel Program Examples
  • Policy for using sciblade.sci.hkbu.edu.hk
  • Acknowledgement
  • http://www.sci.hkbu.edu.hk/hpccc/sciblade

3
Latest Cluster Hardware configurations
4
Cluster Hardware
  • This 256-node PC cluster (sciblade) consists of:
  • Master node x 2
  • IO nodes x 3 (storage)
  • Compute nodes x 256
  • Blade Chassis x 16
  • Management network
  • Interconnect fabric
  • 1U console KVM switch
  • Emerson Liebert Nxa 120kVA UPS

5
Sciblade Cluster
A 256-node cluster supported by funding from the RGC
6
Hardware Configuration
  • Master Node
  • Dell PE1950, 2x Xeon E5450 3.0GHz (Quad Core)
  • 16GB RAM, 73GB x 2 SAS drive
  • IO nodes (Storage)
  • Dell PE2950, 2x Xeon E5450 3.0GHz (Quad Core)
  • 16GB RAM, 73GB x 2 SAS drive
  • 3TB storage Dell PE MD3000
  • Compute nodes x 256, each with:
  • Dell PE M600 blade server w/ Infiniband network
  • 2x Xeon E5430 2.66GHz (Quad Core)
  • 16GB RAM, 73GB SAS drive

7
Hardware Configuration
  • Blade Chassis x 16
  • Dell PE M1000e
  • Each hosts 16 blade servers
  • Management Network
  • Dell PowerConnect 6248 (Gigabit Ethernet) x 6
  • Interconnect fabric
  • Qlogic SilverStorm 9120 switch
  • Console and KVM switch
  • Dell AS-180 KVM
  • Dell 17FP Rack console
  • Emerson Liebert Nxa 120kVA UPS

8
Software List
  • Operating System
  • ROCKS 5.1 Cluster OS
  • CentOS 5.3 kernel 2.6.18
  • Job Management System
  • Portable Batch System
  • MAUI scheduler
  • Compilers, Languages
  • Intel Fortran/C/C++ Compiler for Linux V11
  • Intel Cluster Studio 2011
  • GNU 4.1.2/4.4.0 Fortran/C/C++ Compiler

9
Software List
  • Message Passing Interface (MPI) Libraries
  • MVAPICH 1.1
  • MVAPICH2 1.2
  • OPEN MPI 1.3.2
  • Mathematical libraries
  • ATLAS 3.8.3
  • FFTW 2.1.5/3.2.1
  • SPRNG 2.0a(C/Fortran) /4.0(C/Fortran)
  • ScaLAPACK 1.8.0

10
Software List
  • Molecular Dynamics / Quantum Chemistry
  • Gamess 2009R1
  • Gaussian 03, Gaussian 09
  • Gromacs 4.0.7
  • LAMMPS
  • Namd 2.7b1
  • Siesta 3.0b
  • Third-party Applications
  • MATLAB 2008b with pmatlab
  • TAU 2.18.2, VisIt 1.11.2
  • Xmgrace 5.1.22

11
Software List
  • Queuing system
  • Torque/PBS
  • Maui scheduler
  • Editors
  • vi
  • emacs

12
Hostnames
  • Master node
  • External: sciblade.sci.hkbu.edu.hk
  • Internal: frontend-0
  • IO nodes (storage)
  • pvfs2-io-0-0, pvfs2-io-0-1, pvfs-io-0-2
  • Compute nodes
  • compute-0-0.local, ..., compute-0-255.local

13
Basic Login and Job Submission Procedure
14
Basic login
  • Remote login to the master node
  • Terminal login
  • using secure shell
  • ssh -l username sciblade.sci.hkbu.edu.hk
  • Graphical login
  • PuTTY + vncviewer, e.g.
  • [username@sciblade] $ vncserver
  • New 'sciblade.sci.hkbu.edu.hk:3 (username)'
    desktop is sciblade.sci.hkbu.edu.hk:3
  • It means that your session will run on display 3.

15
Graphical login
  • Using PuTTY to set up a secure connection with
    Host Name: sciblade.sci.hkbu.edu.hk

16
Graphical login (cont)
  • ssh protocol version

17
Graphical login (cont)
  • Port = 5900 + display number (i.e. 3 in this case)


18
Graphical login (cont)
  • Next, click Open, and login to sciblade
  • Finally, run VNC Viewer on your PC, and enter
    "localhost:3", where 3 is the display number
  • You should terminate your VNC session after you
    have finished your work. To terminate your VNC
    session running on sciblade, run the command
  • [username@sciblade] $ vncserver -kill :3
    (the complete login/logout sequence is sketched below)

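Putting the login steps together, a typical terminal-plus-VNC session might look like the sketch below. The display number 3 (and hence port 5903 = 5900 + 3) is only an example; use whatever number vncserver reports for your own session.

  # 1. Secure shell to the master node
  ssh -l username sciblade.sci.hkbu.edu.hk

  # 2. Start a VNC desktop on sciblade and note the display number it reports
  vncserver
  # New 'sciblade.sci.hkbu.edu.hk:3 (username)' desktop is sciblade.sci.hkbu.edu.hk:3

  # 3. In PuTTY, forward a local port to port 5903 (5900 + display number) on
  #    sciblade, then point VNC Viewer at localhost:3

  # 4. When you have finished, kill the VNC session to free the display
  vncserver -kill :3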
19
Linux commands
  • Both master and compute nodes are installed with
    Linux
  • Frequently used Linux commands in the PC cluster:
    http://www.sci.hkbu.edu.hk/hpccc/sciblade/faq_sciblade.php

cp       cp f1 f2 dir1            copy files f1 and f2 into directory dir1
mv       mv f1 dir1               move/rename file f1 into dir1
tar      tar xzvf abc.tar.gz      uncompress and untar a tar.gz format file
tar      tar czvf abc.tar.gz abc  create an archive file with gzip compression
cat      cat f1 f2                print the contents of files f1 and f2
diff     diff f1 f2               compare text between two files
grep     grep student *           search all files for the word "student"
history  history 50               list the last 50 commands stored in the shell
kill     kill -9 2036             terminate the process with pid 2036
man      man tar                  display the on-line manual page for tar
nohup    nohup runmatlab a        run matlab (a.m) without hangup after logout
ps       ps -ef                   list all processes running in the system
sort     sort -r -n studno        sort studno in reverse numerical order
20
ROCKS specific commands
  • ROCKS provides the following commands for users
    to run programs on all compute nodes, e.g. (brief
    usage examples follow this list)
  • cluster-fork
  • Run program in all compute nodes
  • cluster-fork ps
  • Check user process in each compute node
  • cluster-kill
  • Kill user processes on all compute nodes at one time
  • tentakel
  • Similar to cluster-fork but run faster

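Brief usage examples for the commands above. The option syntax here is a sketch of typical usage; see the HPCCC FAQ page for the exact forms supported on sciblade.

  # run a command on every compute node, one node after another
  cluster-fork ps

  # tentakel does the same job but contacts the nodes in parallel, so it returns faster
  tentakel "uptime"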
21
Ganglia
  • Web based management and monitoring
  • http://sciblade.sci.hkbu.edu.hk/ganglia

22
Job Submission Procedures
23
Job Submission Procedure
  • Prepare and compile a program, e.g.
  • mpicc -o hello hello.c
  • Prepare a job submission script, e.g.
  • Qhello.pbs
  • Submit the job using qsub. e.g.
  • qsub Qhello.pbs
  • Note the jobID. Monitor with showq or qstat
  • Examine the error and output files, e.g.
  • hello.o<JobID>, hello.e<JobID>
    (a combined sample session follows below)

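The whole cycle as a single sample session. The job ID 15238 and the file names are illustrative; your own job will get a different ID.

  mpicc -o hello hello.c            # compile the MPI program
  qsub Qhello.pbs                   # submit the job script
  # 15238.sciblade2.sci.hkbu.edu.hk   <- PBS prints the job ID
  showq                             # or: qstat 15238, to monitor the job
  cat hello.o15238                  # standard output after the job finishes
  cat hello.e15238                  # standard error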
24
Sample Program hello.c
#include <stdio.h>
#include "mpi.h"                            // MPI compiler header file

void main(int argc, char** argv)
{
  int nproc, myrank, ierr;
  ierr = MPI_Init(&argc, &argv);            // MPI initialization
  // Get number of MPI processes
  MPI_Comm_size(MPI_COMM_WORLD, &nproc);
  // Get process id for this processor
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
  printf("Hello World!! I'm process %d of %d\n", myrank, nproc);
  ierr = MPI_Finalize();                    // Terminate all MPI processes
}
25
Compiling & Running MPI Programs
  • Using mvapich 1.1
  • Setting path, at the command prompt, type
  • export PATH=/u1/local/mvapich1/bin:$PATH
  • (uncomment this line in .bashrc)
  • Compile using mpicc, mpiCC, mpif77 or mpif90,
    e.g.
  • mpicc -o hello hello.c
  • Prepare a hostfile (e.g. machines) listing the
    compute nodes:
  • compute-0-0
  • compute-0-1
  • compute-0-2
  • compute-0-3
  • Run the program with the required number of processes:
  • mpirun -np 4 -machinefile machines ./hello
    (the steps are combined into one session below)

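The same steps combined into one session. The hostfile name machines and the four node names are taken from the list above; any reachable compute nodes could be used instead.

  export PATH=/u1/local/mvapich1/bin:$PATH     # or uncomment this line in ~/.bashrc
  mpicc -o hello hello.c                       # compile with the mvapich wrapper
  printf "compute-0-0\ncompute-0-1\ncompute-0-2\ncompute-0-3\n" > machines
  mpirun -np 4 -machinefile machines ./hello   # run on 4 processes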
26
Prepare parallel job script, Qhello.pbs
#!/bin/sh
# Job name
#PBS -N hello
# Declare job non-rerunable
#PBS -r n
#PBS -l nodes=20
#PBS -l walltime=00:08:00
# This job's working directory
cd $PBS_O_WORKDIR
echo Running on host `hostname`
echo Time is `date`
echo Directory is `pwd`
echo This job runs on the following processors:
echo `cat $PBS_NODEFILE`
# Define number of processors
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS nodes
# Run the parallel MPI executable "hello"
/u1/local/mvapich1/bin/mpirun -v -machinefile $PBS_NODEFILE -np $NPROCS ./hello

27
Job submission and monitoring
  • Submit the job
  • qsub Qhello.pbs
  • Note the jobID. e.g.
  • 15238.sciblade2.sci.hkbu.edu.hk
  • Monitor by qstat, e.g. qstat 15238
    Job id                    Name             User            Time Use S Queue
    ------------------------- ---------------- --------------- -------- - -----
    15238.sciblade2           hello            morris                 0 R default

28
Job monitoring
  • Show the status of submitted jobs
  • showq

13896     dhhe       Running    16     INFINITY   Mon May  3 04:48:25
14402     dhhe       Running    16     INFINITY   Wed May  5 23:46:09
14403     dhhe       Running    16     INFINITY   Wed May  5 23:47:07

67 Active Jobs   2012 of 2024 Processors Active (99.41%)
                  253 of  253 Nodes Active      (100.00%)

IDLE JOBS----------------------
JOBNAME   USERNAME   STATE    PROC   WCLIMIT        QUEUETIME
0 Idle Jobs

BLOCKED JOBS-------------------
JOBNAME   USERNAME   STATE    PROC   WCLIMIT        QUEUETIME
14951     ggl        Idle       32    4:00:00:00    Mon May 10 00:55:19
15011     justin     Idle       32    7:00:00:00    Mon May 10 15:48:36
15098     hkbu09     Idle       50   33:08:00:00    Tue May 11 11:46:45
  • Delete jobID by qdel. e.g.
  • qdel 15238

29
Assorted Program Examples
30
Example codes
  • Updated example codes have been stored in
    /u1/local/share/examples/
  • All codes are also packed in one file:
    /u1/local/share/examples.tar.gz (see the commands below)
  • Unzip and Untar using
  • tar xzvf examples.tar.gz

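For example, to take a personal copy of the bundle into your home directory (any writable destination will do):

  cp /u1/local/share/examples.tar.gz ~
  cd ~
  tar xzvf examples.tar.gz      # unpacks the example codes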
31
Example 1 OpenMP
/u1/local/share/examples/omp
32
OpenMP
  • The OpenMP Application Program Interface (API)
    supports multi-platform shared-memory parallel
    programming in C/C++ and Fortran on all
    architectures, including Unix platforms and
    Windows NT platforms.
  • Jointly defined by a group of major computer
    hardware and software vendors.
  • OpenMP is a portable, scalable model that gives
    shared-memory parallel programmers a simple and
    flexible interface for developing parallel
    applications for platforms ranging from the
    desktop to the supercomputer.

33
OpenMP compiler choice
  • gcc 4.4.0 or above
  • compile with -fopenmp
  • Intel 10.1 or above
  • compile with /Qopenmp on Windows
  • compile with -openmp on Linux
  • PGI compiler
  • compile with -mp
  • Absoft Pro Fortran
  • compile with -openmp
    (example compile commands follow after this list)

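For instance, the hello-world program on the next slide could be compiled and run on sciblade roughly as follows. The source and binary names omp_hello.c / omp_hello are placeholders.

  # GNU compiler (4.4.0 or above)
  gcc -fopenmp -o omp_hello omp_hello.c

  # Intel compiler on Linux
  icc -openmp -o omp_hello omp_hello.c

  # choose the number of threads, then run
  export OMP_NUM_THREADS=8
  ./omp_hello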
34
Sample openmp example
#include <omp.h>
#include <stdio.h>

int main()
{
  #pragma omp parallel
  printf("Hello from thread %d, nthreads %d\n",
         omp_get_thread_num(), omp_get_num_threads());
  return 0;
}

35
serial-pi.c
#include <stdio.h>
static long num_steps = 10000000;
double step;

int main()
{
  int i; double x, pi, sum = 0.0;
  step = 1.0 / (double) num_steps;
  for (i = 0; i < num_steps; i++) {
    x = (i + 0.5) * step;
    sum = sum + 4.0 / (1.0 + x * x);
  }
  pi = step * sum;
  printf("Est Pi = %f\n", pi);
  return 0;
}

36
Openmp version of spmd-pi.c
#include <omp.h>
#include <stdio.h>
static long num_steps = 10000000;
double step;
#define NUM_THREADS 8

int main()
{
  int i, nthreads; double pi, sum[NUM_THREADS];
  step = 1.0 / (double) num_steps;
  omp_set_num_threads(NUM_THREADS);
  #pragma omp parallel
  {
    int i, id, nthrds;
    double x;
    id = omp_get_thread_num();
    nthrds = omp_get_num_threads();
    if (id == 0) nthreads = nthrds;
    for (i = id, sum[id] = 0.0; i < num_steps; i = i + nthrds) {
      x = (i + 0.5) * step;
      sum[id] += 4.0 / (1.0 + x * x);
    }
  }
  /* the slide ends here; the usual final reduction is added for completeness */
  for (i = 0, pi = 0.0; i < nthreads; i++) pi += sum[i] * step;
  printf("Est Pi = %f\n", pi);
  return 0;
}

37
Submit parallel jobs into torque batch queue
  • Prepare a job script, say omp.pbs like the
    following
#!/bin/sh
# Job name
#PBS -N OMP-spmd
# Declare job non-rerunable
#PBS -r n
# Mail to user
#PBS -m ae
# Queue name (small, medium, long, verylong)
# Number of nodes
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:08:00
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=8
./omp_hello
./omp_test
./serial-pi
./omp-spmd-pi
  • Submit it using qsub

38
Example 2 Siesta 3.0b
  • Spanish Initiative for Electronic Simulations
    with Thousands of Atoms
  • perform electronic structure calculations and ab
    initio molecular dynamics simulations of
    molecules and solids.
  • Project website: http://www.icmab.es/siesta
  • Example directory: /u1/local/share/examples/siesta/h2o

39
Siesta example input file h2o.fdf
  • Input file Flexible data format (FDF), e.g.
    h2o.fdf
SystemName          Water molecule
SystemLabel         h2o
NumberOfAtoms       3
NumberOfSpecies     2

%block ChemicalSpeciesLabel
 1  8  O      # Species index, atomic number, species label
 2  1  H
%endblock ChemicalSpeciesLabel

AtomicCoordinatesFormat  Ang
%block AtomicCoordinatesAndAtomicSpecies
 0.000  0.000  0.000  1
 0.757  0.586  0.000  2
-0.757  0.586  0.000  2
%endblock AtomicCoordinatesAndAtomicSpecies

40
Siesta sample pbs file h2o.pbs
#!/bin/bash
#PBS -N siesta-h2o
#PBS -l nodes=8
#PBS -l walltime=6:00:00
#PBS -l pmem=512mb
NCPU=`wc -l < $PBS_NODEFILE`
cd $PBS_O_WORKDIR
MPIPATH=/u1/local/mvapich2/bin
$MPIPATH/mpirun_rsh -np $NCPU -hostfile $PBS_NODEFILE /u1/local/bin/siesta < h2o.fdf
  • Submit the above h2o.pbs using qsub
  • qsub h2o.pbs

41
Example 3 pmatlab
  • Pmatlab is developed by MIT Lincoln Laboratory
  • Installed with MATLAB 2008b
  • Example directory: /u1/local/share/examples/pmatlab
  • Startup.m: MATLAB startup file
  • RUN.m: control file for running on the compute nodes
  • sample_app.m: main program
  • Qpmatlab.pbs: submit script

42
Pmatlab idea of distributed matrix
  • New data type dmat
  • Overload functions zeros, ones, rand, with an
    additional parameter Map
  • Map tells pmatlab how and where the dmat must be
    distributed
  • A Map has four components:
  • Grid, e.g. [2 3] means a 2 x 3 processor grid
  • Distribution:
  • block: contiguous blocks of data
  • cyclic: data are interleaved across processors
  • block-cyclic: a combination of the two
  • Processor list, e.g. 0:nCPUs-1

43
Pmatlab examples of map grid
44
Example: Prime
  Files: prime/prime.c, prime/prime.f90, prime/primeParallel.c, prime/Makefile, prime/machines
  Compile by the command: make
  Run the serial program by ./primeC or ./primeF
  Run the parallel program by: mpirun -np 4 -machinefile machines ./primeMPI

Example: Ring
  Files: ring/ring.c, ring/Makefile, ring/machines
  Compile the program by the command: make
  Run the program in parallel by: mpirun -np 4 -machinefile machines ./ring < in

Example: mcPi
  Files: mcPi/mcPi.c, mcPi/mc-Pi-mpi.c, mcPi/Makefile, mcPi/QmcPi.pbs
  Compile by the command: make
  Run the serial program by ./mcPi
  Submit the job to the PBS queuing system by: qsub QmcPi.pbs

Example: Sorting
  Files: sorting/qsort.c, sorting/bubblesort.c, sorting/script.sh, sorting/qsort, sorting/bubblesort
  Submit the job to the PBS queuing system by: qsub script.sh
45
Policy for using sciblade.sci.hkbu.edu.hk
46
Policy
  1. Every user shall apply for his/her own computer
    user account to login to the master node of the
    PC cluster, sciblade.sci.hkbu.edu.hk.
  2. Users must not share their accounts and
    passwords with other users.
  3. Every user must deliver jobs to the PC cluster
    from the master node via the PBS job queuing
    system. Automatic dispatching of jobs using
    scripts or robots is not allowed.
  4. Users are not allowed to login to the compute
    nodes.
  5. Foreground jobs on the PC cluster are restricted
    to program testing, and their duration should not
    exceed 1 minute of CPU time per job.

47
Policy (continued)
  • Any background jobs run on the master node or
    compute nodes are strictly prohibited and will be
    killed without prior notice.
  • The current restrictions of the job queuing
    system are as follows,
  • The maximum number of running jobs in the job
    queue is 8.
  • The maximum total number of CPU cores in use at
    any one time cannot exceed 512.
  • The restrictions in item 7 will be reviewed from
    time to time as the number of users and their
    computation needs grow.

48
Good Practice in using sciblade
  • logout from the master node after use
  • delete unused files or compress temporary data
  • estimate the walltime for running jobs and
    acquire just enough walltime for running.
  • never run foreground jobs on the master node or
    the compute nodes
  • report abnormal behaviours.

49
Acknowledgement
  • When you make presentations or publish papers, we
    would appreciate it if you would kindly
    acknowledge the HPCCC by including
  • "This research was conducted using the resources
    of the High Performance Cluster Computing Centre,
    Hong Kong Baptist University, which receives
    funding from Research Grant Council, University
    Grant Committee of the HKSAR and Hong Kong
    Baptist University."
  • Use of Center resources constitutes an agreement
    to provide copies of any publication or news
    stories concerning research conducted using our
    systems and/or consulting services.
  • Please send acknowledgement e-mail to
    hpccc@sci.hkbu.edu.hk. Thank you