Title: Introduction to ARSC Systems and Services
1 Introduction to ARSC Systems and Services
- Derek Bastille
- bastille@arsc.edu
- 907-450-8643
- User Consultant/Group Lead
2 Outline
- About ARSC
- ARSC Compute Systems
- ARSC Storage Systems
- Available Software
- Account Information
- Utilization / Allocations
- Questions
3 About ARSC
- We are not a government site
- Owned and operated by the University of Alaska Fairbanks
- Focus on the Arctic and other polar regions
- Involved in the International Polar Year
- Host a mix of HPCMP and non-DoD users
- On-site staff and faculty perform original research in fields like Oceanography, Space Weather Physics, Vulcanology and Large Text Retrieval
4About ARSC
- Part of the HPCMP as an Allocated Distributed
Center - Participated in various Technology Insertion
initiatives - New Cray system acquired as part of TI-08)
- Allocate 70 of available cycles to HPCMP
projects and users - Locally allocate remaining 30 to non-DoD
Universities and other government agencies - Connectivity is primarily via DREN OC12
- Host Service Academy cadets during the summer
along with other academic interns
5 About ARSC
- An Open Research Center
- All ARSC systems are open research
- Only unclassified and non-sensitive data
- Can host US citizens or foreign nationals who are NAC-less
- Also host undergraduate and graduate courses through the University of Alaska
- Happy to work with other universities for student and class system usage
6 About ARSC
- Wide variety of Projects
- Computational Technology Areas
7 About ARSC
- [Charts: number of projects, number of jobs, and CPU hours]
8 ARSC Systems
- Iceberg (AK6)
- IBM Power4 (800 cores)
- 5 TFlops peak
- 92 p655 nodes
- 736 1.5 GHz CPUs
- 2 p690 nodes
- 64 1.7 GHz CPUs
- 25 TB disk
- Will be retired on 18 July 2008
9 ARSC Systems
- Midnight (AK8)
- SuSE Linux 9.3 Enterprise
- All nodes have 4 GB per core
- 358 Sun Fire X2200 nodes
- 2 dual-core 2.6 GHz Opterons per node
- 55 Sun Fire X4600 nodes
- 8 dual-core 2.6 GHz Opterons per node
- Voltaire InfiniBand switch
- PBS Pro
- 68 TB Lustre filesystem
10 ARSC Systems
- Pingo
- Cray XT5
- 3,456 2.6 GHz Opteron cores
- 4 GB per core
- 13.5 TB total memory
- 432 nodes
- 31.8 TFlops peak
- SeaStar interconnect
- 150 TB storage
- Working towards FY2009 availability (October 2008)
11 ARSC Systems - Storage
- Seawolf / Nanook
- Sun Fire 6800
- 8 900 MHz CPUs
- 16 GB total memory
- 20 TB local (seawolf)
- 10 TB local (nanook)
- Fibre Channel to STK silo
- ARCHIVE NFS mounted (see the sketch below)
- StorageTek silo
- SL8500
- > 3 PB theoretical capacity
- STK T10000 and T9940 drives
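A quick sketch of staging results to the NFS-mounted archive. The $ARCHIVE variable below is a hypothetical stand-in for whatever archive path ARSC actually publishes; check the storage documentation for the real location.

    # Copy a results directory to long-term storage ($ARCHIVE is a placeholder path)
    cp -r run_output $ARCHIVE/myproject/
    ls -l $ARCHIVE/myproject/run_output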
12 ARSC Systems - Data Analysis
- Discovery Lab
- MD Flying Flex
- Multichannel audio and video
- Located on UAF campus
- Other Linux / OS X workstations available for post-production, data analysis, animation and rendering
- Access Grid nodes
- Collaborations with many UAF departments
13 ARSC Systems - Software
- All the usual suspects
- Matlab, ABAQUS, NCAR, Fluent, etc.
- GNU tools, various libraries, etc.
- Several HPCMP Consolidated Software Initiative packages and tools
- Several compilers on Midnight (see the example below)
- PathScale, GNU, Sun Studio
- www.arsc.edu/support/resources/software.phtml
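As a rough illustration, the same source file can be built with each suite. The driver names and flags below are generic assumptions (Sun Studio's C compiler is typically installed as suncc on Linux), not ARSC-specific settings.

    # Build one program with each compiler suite on midnight (illustrative flags)
    pathcc -O3 -o hello_path hello.c    # PathScale
    gcc    -O3 -o hello_gnu  hello.c    # GNU
    suncc  -O3 -o hello_sun  hello.c    # Sun Studio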
14 Access Policies
- Similar access policies to other HPCMP centers
- All logins to ARSC systems are via kerberized clients
- ssh, scp, kftp, krlogin (see the login sketch below)
- ARSC issues SecurID cards for the ARSC.EDU Kerberos realm
- Starting to implement PKI infrastructure
- PKI is still a moving target for HPCMP at this point
- All ARSC systems undergo regular HPCMP CSA checks and the DITSCAP/DIACAP process
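A minimal login sketch, assuming a workstation with the HPCMP Kerberos client kit installed; the username and login hostname are illustrative, not official addresses.

    kinit username@ARSC.EDU          # authenticate with your SecurID passcode/PIN when prompted
    klist                            # confirm a ticket was granted
    ssh username@midnight.arsc.edu   # kerberized login (hostname illustrative)
    scp data.tgz username@midnight.arsc.edu:   # kerberized file copy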
15 Access Policies
- Open Center Access
- Only HPCMP center to be Open Access for all systems
- National Agency Checks not required
- Nominal restrictions on Foreign Nationals
- Must apply from within the US
- Must not be in the TDOTS
- Must provide valid passport entry status
- Information Assurance Awareness training is required
16 Access Policies
- Security Policies
- www.arsc.edu/support/policy/secpolicy.html
- Dot file permissions and some contents are routinely checked by scripts (see the sketch below)
- Kerberos passphrases expire every 180 days
- Accounts are placed in an inactive status after 180 days without a login
- Please ask us if you have any questions
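A generic sketch of keeping dot files within a typical policy; the authoritative requirements are on the security policy page above.

    chmod go-w ~/.??*    # remove group/world write from dot files
    chmod 700 ~/.ssh     # keep the .ssh directory private
    ls -la ~             # review what the checking scripts will see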
17 Application Process - DoD
- HPCMP users need to use pIE and work with their S/AAA
- ARSC has a cross-realm trust with the other MSRCs, so principals from realms such as HPCMP.HPC.MIL can be used
- We are assuming that most UH researchers will be applying for Non-DoD accounts
18 Application Process - Non-DoD
- Non-DoD users and projects are handled internally by ARSC
- www.arsc.edu/support/accounts/acquire.html
- Application forms and procedures
- ARSC will issue and send the SecurID cards
- Allocations are based on the federal FY (1 Oct - 30 Sep)
- Granting of resources depends on how much of the 30% allocation remains
- Preference given to UA researchers and affiliates and/or Arctic-related science
19 Application Process - Non-DoD
- You may apply for a project if you are a qualified faculty member or researcher
- Students cannot be a Primary Investigator
- A faculty sponsor is required, but the sponsor does not need to be an actual user of the systems
- Students are then added to the project as users
- PIs are requested to provide a short annual report outlining project progress and any published results
- Allocations of time are granted to projects
- Start-up accounts have a nominal allocation
- Production projects have allocations based on need and availability
20 Application Process - Non-DoD
- Users apply for access as part of the project
- PIs will need to email approval before we add any user to a project
- ARSC will mail a SecurID card (US Express Mail) once the account has been created
- A few things are needed to activate the account
- Signed Account Agreement and SecurID receipt
- IAA training completion certificate
- Citizenship/ID verification
- See www.arsc.edu/support/accounts/acquire.html#proof_citizenship
21 ARSC Systems - Utilization
- Job usage is compiled and uploaded to a local database daily
- Allocation changes posted twice daily
- PIs are automatically notified when their project exceeds 90% of its allocation and when it runs out of allocation
- Users can check usage by invoking show_usage (see below)
- show_usage -s for all allocated systems
- More detailed reports available upon request
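For reference, the two invocations mentioned above:

    show_usage        # check usage for your project
    show_usage -s     # summary across all allocated systems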
22 ARSC Systems - Utilization

To: <the PI>
From: ARSC Accounts <hpc-accounts@arsc.edu>
Subject: ARSC midnight Project Utilization and Allocation Summary

Consolidated CPU Utilization Report

FY 2008
ARSC System: midnight
ARSC Group ID: <GROUP>
Primary Investigator: <PI Name>
Cumulative usage summary for October 1, 2007 through 15 Mar 2008.

                Foreground    Background    Total
                ----------    ----------    ----------
Allocation      150000.00
Hours Used      126432.97     2.59          126435.56
23 ARSC Systems - Queues
- Invoke "news queues" on iceberg to see current queues
- LoadLeveler is used for scheduling (a sample batch script follows the listing)
- http://www.arsc.edu/support/howtos/usingloadleveler.html
Name        MaxJobCPU     MaxProcCPU    Free    Max    Description
            d+hh:mm:ss    d+hh:mm:ss    Slots   Slots
----------  ------------  ------------  ------  -----  ------------------------------
data        00:35:00      00:35:00      14      14     12 hours, 500mb, network nodes
debug       1+08:05:00    1+08:05:00    16      32     01 hours, 4 nodes, debug
p690        21+08:05:00   21+08:05:00   64      64     08 hours, 240gb, 64 cpu
single      56+00:05:00   56+00:05:00   113     704    168 hours, 12gb, 8 cpu
bkg         85+08:05:00   85+08:05:00   100     704    08 hours, 12gb, 256 cpu
standard    170+16:05:00  170+16:05:00  113     704    16 hours, 12gb, 256 cpu
challenge   768+00:05:00  768+00:05:00  113     704    48 hours, 12gb, 384 cpu
special     unlimited     unlimited     113     736    48 hours, no limits
cobaltadm   unlimited     unlimited     3       4      cobalt license checking
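A minimal LoadLeveler batch script sketch for iceberg, assuming the standard class above. The keyword values and executable name are illustrative; current limits come from "news queues" and the usingp6x howto.

    #!/bin/ksh
    # Illustrative LoadLeveler script -- class, node counts and limits are examples only
    # @ job_name         = my_run
    # @ job_type         = parallel
    # @ class            = standard
    # @ node             = 2
    # @ tasks_per_node   = 8
    # @ wall_clock_limit = 8:00:00
    # @ output           = $(job_name).$(jobid).out
    # @ error            = $(job_name).$(jobid).err
    # @ notification     = never
    # @ queue

    ./my_mpi_program

    # Submit with:  llsubmit my_run.ll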
24 ARSC Systems - Queues
- Invoke "news queues" on Midnight to see current queues
- PBS Pro is used for scheduling (a sample batch script follows the table)
Queue            Min Procs   Max Procs   Max Walltime   Notes
---------------  ----------  ----------  -------------  ---------
standard         1           16          84:00:00       See (A)
                 17          256         16:00:00
                 257         512         12:00:00
challenge        1           16          96:00:00       See (B)
                 17          256         96:00:00       See (C)
                 257         512         12:00:00
background       1           512         12:00:00
debug            1           32          00:30:00       See (D)
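A comparable PBS Pro sketch for Midnight. The queue name matches the table above, but the select/ncpus request and the mpirun launch line are generic PBS/MPI assumptions rather than ARSC-documented settings.

    #!/bin/bash
    # Illustrative PBS Pro script -- resource request and MPI launch are generic assumptions
    #PBS -N my_run
    #PBS -q standard
    #PBS -l select=4:ncpus=4
    #PBS -l walltime=8:00:00
    #PBS -j oe

    cd $PBS_O_WORKDIR
    mpirun -np 16 ./my_mpi_program

    # Submit with:  qsub my_run.pbs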
25 ARSC Systems - Help
- Each system has a Getting Started guide
- www.arsc.edu/support/howtos/usingsun.html
- www.arsc.edu/support/howtos/usingp6x.html
- The HPC News email newsletter has many great tips and suggestions
- www.arsc.edu/support/news/HPCnews.shtml
- Help Desk consultants are quite talented and able to help with a variety of issues
26 Contact Information
- ARSC Help Desk
- Mon - Fri
- 0800 - 1700 AK
- 907-450-8602
- consult@arsc.edu
- www.arsc.edu/support/support.html
27 Questions?
Michelle Phillips (2007 Quest). Finished 4th in the 2008 Quest (11 days, 10 hrs, 21 mins). Photo by Carsten Thies, Yukonquest.com