Title: Compute Resource Systems
Slide 1: Compute Resource Systems
- Wayne Pfeiffer
- Deputy Director
- NPACI SDSC
- NPACI All-Hands Meeting
- January 28, 1999
http://www.npaci.edu/Resources/index.html
Slide 2: NPACI's balanced complement of high-end resources for FY99
- Compute resources (LES and 4 partners)
  - Teraflops system at LES, the first for academia
  - Complementary systems at partner sites
- Data resources (LES and 10 partners)
  - >100 TB mass store at LES, the largest based on HPSS
  - >100 GB data sets at partner sites
- Network resources (LES and all partners)
  - >100 Mbps access to compute and data resources
  - Communications backbone for metacomputing
Slide 3: Compute resources are at 5 sites
- U Michigan
- UC Berkeley
- Caltech
- SDSC
- U Texas
Slide 4: Complementary roles of 5 compute resource sites
- Leading-edge site (SDSC)
  - Very high-performance resources, including teraflops system
- Mid-range sites (U Texas, U Michigan)
  - Smaller systems compatible with LES
  - Support for apps with limited scalability, large-memory jobs, apps development, OS testing, education
- Alternate-architecture research systems (Caltech, UC Berkeley, SDSC)
  - Support for leading-edge apps, thrusts, evaluation
Slide 5: Evolution of allocable compute servers
Quarters are by fiscal year.
Slide 6: Nationally allocable compute resources for NPACI in FY98 and FY99
FY99 changes are in bold.
Slide 7: Research compute resources at UCSD, Caltech, and UCB in FY98 and FY99
FY99 changes are in bold.
Slide 8: First Tera MTA is at SDSC
Slide 9: IBM selected as first NPACI teraflops vendor
- Briefings (with partners) from 5 vendors
  - Compaq/Digital, HP, IBM, SGI, Sun
- Selection endorsed by Executive Committee
- Strong commitment to the high end by IBM
  - Technology being developed through ASCI
  - SDSC to have the largest system, with next-generation nodes
- Growing partnership with IBM
Slide 10: SP teraflops system coming this year
- Cluster architecture
  - >1,000 Power3 processors at >200 MHz
  - 8-processor, next-generation SMP nodes
  - >500 GB of memory initially, with upgrade later
  - Current-generation switch initially, with upgrade later
- Staged installation
  - 1/4 teraflops this summer
  - Full teraflops this fall
  - Memory and switch upgrades next year
Slide 11: Reference plan for multi-teraflops computing
Slide 12: More aggressive plan for multi-teraflops computing
Slide 13: Features of more aggressive plan for multi-teraflops computing
- More rapid deployment of multi-teraflops system
  - Current plan: 1 Tflops in FY99 → 4 Tflops in FY02
  - Rapid plan: 1 Tf in FY99 → 3.7 Tf in FY00 → 13 Tf in FY01
- Fits within existing SDSC building
- Includes balanced mass storage system
  - Current plan: 1 PB in FY02; rapid plan: 12 PB in FY02
- Re-use of equipment
- Modest investment in alternate architecture
- Additional cost at leading-edge site of about $60M per year
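The two deployment schedules imply quite different growth rates, which can be computed directly from the slide's figures; the sketch below uses only those numbers and no other assumptions.

```python
# Year-over-year growth implied by the two plans (figures from the slide only).
current_plan = {1999: 1.0, 2002: 4.0}            # Tflops
rapid_plan = {1999: 1.0, 2000: 3.7, 2001: 13.0}  # Tflops

# Current plan: 4x over three years, i.e. a compound annual rate.
current_rate = (current_plan[2002] / current_plan[1999]) ** (1 / 3)

# Rapid plan: successive annual factors.
rapid_rates = [rapid_plan[2000] / rapid_plan[1999],
               rapid_plan[2001] / rapid_plan[2000]]

print(f"current: {current_rate:.2f}x/year")                  # ~1.59x/year
print(f"rapid: {[round(r, 2) for r in rapid_rates]}x/year")  # [3.7, 3.51]
```

So the aggressive plan sustains roughly 3.5x annual growth versus about 1.6x under the current plan, which is the gap the "more rapid deployment" bullet is quantifying.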