Compute Resource Systems

Transcript and Presenter's Notes

1
Compute Resource Systems
  • Wayne Pfeiffer
  • Deputy Director
  • NPACI / SDSC
  • NPACI All-Hands Meeting
  • January 28, 1999

http://www.npaci.edu/Resources/index.html
2
NPACI's balanced complement of high-end resources for FY99
  • Compute resources (LES + 4 partners)
  • Teraflops system at LES: first for academia
  • Complementary systems at partner sites
  • Data resources (LES + 10 partners)
  • >100 TB mass store at LES: largest based on HPSS
  • >100 GB data sets at partner sites
  • Network resources (LES + all partners)
  • >100 Mbps access to compute & data resources
  • Communications backbone for metacomputing

3
Compute resources are at 5 sites
U Michigan
UC Berkeley
Caltech
SDSC
U Texas
4
Complementary roles of 5 compute resource sites
  • Leading-edge site (SDSC)
  • Very high-performance resources, including
    teraflops system
  • Mid-range sites (U Texas & U Michigan)
  • Smaller systems compatible with LES
  • Support for apps with limited scalability,
    large-memory jobs, apps development, OS testing,
    education
  • Alternate architecture research systems
    (Caltech, UC Berkeley, SDSC)
  • Support for leading-edge apps, thrusts,
    evaluation

5
Evolution of allocable compute servers
Quarters are by fiscal year.
6
Nationally allocable compute resources for NPACI in FY98 & FY99
FY99 changes are in bold.
7
Research compute resources at UCSD, Caltech, and UCB in FY98 & FY99
FY99 changes are in bold.
8
First Tera MTA is at SDSC
9
IBM selected as first NPACI teraflops vendor
  • Briefings (with partners) from 5 vendors
  • Compaq/Digital, HP, IBM, SGI, Sun
  • Selection endorsed by Executive Committee
  • Strong commitment to high end by IBM
  • Technology being developed through ASCI
  • SDSC to have largest system with next-generation
    nodes
  • Growing partnership with IBM

10
SP teraflops system coming this year
  • Cluster architecture
  • >1,000 Power3 processors at >200 MHz
  • 8-processor, next-generation SMP nodes
  • >500 GB of memory initially, with upgrade later
  • Current-generation switch initially, with upgrade
    later
  • Staged installation
  • 1/4 teraflops this summer
  • Full teraflops this fall
  • Memory & switch upgrade next year
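
The teraflops claim above can be sanity-checked with a back-of-the-envelope peak-performance estimate. This is a sketch, not the vendor's sizing: the 4 flops/cycle figure assumes two fused multiply-add units per Power3 processor, and the processor count and clock rate are taken as the slide's lower bounds rather than the delivered configuration.

```python
# Rough peak-flops estimate for the SP system described above.
# Assumption (not from the slide): each Power3 issues 2 fused
# multiply-adds per cycle, i.e. 4 floating-point ops per cycle.
processors = 1000        # slide says >1,000
clock_hz = 200e6         # slide says >200 MHz
flops_per_cycle = 4      # assumed: 2 FMA units x 2 ops each

peak_tflops = processors * clock_hz * flops_per_cycle / 1e12
print(f"Peak: {peak_tflops:.2f} Tflops")
```

With these lower-bound inputs the estimate lands at 0.8 Tflops, consistent with the slide's promise of a "full teraflops" once the processor count and clock exceed the stated minimums.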

11
Reference plan for multi-teraflops computing
12
More aggressive planfor multi-teraflops computing
13
Features of more aggressive plan for multi-teraflops computing
  • More rapid deployment of multi-teraflops system
  • Current: 1 Tflops in FY99 -> 4 Tflops in FY02
  • Rapid: 1 Tf in FY99 -> 3.7 Tf in FY00 -> 13 Tf in
    FY01
  • Fits within existing SDSC building
  • Includes balanced mass storage system
  • Current: 1 PB in FY02 -> Rapid: 12 PB in FY02
  • Re-use of equipment
  • Modest investment in alternate architecture
  • Additional cost at leading-edge site of about
    $60M per year
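
The difference between the two plans is easiest to see as annual growth factors computed from the slide's numbers (the three-year average for the current plan is an interpolation, since the slide gives only its FY99 and FY02 endpoints):

```python
# Growth implied by the two multi-teraflops plans (Tflops figures
# are taken from the slide above).
current = {1999: 1.0, 2002: 4.0}
rapid = {1999: 1.0, 2000: 3.7, 2001: 13.0}

# Current plan: average annual growth over the 3-year span.
current_rate = (current[2002] / current[1999]) ** (1 / 3)
# Rapid plan: year-over-year factors.
rapid_00 = rapid[2000] / rapid[1999]
rapid_01 = rapid[2001] / rapid[2000]

print(f"Current plan: ~{current_rate:.2f}x per year")
print(f"Rapid plan: {rapid_00:.1f}x, then {rapid_01:.1f}x per year")
```

The aggressive plan thus grows capacity at roughly 3.5x per year versus about 1.6x per year for the reference plan, which is what makes the 12 PB (versus 1 PB) mass store the "balanced" complement in FY02.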