Title: The Changing Business Case for Supercomputing: An Industrial Perspective
- Dr. Kenneth W. Neves
- Senior Technical Fellow
- Manager, Computer Science
- Seattle, WA
Topics
- Indicators of market health and viability of supercomputing
  - 1970s
  - late 1980s to early 1990s
  - today
- Boeing high performance computing challenges
  - Production computing
  - Research computing
  - Enterprise-wide computing
  - Product visualization
- Conclusions: common research issues
  - technical
  - system
Key Factors to Monitor
- Market for high performance computers
- Applications - the need
- Computer Power
- Computer Architecture
Concepts of Key Factors
- Big Market - sustained by commercial sales, not just research
- Pacing - the next-generation applications are fundamental to business success
- 100X - supercomputers offer two orders of magnitude over the next best alternative
- Very Novel - to achieve performance, the architecture requires large modifications of existing software
[Chart: the key factors Market, Need, SC Power, and Architecture map to Big Market, Pacing, 100X, and Very Novel]
1970s
- The market was new
- The requirements were scientific and led directly to improved products, research, and understanding
- The performance over the next most powerful market-based machines was enormous
- Required vector computing understanding, yet most applications had long loops to exploit
  - oil, aero, structures, weather
[Chart: all four factors present - Big Market, Pacing, 100X, Very Novel]
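The "long loops" point above can be sketched in code. The example below is my own illustration, not from the original talk: it contrasts a loop with independent iterations - the kind vector hardware exploited - with a dependence-carrying loop that resists vectorization.

```python
# Illustrative sketch (not from the talk): why "long loops" mattered.

# Independent iterations: every element of the result can be computed at
# once, which is exactly what 1970s vector hardware (and modern SIMD
# compilers) exploit.
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

# A loop-carried dependence: each step needs the previous one, so the
# loop cannot be vectorized.
def prefix_sum(x):
    out, acc = [], 0.0
    for xi in x:
        acc += xi
        out.append(acc)
    return out

print(saxpy(2.0, [1.0, 2.0], [3.0, 4.0]))  # prints [5.0, 8.0]
```

The oil, aero, structures, and weather codes named above were largely dominated by saxpy-like kernels, which is why they mapped so well onto vector machines.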
Late 1980s to Early 1990s
- Market split into vector-parallel and highly parallel
- Vector-parallel was well understood, with a base of applications
- The performance of vector machines relative to other alternatives began to wane
  - micros turned new generations every 18 months
  - custom hardware lost its edge
- The new breed of parallel computers lacked a software base and were very novel and hard to use
[Chart: vector vs. parallel machines scored against Big Market, Pacing, 100X, Very Novel]
Now: Facts of Life
- Today, SC companies have all but died or been absorbed into a more commodity market
- Micros dominate
- Cutting-edge computational research MUST resort to highly parallel machines (separates the men from the boys)
- The cost of novel architectures, in both hardware and software, has thinned the market
- Many supercomputer users of old are workstation users today
[Chart: parallel computing today - "Like 1970"]
Boeing Applications
- CAD/CAM (billion-dollar investment)
- Product Data Management and Manufacturing Resource Control (multi-billion-dollar investment)
- Scientific Computing (important, but a multi-million-dollar investment) that tends to be cyclic
- Supercomputing problems, e.g.:
  - CFD: highly separated flows
  - multi-disciplinary optimization
  - constrained design
  - electromagnetics
High-end Computing Activity
- Production computing
- Scientific research computing
- Enterprise-wide computing
- Product visualization
Production Computing
- Requires a repeatable, controllable process
- Can be big problems (CFD for cruise wing design, structural analysis)
- Done on more ordinary architectures (Cray T-90)
- Migration from central computing
  - as workstation and server capability improved, many central users migrated to more affordable environments
  - department-level supercomputers
  - application-dedicated platforms (can be novel architectures, but not shared with many users)
  - secret computing
Scientific Research Computing
- Grand challenge problems are often multidisciplinary and can involve optimization
- Often offer an opportunity for macro-level parallelism
- Example: airfoil constrained optimization
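Macro-level parallelism of this kind can be sketched as follows. The objective function, parameter values, and function names below are hypothetical stand-ins of my own, not the talk's actual optimization problem; in practice each evaluation would be a full CFD solver run.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an expensive CFD evaluation of one candidate
# airfoil design (pretend a thickness of 0.12 is the sweet spot).
def evaluate_design(thickness):
    return (thickness - 0.12) ** 2 + 0.1

def best_design(candidates):
    # Macro-level parallelism: each candidate's analysis is independent,
    # so whole evaluations run concurrently rather than parallelizing
    # inside a single solver.
    with ThreadPoolExecutor() as pool:
        scores = list(pool.map(evaluate_design, candidates))
    return min(zip(scores, candidates))

score, thickness = best_design([0.08, 0.10, 0.12, 0.14])
print(thickness)  # prints 0.12
```

The design choice here is the slide's point: the parallelism lives at the level of independent analyses, which needs no changes to the underlying solver code.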
Unconstrained

With Manufacturing Constraints
Factory Modeling
Models physics of metal cutting
Enterprise-wide Computing
- Distributed data
  - 700 terabytes
  - 20 business units
  - secure, reliable, coherent
- Parallel SMP servers
- Oracle as middleware for 4 major applications
- Re-engineering of 315 legacy applications
- 50,000 users worldwide (not including subcontractors)
Enterprise System Complexity
[Network diagram - components include: UFS file server; DCE security server; Sequent clusters; master NIS; scheduling server; BNN token ring; data center FDDI ring; vital production systems; NFS cluster (ServiceGuard); utility/method server clusters (ServiceGuard); routers; NT resource server (S3, print); NT WINS and MAD; NIS; campus server room FDDI ring; DHCP server; application servers (BaaN, Cimlinc, ShopView Capp, Linkage, Web); STAC servers; DNS/NFS cluster (ServiceGuard); switches; printers; workstations]
Product Visualization
- Machining from CAD
- Generative design
- Neural-network design retrieval
- System complexity rivals enterprise-wide computing
Research Issues
- Goal: the network is the computer
- Power Grid (NASA term)
  - computing resources are managed like a power system
  - data movement is minimized, access time is minimized
  - fail-safe
  - network queuing, agent-assisted
- Threads maintained
- Synchronization of processes managed by middleware rather than by individuals
- Data authentication and time stamping for coherency
- Parallel database performance (an unsolved problem)
- Scientific computing approach, but applied to new application areas of the enterprise
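The time-stamping-for-coherency idea can be sketched minimally. The record format and helper names below are my own assumptions for illustration, not a description of any actual middleware:

```python
import hashlib
import time

# Hypothetical record wrapper: a SHA-256 digest authenticates the payload,
# and a timestamp lets replicas agree on which copy is current.
def stamp(payload, ts=None):
    return {
        "payload": payload,
        "ts": time.time() if ts is None else ts,
        "digest": hashlib.sha256(payload).hexdigest(),
    }

def is_authentic(record):
    return hashlib.sha256(record["payload"]).hexdigest() == record["digest"]

def freshest(replicas):
    # Coherency rule enforced by middleware, not by individual users:
    # among authentic copies, the newest timestamp wins.
    return max((r for r in replicas if is_authentic(r)), key=lambda r: r["ts"])

old = stamp(b"wing rev A", ts=1.0)
new = stamp(b"wing rev B", ts=2.0)
forged = dict(stamp(b"wing rev C", ts=3.0), payload=b"tampered")
print(freshest([old, new, forged])["payload"])  # prints b'wing rev B'
```

The point of pushing this into middleware is the one the slide makes: individual engineers never have to reason about which replica of a design record is current or genuine.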
Old Style Performance Enhancement
[Chart: CPU time of the analysis application]

New Style Performance Enhancement
[Chart: CPU time]
What Questions to Ask
NASA Power Grid Concept
System Performance Pyramid
[Diagram: performance pyramid, including storage systems]
Conclusions
- Older performance-improvement techniques are fundamental and necessary, but not sufficient
- A new system-level attack on performance and scalability is needed
  - need to address response time
  - system throughput (of the entire process)
- Looking at performance at the system level is similar to enterprise-wide computing analysis
- Scientists, hardware vendors from the SC community, computer scientists, and enterprise-wide system developers need to collaborate
- The traditional supercomputing community needs to diversify its interests!