Title: CS 240A Applied Parallel Computing
1. CS 240A: Applied Parallel Computing
- John R. Gilbert
- gilbert@cs.ucsb.edu
- http://www.cs.ucsb.edu/cs240a
- Thanks to Kathy Yelick and Jim Demmel at UCB for use of their slides.
2. Why do we need powerful computers?
3. Tunnel Vision by Experts
- "I think there is a world market for maybe five computers." (Thomas Watson, chairman of IBM, 1943)
- "There is no reason for any individual to have a computer in their home." (Ken Olson, president and founder of Digital Equipment Corporation, 1977)
- "640K of memory ought to be enough for anybody." (Bill Gates, chairman of Microsoft, 1981)
Slide source: Warfield et al.
4. Simulation: The Third Pillar of Science
- Traditional scientific and engineering paradigm:
- Do theory or paper design.
- Perform experiments or build system.
- Limitations:
- Too difficult -- build large wind tunnels.
- Too expensive -- build a throw-away passenger jet.
- Too slow -- wait for climate or galactic evolution.
- Too dangerous -- weapons, drug design, climate experiments.
- Computational science paradigm:
- Use high performance computer systems to simulate the phenomenon.
- Based on known physical laws and efficient numerical methods.
5. Some Challenging Computations
- Science
- Global climate modeling
- Astrophysical modeling
- Biology: genomics, protein folding, drug design
- Computational Chemistry
- Computational Material Sciences and Nanosciences
- Engineering
- Crash simulation
- Semiconductor design
- Earthquake and structural modeling
- Computational fluid dynamics (airplane design)
- Combustion (engine design)
- Business
- Financial and economic modeling
- Transaction processing, web services, and search engines
- Defense
- Nuclear weapons -- test by simulation
- Cryptography
6. See Mark Adams' SC2004 talk slides
- http://www.columbia.edu/~ma2325/SC2004.ppt.htm
7. Units of Measure in HPC
- High Performance Computing (HPC) units:
- Flop: floating point operation
- Flop/s: floating point operations per second
- Byte: size of data (a double-precision floating point number is 8 bytes)
- Millions, billions, trillions...
- Mega: Mflop/s = 10^6 flop/sec; Mbyte = 10^6 bytes (also 2^20 = 1,048,576)
- Giga: Gflop/s = 10^9 flop/sec; Gbyte = 10^9 bytes (also 2^30 = 1,073,741,824)
- Tera: Tflop/s = 10^12 flop/sec; Tbyte = 10^12 bytes (also 2^40 = 1,099,511,627,776)
- Peta: Pflop/s = 10^15 flop/sec; Pbyte = 10^15 bytes (also 2^50 = 1,125,899,906,842,624)
- Exa: Eflop/s = 10^18 flop/sec; Ebyte = 10^18 bytes (also 2^60 = 1,152,921,504,606,846,976)
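Where the decimal/binary gap matters, this minimal C sketch (an illustration, not from the slides) prints both meanings of each prefix and their ratio:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const char *prefix[] = {"Mega", "Giga", "Tera", "Peta", "Exa"};
    for (int i = 0; i < 5; i++) {
        double dec = pow(10.0, 6 + 3 * i);   /* SI meaning: 10^6, 10^9, ... */
        double bin = pow(2.0, 20 + 10 * i);  /* binary meaning: 2^20, 2^30, ... */
        printf("%-4s decimal %.0f  binary %.0f  ratio %.3f\n",
               prefix[i], dec, bin, bin / dec);
    }
    return 0;
}
```

The ratio grows from about 1.049 at Mega to about 1.153 at Exa, so which meaning is in use matters more at large scales.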
8. See TOP 500 List
- http://www.top500.org/lists/2005/11/basic
9. Why are powerful computers parallel?
10. Technology Trends: Microprocessor Capacity
Moore's Law: transistors/chip doubles every 1.5 years.
Gordon Moore (co-founder of Intel) predicted in 1965 that the transistor density of semiconductor chips would double roughly every 18 months. Microprocessors have become smaller, denser, and more powerful.
Slide source: Jack Dongarra
11. How fast can a serial computer be?
[Diagram: a 1 Tflop/s, 1 TB sequential machine; r = 0.3 mm]
- Consider the 1 Tflop/s sequential machine:
- data must travel some distance, r, to get from memory to CPU
- to get 1 data element per cycle, this means 10^12 times per second at the speed of light, c = 3e8 m/s
- so r < c / 10^12 = 0.3 mm
- Now put 1 TB of storage in a 0.3 mm x 0.3 mm area:
- each word occupies about 3 Angstroms squared, the size of a small atom
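A quick check of the geometry in C (a sketch, not from the slides): it reproduces r = c/10^12 and computes the square available per byte of the 1 TB, which comes out at atomic scale:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    const double c = 3e8;    /* speed of light, m/s */
    const double f = 1e12;   /* 1 Tflop/s: one data element needed per cycle */

    double r = c / f;        /* farthest the data can sit from the CPU */
    printf("r = %.1e m = %.1f mm\n", r, r * 1e3);       /* 0.3 mm */

    /* Pack 10^12 bytes into an r-by-r square. */
    double area_per_byte = (r * r) / 1e12;              /* m^2 per byte */
    double side = sqrt(area_per_byte) / 1e-10;          /* side in Angstroms */
    printf("each byte gets a square about %.0f Angstroms on a side\n", side);
    return 0;
}
```

The per-byte square is about 3 Angstroms on a side, roughly one small atom, which is the slide's point: a fast enough serial machine collapses to atomic dimensions.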
12. Scaling microprocessors
- What happens when feature size shrinks by a factor of x? (tallied in the sketch below)
- Clock rate goes up by x
- actually a little less
- Transistors per unit area go up by x^2
- Die size also tends to increase
- typically another factor of x
- Raw computing power of the chip goes up by x^4 !
- of which x^3 is devoted either to parallelism or locality
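Tallying those factors, here is a minimal sketch in C (the shrink factor x = 2 is an assumed example):

```c
#include <stdio.h>

int main(void) {
    double x = 2.0;                      /* assumed feature-size shrink factor */
    double clock = x;                    /* clock rate: up by ~x */
    double density = x * x;              /* transistors per unit area: x^2 */
    double die = x;                      /* die size: another factor of x */
    double transistors = density * die;  /* total transistors: x^3 */
    double raw_power = clock * transistors;  /* x * x^3 = x^4 */
    printf("x = %g: transistors x%g, raw power x%g\n",
           x, transistors, raw_power);   /* x8 and x16 for x = 2 */
    return 0;
}
```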
13. Automatic Parallelism in Modern Machines
- Bit-level parallelism
- within floating point operations, etc.
- Instruction-level parallelism
- multiple instructions execute per clock cycle
- Memory system parallelism
- overlap of memory operations with computation
- OS parallelism
- multiple jobs run in parallel on commodity SMPs
There are limits to all of these -- for very high performance, the user must identify, schedule, and coordinate parallel tasks.
14. Number of transistors per processor chip
[Chart: transistor counts per chip over time]
15. Number of transistors per processor chip
[Same chart, annotated with eras: Bit-Level Parallelism, Instruction-Level Parallelism, Thread-Level Parallelism?]
16. A generic parallel architecture
[Diagram: processors (P) and memory modules (M) connected by an Interconnection Network, with a Memory block below]
Where is the memory physically located?
17. Issues in parallel performance
18. Sequential performance is often the hardest issue in parallel performance!
19. Avoiding data movement: Reuse and locality
[Diagram: conventional storage hierarchy; each processor (Proc) has its own cache, L2 cache, and L3 cache above a Memory, with potential interconnects between the levels]
- Large memories are slow; fast memories are small
- Parallel processors, collectively, have a large, fast cache
- the slow accesses to remote data are what we call communication
- Algorithm should do most work on local data (see the blocking sketch below)
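As an illustration of working on local data, here is a minimal C sketch of cache blocking for matrix multiply (not from the slides; the block size BS = 64 and row-major layout are assumptions, and n is taken to be a multiple of BS):

```c
#include <stddef.h>

#define BS 64   /* tile size: pick it so three BS x BS tiles fit in cache */

/* C += A * B for n x n row-major matrices, n a multiple of BS. */
void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t jj = 0; jj < n; jj += BS)
            for (size_t kk = 0; kk < n; kk += BS)
                /* The inner loops touch only three BS x BS tiles, so each
                   element fetched from slow memory is reused ~BS times. */
                for (size_t i = ii; i < ii + BS; i++)
                    for (size_t k = kk; k < kk + BS; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BS; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

The same reuse argument applies across processors: an algorithm that keeps its working set in one processor's local memory pays for communication only at tile boundaries.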
20. Balancing the Load
- Load imbalance is the time that some processors in the system are idle, due to:
- insufficient parallelism (during that phase)
- unequal size tasks
- Examples of the latter:
- adapting to interesting parts of a domain
- tree-structured computations
- fundamentally unstructured problems
- Algorithm needs to balance load
21. Finding Enough Parallelism: Amdahl's Law
- Suppose only part of an application seems parallel
- Amdahl's law:
- Let s be the fraction of work done sequentially, so (1-s) is the fraction parallelizable
- Let P = number of processors

Speedup(P) = Time(1) / Time(P)
          <= 1 / (s + (1-s)/P)
          <= 1/s

- Even if the parallel part speeds up perfectly, the sequential part limits overall performance (evaluated in the sketch below).
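A minimal C evaluation of the bound (an illustration, not from the slides; the sequential fraction s = 0.05 and the processor counts are assumed):

```c
#include <stdio.h>

int main(void) {
    double s = 0.05;                  /* assumed: 5% of the work is sequential */
    int procs[] = {1, 10, 100, 1000};

    for (int i = 0; i < 4; i++) {
        int P = procs[i];
        double bound = 1.0 / (s + (1.0 - s) / P);   /* Amdahl's bound */
        printf("P = %4d: speedup <= %.2f\n", P, bound);
    }
    printf("as P -> infinity: speedup <= 1/s = %.0f\n", 1.0 / s);
    return 0;
}
```

Even 1000 processors yield at most a 19.6x speedup here; the 1/s = 20 ceiling is set entirely by the sequential 5%.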
22. Issue: Measuring parallel performance in theory
- Sequential complexity measures
- Execution time
- Memory
- Asymptotically, as a function of problem size n
- Parallel complexity measures
- Same as above, plus some combination of
- Number of processors
- Amount of communication
- speedup = time(1 processor) / time(p processors)
- scaled speedup: problem size grows with the number of processors
- and so on
23. Issue: Measuring parallel performance in practice
- Tools (not as good as you'd like)
- Reproducibility (harder than you think)
- Environment issues (who else is on the machine?)
- Performance is the goal of parallel computing
- (It will also be a significant part of your homework grades)
- But it's really hard to define and measure!
24. Parallel programming languages
25. Parallel programming languages
- Many have been invented; there is much less consensus on "best" languages than in the sequential world
- We could have a whole course on them; we'll look at a few just for fun (including our research project, Matlab*P)
- Main languages you'll use in the course:
- C with MPI (alternative: Fortran with MPI)
- UPC, Unified Parallel C (alternative: CAF, Co-Array Fortran)
- Lots more on languages Wednesday (Viral)
26. Usability and Productivity
- See Horst Simon's talk
- See the HPCS program
- Classroom experiment: see the Markov model slides
27. Course bureaucracy
- Read the course web page: http://www.cs.ucsb.edu/cs240a/homepage.html
- Google discussion group
- Account signup:
- CS Department Cluster
- DataStar, San Diego Supercomputer Center
- Cray X1, AHPCRC Minneapolis