Title: Parallel Programming Concepts
Project presentation for Software Development
Presented by Vasil Lalov
Course CS564, Fall 2007
Bowling Green State University, Bowling Green, Ohio
Presentation Overview
- What is Parallel Programming?
- Why do we need it?
- Development Frameworks
- Code Examples
- Conclusion
What is parallel programming?
Definition: A parallel program is one that runs on multiple processors (CPUs) simultaneously.
Details:
- Processors can be located on the same host (machine)
- Processors can be located on different hosts
- Processors can have different clock speeds, caches, and architectures
Why do we need Parallel Programming?
- We, as humanity, are generating ever more complex problems:
  - Weather Forecasting
  - Genome Research
  - Multimedia Processing
  - Computer Simulations and Modelling
- We want our results (program output) in real time, or close to real time:
  - Huge data sets take a long time to process
  - Complex algorithms are very compute intensive

Examples:
1. We want our hurricane forecasts to be issued well before the hurricane actually reaches land.
2. We want to process incoming program data quickly so that there are no data storage problems.
3. We want to see the result of a car crash computer simulation each time a new component is added to the car structure.
Example of a typical computer program
[Diagram: a single application process takes Data as input and produces Results.]
An example of a primitive parallel program
[Diagram: a master process splits the Data between multiple processors, then collects their output and combines it into Results.]
Execution Environment Requirements
1. The output MUST be the same as if produced by a sequential program
2. Processes often need to communicate with each other
3. Processes need synchronization
4. Data must be distributed safely across CPUs
5. Output from all CPUs must somehow be combined
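Requirements 2, 3, and 5 can be illustrated with a small threaded C sketch (a hypothetical example, not part of the original slides): two POSIX threads each sum half of an array, and a mutex protects the shared total so the combined result matches the sequential sum, satisfying requirement 1.

    #include <pthread.h>
    #include <stdio.h>

    #define N 1000

    static int data[N];
    static long total = 0;                       /* shared result */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    struct range { int start, end; };

    /* Each worker sums its own chunk, then adds it to the shared
     * total under a mutex (requirement 3: synchronization). */
    static void *worker(void *arg)
    {
        struct range *r = arg;
        long local = 0;                          /* private sub-result */
        for (int i = r->start; i < r->end; i++)
            local += data[i];
        pthread_mutex_lock(&lock);
        total += local;                          /* requirement 5: combine */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = i;

        pthread_t t1, t2;
        struct range r1 = { 0, N / 2 }, r2 = { N / 2, N };
        pthread_create(&t1, NULL, worker, &r1);
        pthread_create(&t2, NULL, worker, &r2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* Requirement 1: same output as the sequential sum 0+1+...+999 */
        printf("total = %ld\n", total);
        return 0;
    }

Compile with gcc -pthread. Without the mutex, both threads could update total at the same time and occasionally lose an update, violating requirement 1.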
Parallel Programming Frameworks
The development of parallel programming frameworks lags significantly behind that of regular programming frameworks. For years, the best way to speed up software was to build faster hardware. Current parallel programming frameworks require much more effort from the programmer, and novice programmers generally produce low-quality parallel code that provides almost no speedup over sequential code.
However:
Current CPU manufacturers (Intel, AMD) are changing their strategies for increasing hardware speed as they push the limits of nanotechnology. Dual-core and multi-core CPUs are the obvious way to speed up hardware, since operating frequency cannot be increased indefinitely. Major software vendors (Microsoft, Apple, Linux vendors) are working hard on software that takes advantage of multi-core CPUs capable of running parallel programs.
Also:
Many research organizations and educational institutions are developing needs for high-throughput computing, and businesses are demanding higher productivity from their employees and IT infrastructure overall.

All this leads to:
- A high demand for quality parallel programming frameworks
- A very high demand for skilled programmers who can take advantage of those frameworks
The most popular parallel programming frameworks are:
1. Message Passing Interface (MPI)
2. OpenMP
MPI Programming Framework
- Designed for running only one PROCESS per CPU
- CPUs can be located on different hosts (machines)
- Parallelism is accomplished via special function calls
- Interprocess communication is slower compared to OpenMP
- Very, very scalable!
- Harder to learn and use
MPI Programming Examples

    #include <mpi.h>                       /* <- include file for MPI */

    int main(int argc, char *argv[])
    {
        MPI_Init(&argc, &argv);            /* <- initialization of the MPI environment */
        ...
        MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
        if (myrank == 0)
            master();                      /* <- master routine */
        else
            slave();                       /* <- slave routine */
        ...
        MPI_Finalize();                    /* <- finalizing the MPI environment */
    }
- In the MPI framework, there is usually a MASTER process responsible for:
  - Preparing the data for processing
  - Breaking the job up into smaller pieces
  - Distributing work between the worker processes
  - Enforcing process synchronization
  - Collecting all sub-results from the worker processes
  - Combining all sub-results into the final result and providing output to the user
- In the MPI framework, there is usually a WORKER process responsible for:
  - Obtaining data from the MASTER process
  - Performing the necessary computations on its allocated data portion
  - Communicating with other WORKER processes if needed
  - Returning the processed data back to the MASTER process
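The master/worker division of labor above can be sketched without an MPI installation. The following is a sequential, hypothetical sketch of the block-decomposition arithmetic a master might use to carve N items into per-worker chunks; in a real MPI program the marked steps would be MPI_Send/MPI_Recv calls rather than a loop in one process.

    #include <stdio.h>

    #define N 10          /* total number of data items */
    #define WORKERS 3     /* number of worker processes */

    /* Compute worker w's chunk [start, end): the master uses this
     * arithmetic to split N items as evenly as possible. */
    static void chunk(int w, int *start, int *end)
    {
        int base = N / WORKERS, extra = N % WORKERS;
        *start = w * base + (w < extra ? w : extra);
        *end   = *start + base + (w < extra ? 1 : 0);
    }

    int main(void)
    {
        int data[N], total = 0;
        for (int i = 0; i < N; i++)
            data[i] = i + 1;                   /* master prepares the data */

        for (int w = 0; w < WORKERS; w++) {    /* one iteration per worker */
            int start, end, sub = 0;
            chunk(w, &start, &end);            /* master: MPI_Send(chunk)  */
            for (int i = start; i < end; i++)  /* worker: compute on chunk */
                sub += data[i];
            total += sub;                      /* master: MPI_Recv + combine */
            printf("worker %d handled [%d,%d) sub-result %d\n",
                   w, start, end, sub);
        }
        printf("total = %d\n", total);         /* master outputs final result */
        return 0;
    }

With N = 10 and 3 workers, the chunks are [0,4), [4,7), and [7,10), and the combined total of 1 + 2 + ... + 10 is 55, the same as a sequential sum.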
OpenMP Programming Framework
- Designed for running only one THREAD per CPU core
- CPUs are located on the same host (even on the same die)
- Parallelism is accomplished via compiler directives
- Communication between threads is much faster compared to MPI
- Poor scalability
- Easier to learn and use
OpenMP Programming Examples
OpenMP directives are contained in pragma statements. The OpenMP pragma statements have the format

    #pragma omp directive_name ...

where omp is an OpenMP keyword.
    #pragma omp parallel private(x, num_threads)
    {
        x = omp_get_thread_num();
        num_threads = omp_get_num_threads();
        a[x] = 10 * num_threads;
    }

omp_get_num_threads() returns the number of threads currently being used in the parallel directive. omp_get_thread_num() returns the thread number (an integer from 0 to omp_get_num_threads() - 1, where thread 0 is the master thread).
    #pragma omp for
    for ( ... ) {
        /* some code */
    }

This causes the for loop to be divided into parts, and these parts are shared among the threads in the team. The for loop must be of a simple form.
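A minimal, self-contained sketch of this directive (a hypothetical example, not from the original slides): the loop sums an array, and the reduction(+:sum) clause gives each thread a private copy of sum that OpenMP combines when the loop ends. Compile with gcc -fopenmp; without that flag the pragma is simply ignored and the program produces the same result sequentially.

    #include <stdio.h>

    #define N 1000

    int main(void)
    {
        int a[N];
        long sum = 0;
        for (int i = 0; i < N; i++)
            a[i] = i + 1;

        /* Divide the loop iterations among the threads in the team;
         * reduction(+:sum) combines each thread's private partial sum. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < N; i++)
            sum += a[i];

        printf("sum = %ld\n", sum);   /* 1 + 2 + ... + 1000 = 500500 */
        return 0;
    }

Note that the output is the same with any number of threads, which is exactly the first execution environment requirement.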
Conclusions
- Parallel programming is a relatively young field in computer science
- A lot of research and development is currently under way in this field
- Skilled parallel programmers are hard to come by, and demand will explode in the next decade
- We are currently observing a major shift in computer architectures as we reach hardware limits
Thanks
Thank you for your time! Questions?