Title: CSE8380 Parallel and Distributed Processing Presentation
1 CSE8380 Parallel and Distributed Processing Presentation
- Hong Yue
- Department of Computer Science & Engineering
- Southern Methodist University
 
2 Parallel Processing Multianalysis: Comparing Parallel Processing with Sequential Processing
3 Why did I select this topic?
4 Outline
- Definition
- Characteristics of Parallel Processing and Sequential Processing
- Implementation of Parallel Processing and Sequential Processing
- Performance of Parallel Processing and Sequential Processing
- Parallel Processing Evaluation
- Major Applications of Parallel Processing
 
5 Definition
- Parallel Processing Definition
  - Parallel processing refers to the simultaneous use of multiple processors to execute the same task in order to obtain faster results. These processors either communicate with each other to solve a problem, or work completely independently under the control of another processor, which divides the problem into a number of parts, distributes them to the other processors, and collects the results from them.
 
6 Definition .2
- Sequential Processing Definition
  - Sequential processing refers to a computer architecture in which a single processor carries out a single task through a series of operations performed in sequence. It is also called serial processing.
7 Characteristics of Parallel Processing and Sequential Processing
- Characteristics of Parallel Processing
  - Each processor can perform tasks concurrently.
  - Tasks may need to be synchronized.
  - Processors usually share resources, such as data, disks, and other devices.
8 Characteristics of Parallel Processing and Sequential Processing .2
- Characteristics of Sequential Processing
  - Only a single processor performs the task.
  - The single processor performs one task at a time.
  - The task is executed in sequence.
 
9 Implementation of Parallel Processing and Sequential Processing
- Executing a single task
  - In sequential processing, the task is executed as a single large task.
  - In parallel processing, the task is divided into multiple smaller tasks, and each component task is executed on a separate processor.
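The divide-and-distribute idea above can be sketched with Python's standard multiprocessing module. This is an illustrative sketch, not from the slides: the example task (summing a range of numbers) and the chunking scheme are invented for demonstration.

```python
# Illustrative sketch (not from the slides): divide one large task, summing a
# range of numbers, into smaller chunks and run each chunk in its own process.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)  # last chunk absorbs any remainder
    with Pool(workers) as pool:      # each chunk runs in a separate process
        return sum(pool.map(partial_sum, chunks))
```

Each worker process computes one component task independently; the controlling process divides the problem, distributes the parts, and collects the results, mirroring the description on the Definition slide.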
10 Implementation of Parallel Processing and Sequential Processing .2
11 Implementation of Parallel Processing and Sequential Processing .3
12 Implementation of Parallel Processing and Sequential Processing .4
- Executing multiple independent tasks
  - In sequential processing, independent tasks compete for a single resource. Only task 1 runs without waiting; task 2 must wait until task 1 has completed, task 3 must wait until tasks 1 and 2 have completed, and so on.
13 Implementation of Parallel Processing and Sequential Processing .5
- Executing multiple independent tasks
  - By contrast, in parallel processing, for example on a parallel server on a symmetric multiprocessor, more CPU power is assigned to the tasks. Each independent task executes immediately on its own processor; no wait time is involved.
14 Implementation of Parallel Processing and Sequential Processing .6
15 Implementation of Parallel Processing and Sequential Processing .7
16 Performance of Parallel Processing and Sequential Processing
- Sequential Processing Performance
  - Takes a long time to execute large tasks.
  - Can't handle very large tasks.
  - Can't handle large loads well.
  - Returns are diminishing.
  - It becomes increasingly expensive to make a single processor faster.
 
17 Performance of Parallel Processing and Sequential Processing .2
- Solution
  - Use parallel processing: use many relatively fast, cheap processors in parallel.
18 Performance of Parallel Processing and Sequential Processing .3
- Parallel Processing Performance
  - Cheaper, in terms of price and performance.
  - Faster than equivalently expensive uniprocessor machines.
  - Scalable: the performance of a particular program may be improved by executing it on a larger machine.
 
19 Performance of Parallel Processing and Sequential Processing .4
- Parallel Processing Performance
  - Reliable: in theory, if processors fail we can simply use others.
  - Can handle bigger problems.
  - Processors can communicate with each other readily, which is important in calculations.
20 Parallel Processing Evaluation
- Several ways to evaluate parallel processing performance:
  - Scale-up
  - Speedup
  - Efficiency
  - Overall solution time
  - Price/performance
21 Parallel Processing Evaluation .2
- Scale-up
  - Scale-up, or enhanced throughput, refers to the ability of a system n times larger to perform an n-times-larger job in the same time period as the original system. With added hardware, a formula for scale-up holds the time constant and measures the increased size of the job that can be done.
22 Parallel Processing Evaluation .3
23 Parallel Processing Evaluation .4
- Scale-up measurement formula:
  - Scale-up = (amount of work done by the parallel system) / (amount of work done by the original system), with both measured over the same elapsed time.
24 Parallel Processing Evaluation .5
- For example, if the uniprocessor system can process 100 transactions in a given amount of time, and the parallel system can process 200 transactions in the same amount of time, then the value of scale-up is 200/100 = 2.
- A value of 2 indicates the ideal of linear scale-up: twice as much hardware can process twice the data volume in the same amount of time.
25 Parallel Processing Evaluation .6
- Speedup
  - Speedup, or improved response time, is defined as the time it takes a program to execute sequentially (with one processor) divided by the time it takes to execute in parallel (with many processors). It can be achieved in two ways: breaking up a large task into many small fragments, and reducing wait time.
26 Parallel Processing Evaluation .7
27 Parallel Processing Evaluation .8
- Speedup measurement formula:
  - Speedup = (elapsed time on one processor) / (elapsed time on multiple processors).
28 Parallel Processing Evaluation .9
- For example, if the uniprocessor took 40 seconds to perform a task, and the parallel system with two processors took 20 seconds, then the value of speedup is 40 / 20 = 2.
- A value of 2 indicates the ideal of linear speedup: twice as much hardware can perform the same task in half the time.
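The two metrics defined on the preceding slides can be captured in a couple of lines. This is a minimal sketch; the function names are invented, and the numbers in the usage note are the slides' own worked examples.

```python
# Minimal sketch of the scale-up and speedup metrics defined above.
def speedup(t_sequential, t_parallel):
    """Elapsed time on one processor divided by elapsed time in parallel."""
    return t_sequential / t_parallel

def scale_up(parallel_volume, original_volume):
    """Work done by the scaled-up system divided by work done by the
    original system, over the same elapsed time."""
    return parallel_volume / original_volume
```

With the slides' figures, speedup(40, 20) and scale_up(200, 100) both give 2.0, the ideal linear values.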
29 Parallel Processing Evaluation .10
Table 1: Scale-up and Speedup for Different Types of Workload
30 Parallel Processing Evaluation .11
Figure 7: Linear and actual speedup
31 Parallel Processing Evaluation .12
- Amdahl's Law
  - Amdahl's Law governs the speedup of using parallel processors on a problem, versus using only one sequential processor. It gives a maximum bound on speedup from the nature of the algorithm.
32 Parallel Processing Evaluation .13
- Speedup ≤ 1 / ((1 - p) + p / n), where p is the fraction of the program that can be parallelized and n is the number of processors.
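Amdahl's bound can be evaluated directly. A small sketch, using the standard symbols p (parallelizable fraction) and n (processor count):

```python
# Amdahl's law: maximum speedup on n processors when a fraction p of the
# program can be parallelized; the serial fraction (1 - p) limits the gain.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)
```

A fully parallel program (p = 1.0) on 4 processors achieves the linear speedup of 4, while a half-serial program (p = 0.5) can never exceed a speedup of 2 no matter how many processors are added.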
33 Parallel Processing Evaluation .14
Figure 8: Example speedup, Amdahl & Gustafson
34 Parallel Processing Evaluation .15
- Gustafson's Law
  - If the size of a problem is scaled up as the number of processors increases, speedup very close to the ideal speedup is possible.
  - That is, a problem size is virtually never independent of the number of processors.
 
35 Parallel Processing Evaluation .16
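Gustafson's scaled speedup can be sketched the same way. Here s is the serial fraction of the (scaled) work and n the number of processors, the standard symbols rather than the slides' notation:

```python
# Gustafson's law: scaled speedup S(n) = n + (1 - n) * s when the problem
# size grows with the processor count n and a fraction s of the work is serial.
def gustafson_speedup(s, n):
    return n + (1 - n) * s
```

With no serial work (s = 0) the scaled speedup equals the ideal n, and unlike Amdahl's fixed-size bound it keeps growing with n for any s < 1.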
36 Parallel Processing Evaluation .17
- Efficiency
  - The relative efficiency, speedup divided by the number of processors, can be a useful measure of what percentage of a processor's time is being spent in useful computation.
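A minimal sketch of relative efficiency, computed from the same timing data as speedup (the function name and the second example's numbers are invented for illustration):

```python
# Relative efficiency: speedup divided by the number of processors, i.e. the
# fraction of each processor's time spent in useful computation.
def efficiency(t_sequential, t_parallel, n_processors):
    return (t_sequential / t_parallel) / n_processors
```

The slides' ideal case, 40 seconds reduced to 20 seconds on 2 processors, gives efficiency 1.0; a less ideal run of 40 seconds reduced to 15 seconds on 4 processors gives about 0.67.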
 
37 Parallel Processing Evaluation .18
Figure 9: Optimum efficiency vs. actual efficiency
38 Parallel Processing Evaluation .19
Figure 10: Optimum number of processors in actual speedup
39 Parallel Processing Evaluation .20
- Problems in Parallel Processing
  - "Parallel processing is like a dog's walking on its hind legs. It is not done well, but you are surprised to find it done at all."
  - ---- Steve Fiddes (University of Bristol)
 -  
 
40 Parallel Processing Evaluation .21
- Problems in Parallel Processing
  - Its software is heavily platform-dependent and has to be written for a specific machine.
  - It also requires a different, more difficult method of programming, since the software needs to divide the work appropriately, through algorithms, across the processors.
41 Parallel Processing Evaluation .22
- Problems in Parallel Processing
  - There isn't a wide array of shrink-wrapped software ready for use with parallel machines.
  - Parallelization is problem-dependent and cannot be automated.
  - Speedup is not guaranteed.
 -  
 
42 Parallel Processing Evaluation .23
- Solution 1
  - Decide which architecture is most appropriate for a given application.
  - The characteristics of the application should drive the decision as to how it should be parallelized; the form of the parallelization should then determine what kind of underlying system, both hardware and software, is best suited to running the parallelized application.
 
43 Parallel Processing Evaluation .24
44 Major Applications of Parallel Processing
- Clustering
  - Clustering is a form of parallel processing that takes a group of workstations connected together in a local-area network and applies middleware to make them act like a parallel machine.
45 Major Applications of Parallel Processing .2
46 Major Applications of Parallel Processing .3
- Clustering
  - Parallel processing using Linux clusters can yield supercomputer performance for some programs that perform complex computations or operate on large data sets, and it can accomplish this using cheap hardware.
  - Clustering can be used at night when networks are idle, making it an inexpensive alternative to parallel-processing machines.
47 Major Applications of Parallel Processing .4
- Clustering can work with two separate but similar implementations:
  - A Parallel Virtual Machine (PVM) is an environment that allows messages to pass between computers as they would in an actual parallel machine.
  - A Message-Passing Interface (MPI) allows programmers to create message-passing parallel applications, using parallel input/output functions and dynamic process management.
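The message-passing idea behind PVM and MPI can be sketched with Python's standard library rather than a real MPI installation. This is a hedged emulation: the worker function and the squaring task are invented for illustration, and multiprocessing.Pipe stands in for MPI's point-to-point channels.

```python
# Emulating MPI-style send/receive between processes with multiprocessing.Pipe.
from multiprocessing import Process, Pipe

def worker(conn):
    task = conn.recv()       # receive a message (analogous to MPI_Recv)
    conn.send(task * task)   # send the result back (analogous to MPI_Send)
    conn.close()

def run_one_task(value):
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(value)   # hand the task to the worker process
    result = parent_end.recv()
    p.join()
    return result
```

The controlling process plays the role the Definition slide describes: it distributes work to another processor as a message and collects the result.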
49 The End