Title: Parallel Virtual Machine
1. Parallel Virtual Machine
- Presenter: Anthony Vacca
- Tuesday, November 18, 2003
2. PVM Introduction
- But can it be improved?
3. PVM Introduction: Now that I want it, teach me!
- You will come out with an idea of:
- When parallelism is good to use.
- Some alternatives, and technologies.
- A conceptual PVM environment.
- How PVM works.
- Some examples.
4. PVM Introduction: PVM History
- The PVM project began in the summer of 1989.
- Birthplace: Oak Ridge National Labs (v1.0).
- v2.0 created by the University of Tennessee.
- Released March 1991.
- v3.0 is the current standard.
- Released in February 1993.
- Continually upgraded and maintained.
5. PVM Introduction: PVM Definition
- PVM uses the message-passing model to allow
programmers to distribute tasks across a wide
variety of computer types. A key concept in PVM
is that it makes a collection of computers appear
as one large virtual machine, hence its name.
6. PVM Introduction: Parallel Computing Analysis
- Some good questions to ask before you program are:
- How fast do we need to compute?
- How fast can we communicate?
- How many PCs do we need for efficiency?
- Do we even need to use parallelism?
- Can everything be done in parallel?
- How the heck can we do this?
7. PVM Introduction: The Practical Uses
- Financial institutions, e.g. RBC, TD, CIBC
- Rewards for publishing quarterly profits.
- Cryptography
- Algorithms to crack codes quickly.
- Parallel programs
- Any large computation that can be executed concurrently.
- Solving the Grand Challenge Problems!
- Become really rich!
8. PVM Introduction: Why Develop the Technology?
- "Due to limitations such as speed and memory in computing systems available, many simulations could not be completed with sufficient accuracy and timeliness to be of interest."
- From http://www.nhse.org/grand_challenge.html
- For a list of GRAND CHALLENGE PROBLEMS please visit:
- http://ceee.rice.edu/Books/CS/chapter1/intro52.html
9. PVM Introduction: What Is It?
- Collection of computers.
- Heterogeneous computer mix
- Appears as one virtual machine.
- Transparently handles
- Message routing
- Data conversion
- Task Scheduling
10. PVM Introduction: Why Develop the Technology?
- Many small tasks solving one problem.
- On a single machine, computing power becomes restricted.
- As problems grow, more computational power is needed.
- Massively Parallel Processors (MPPs) are costly.
- Costs in the range of 10 million dollars.
- PVM is cheaper.
- And reuses existing hardware.
11. Comparable Technologies
- MPP (Massively Parallel Processors)
- Every processor is exactly like every other in:
- Capability
- Resources
- Software
- Communication speed
12. Comparable Technologies (cont.)
- With PVM (Parallel Virtual Machine):
- A heterogeneous mix of PCs, varying in:
- Vendors.
- Data formats (always limitations).
- Computational speeds.
- Machine loads.
- Network loads.
- Compilers.
13. Comparable Technologies (cont.)
- Positives of PVM vs. MPP
- Computers available on a network may be made by different vendors or have different architectures.
- The programmer can exploit a collection of networked computers.
- The PVM may itself be composed of parallel computers.
- By using existing hardware, the cost of this computing can be very low.
14. Comparable Technologies (cont.)
- Negatives of PVM vs. MPP
- Data formats on different computers are often incompatible.
- Message-passing packages developed for heterogeneous environments must make sure all the computers understand the exchanged data.
- The time it takes to send a message over the network can vary depending on the network load imposed by all the other network users.
15. Comparable Technologies (cont.): Distributed Computing
- Distributed computing:
- Network connected.
- Organizations can use high-speed LANs.
- Examples:
- MPI, whose base is a communication interface layer.
- A standard message-passing library of routines.
- E.g. PVM can be ported to an MPI system.
16. Comparable Technologies (cont.): Paradigms of Communication
- Shared memory approach (others)
- Easier to program than message passing.
- Easier to port programs.
- Compilers use an auto-parallelizing option to generate code that splits processing.
- Negatives:
- Slow for small executions.
- Finite number of processors.
17. Comparable Technologies (cont.): Paradigms of Communication
- Message passing (PVM)
- Tasks use their own memory during computation.
- Tasks reside on the same physical machine or across n machines.
- Tasks exchange data by sending and receiving messages.
- Data transfer usually requires cooperative operations to be performed by each process.
- Comprises a library of subroutines that are embedded in source code.
18. Building the Conceptual Model: What Is in This Environment?
- User-configured host pool
- Translucent access to hardware
- Process-based computation
- Multiprocessor support
- Heterogeneity support
- Explicit message-passing
19. Building the Conceptual Model: User-Configured Host Pool
- Computational tasks are selected.
- Single or multiprocessors (including shared-memory and distributed-memory computers).
- The host pool may be altered:
- Adding machines, deleting machines.
- Good for fault tolerance.
20. Building the Conceptual Model: Translucent Access to Hardware
- Application programs view the environment as:
- A collection of virtual processing elements.
- They can also exploit capabilities of specific machines in the host pool.
21. Building the Conceptual Model: Process-Based Computation
- Parallelism is achieved through the use of tasks.
- Independent sequential thread of control.
- Alternates between communication and computation.
- No process-to-processor mapping is implied.
22. Building the Conceptual Model: Multiprocessor Support
- Does it interact with the OS?
- Some vendors may supply their own support for PVM on their systems (e.g. Red Hat).
- PVM uses native message-passing facilities to take advantage of the underlying hardware.
- Imagine a market where older hardware can still be useful.
23. Building the Conceptual Model: Heterogeneity Support
- Able to communicate across different:
- Machines.
- Networks.
- E.g.:
- Aces motherboards (different models).
- Heterogeneous system and/or architecture types.
24. Building the Conceptual Model: Explicit Message-Passing
- Collections of computational tasks.
- Each processor performs a part of an application's workload.
- Tasks cooperate by explicitly sending and receiving messages.
- Message size is limited only by the amount of available memory.
- Messages can contain more than one data type.
25. Building the Conceptual Model: Summary
- An application consists of several tasks.
- Each task is responsible for a part of the application's computational workload.
- E.g. input, problem setup, solution, output, and display.
26. Using the PVM Environment
- Before we run programs we must take a look at:
- Communication paradigms.
- The pvmd3/pvmd daemon.
- The PVM library.
- Message passing.
- Task identifiers.
- Programming languages.
- Then see it in action.
27. Library of PVM Interface Routines
- Contains the functions needed for cooperation between tasks:
- Message passing.
- Spawning processes.
- Coordinating tasks.
- Modifying the virtual machine.
28. pvmd Daemon
- Resides on all the computers making up the virtual machine.
- Like a mail program that runs in the background.
- Creates the virtual machine by starting up PVM.
- Started from a Unix prompt on any of the hosts.
- Multiple users can configure overlapping virtual machines.
- A user can execute several PVM applications simultaneously.
29. PVM Message Passing
- Three-step process for sending a message in C:
- 1. The send buffer is initialized:
- pvm_initsend() or
- pvm_mkbuf().
- 2. The message is packed into the send buffer:
- In C, the pvm_pk*() routines.
- 3. The message is sent:
- Single: the pvm_send() routine.
- Multicast: the pvm_mcast() routine.
30. Task Identifiers
- All PVM tasks are identified by a task identifier (TID).
- Messages are sent to and received from TIDs.
- TIDs must be unique in the environment.
- They are supplied by the local pvmd.
31. Supported Languages
- The main languages are:
- C
- C++
- Fortran
32. Now, let's see this environment
- The environment's name is the Matrix.
- Take the blue pill and you can leave; take the red pill and go on.
- CPU 1: Morpheus
- P2 350 MHz, 256 MB RAM, runs Red Hat.
- CPU 2: Neo
- P2 333 MHz, 500 MB RAM, runs Red Hat.
- CPU 3: Trinity
- P2 350 MHz, 256 MB RAM, runs Red Hat.
33. Example of Master PVM Program: hello.c

    #include "pvm3.h"                     /* includes the PVM library */

    main()
    {
        int cc, tid, msgtag;
        char buf[100];

        printf("i'm t%x\n", pvm_mytid()); /* prints current TID */

        /* initiates a copy of program hello_other */
        cc = pvm_spawn("hello_other", (char**)0, 0, "", 1, &tid);

        if (cc == 1) {                    /* if spawn was successful */
            msgtag = 1;
            pvm_recv(tid, msgtag);        /* blocks until msg received */
            pvm_upkstr(buf);              /* unpacks the message */
            printf("from t%x: %s\n", tid, buf);
        }

        pvm_exit();                       /* disconnects program from the system */
    }
34. Example of Slave PVM Program: hello_other.c

    #include "pvm3.h"
    #include <string.h>

    main()
    {
        int ptid, msgtag;
        char buf[100];

        ptid = pvm_parent();                /* obtain task id of the master */

        strcpy(buf, "hello, world from ");  /* puts msg in buf */
        gethostname(buf + strlen(buf), 64); /* appends this host's name */
        msgtag = 1;
        pvm_initsend(PvmDataDefault);       /* initialize the send buffer */
        pvm_pkstr(buf);                     /* place the msg in the send buffer */
        pvm_send(ptid, msgtag);             /* sends the message */

        pvm_exit();                         /* disconnects program from the system */
    }
35. Another Example: Master/Slave
- One task is the master.
- The master has n slaves.
- It may include itself as a slave.
- The master passes out information.
- Each slave does its computation.
- Each slave passes information back.
36. Questions or Comments?
37. Thank You
- For any further questions or concerns, please email me at:
- av00ae_at_cosc.brocku.ca