Using SCTP to hide latency in MPI programs

Transcript and Presenter's Notes


1
Using SCTP to hide latency in MPI programs
  • Brad Penoff, H. Kamal, M. Tsai, E. Vong, A.
    Wagner
  • Department of Computer Science
  • University of British Columbia
  • Vancouver, Canada

Distributed Systems Group
April 25, 2006
2
Overview
  • Motivation
  • SCTP
  • Processor Farm Implementation
  • Examples

3
Motivation
  • Extend the operability of MPI message-passing
    applications to WANs
  • High latency (milliseconds)
  • Congestion and loss
  • Standard IP transport mechanisms
  • Heterogeneous environment
  • Why?
  • Suitable for some compute-intensive applications
  • Inter-cluster interoperability
  • Distributed resources

4
What is SCTP?
  • Stream Control Transmission Protocol
  • IETF standardized IP transport protocol
  • Message oriented like UDP
  • Reliable, in-order delivery like TCP, but with
    multiple streams (see the socket sketch after this
    list)
  • Available on most major operating systems
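
To make multi-streaming concrete, here is a minimal sketch (not from the
talk) of a one-to-many SCTP socket on Linux with lksctp-tools: it requests
several streams at association setup and sends one message on stream 2.
The address, port, and stream count are illustrative.

/* Minimal sketch (assumed setup, not the authors' code): open a
 * one-to-many SCTP socket, request several streams, send on stream 2. */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/sctp.h>   /* lksctp-tools; link with -lsctp */
#include <arpa/inet.h>

int main(void)
{
    int sd = socket(AF_INET, SOCK_SEQPACKET, IPPROTO_SCTP);

    /* Ask for up to 8 streams in each direction at association setup. */
    struct sctp_initmsg init;
    memset(&init, 0, sizeof(init));
    init.sinit_num_ostreams = 8;
    init.sinit_max_instreams = 8;
    setsockopt(sd, IPPROTO_SCTP, SCTP_INITMSG, &init, sizeof(init));

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5000);                      /* example port */
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);  /* example address */

    const char msg[] = "hello on stream 2";
    /* Messages on different stream numbers do not block one another. */
    if (sctp_sendmsg(sd, msg, sizeof(msg), (struct sockaddr *)&peer,
                     sizeof(peer), 0 /*ppid*/, 0 /*flags*/,
                     2 /*stream_no*/, 0 /*ttl*/, 0 /*context*/) < 0)
        perror("sctp_sendmsg");
    return 0;
}

Messages sent on different stream numbers of the same association are
delivered independently, which is the property the rest of the talk builds
on.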

5
Why SCTP?
  • Added resilience
  • Multi-streaming
  • Improved congestion control (e.g. built-in SACK)
  • Multi-homing
  • Added security
  • Message oriented

6
SCTP-based MPI for WANs
  • Close match to MPI
  • Mapping tags to streams avoids head-of-line
    blocking (a toy mapping is sketched after this list)
  • Automatically leverage other SCTP features
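
The slide does not show how the middleware actually assigns tags to
streams, so the following one-liner is only a plausible sketch of such a
mapping: any deterministic function of the tag that spreads independent
tags across the association's streams would do.

#include <stdint.h>

/* Hypothetical tag-to-stream mapping (the real middleware's rule is not
 * given on the slide).  Messages with the same tag always travel on the
 * same stream; different tags mostly land on different streams, so a
 * lost segment for one tag does not hold up the others. */
static inline uint16_t tag_to_stream(int tag, uint16_t num_streams)
{
    return (uint16_t)((unsigned)tag % num_streams);
}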

7
Using SCTP for MPI applications
  • Automatic
  • SCTP helps to reduce the effect of segment loss
  • Need to change applications
  • Use of tags to identify independent message
    streams
  • Overlap computation and communication
    (non-blocking communication; see the sketch after
    this list)
  • Avoid head-of-line blocking
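
As a hedged illustration of those two changes (this is not the authors'
code), the sketch below pre-posts two receives with distinct tags and
completes whichever arrives first, leaving room to compute in between;
under an SCTP-based MPI that maps tags to streams, loss on one tag's
stream need not delay the other.

/* Sketch: overlap computation with two independent, tagged receives.
 * Distinct tags carry logically independent message streams.
 * Run with at least 2 ranks. */
#include <mpi.h>
#include <stdio.h>

#define TAG_A 1
#define TAG_B 2

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double bufA[1024], bufB[1024];
    if (rank == 0) {
        MPI_Request reqs[2];
        /* Post both receives up front; they can complete in any order. */
        MPI_Irecv(bufA, 1024, MPI_DOUBLE, 1, TAG_A, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(bufB, 1024, MPI_DOUBLE, 1, TAG_B, MPI_COMM_WORLD, &reqs[1]);

        int done = 0;
        while (done < 2) {
            int idx;
            /* do_useful_work();  placeholder for overlapped computation */
            MPI_Waitany(2, reqs, &idx, MPI_STATUS_IGNORE);
            printf("message with tag %d arrived\n", idx == 0 ? TAG_A : TAG_B);
            done++;
        }
    } else if (rank == 1) {
        for (int i = 0; i < 1024; i++) { bufA[i] = i; bufB[i] = -i; }
        MPI_Send(bufA, 1024, MPI_DOUBLE, 0, TAG_A, MPI_COMM_WORLD);
        MPI_Send(bufB, 1024, MPI_DOUBLE, 0, TAG_B, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}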

8
MPI Applications for WAN?
  • Parallel task farms (pfarms)
  • Common strategy for a large number of independent
    tasks
  • Typical properties
  • Request-driven: tasks are processed at the rate of
    each worker
  • Dynamically load-balanced
  • Dynamic processes
  • Centralized or decentralized, if necessary
  • Provided as a Template

(Figure: pfarm structure. Workers send requests to the Manager; the
Manager creates tasks; Workers do tasks and return results; the Manager
processes the results.)
9
Developed Pfarm Template
  • Small API
  • createTask, doTask, processResult
  • Provided tunable parameters
  • Number of outstanding requests
  • Number of available buffers/worker
  • Number of tasks/request
  • Managed MPI non-blocking communication (a
    worker-loop sketch follows)
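
The slide lists the template's callbacks and tunables but not their
signatures, so the worker-side sketch below is an assumption about how
such a template could keep several requests outstanding; NUM_OUTSTANDING,
the tags, and the doTask signature are hypothetical.

/* Hypothetical worker loop: keep NUM_OUTSTANDING task requests in
 * flight so new tasks arrive while the current one is being computed.
 * Tags, buffer sizes and the doTask signature are illustrative only. */
#include <mpi.h>
#include <string.h>

#define NUM_OUTSTANDING 4       /* tunable: outstanding requests */
#define TASK_BYTES      4096    /* tunable: buffer size per task */
#define REQ_TAG         10
#define RESULT_TAG      11
#define STOP_TAG        12

/* Stand-in for the template's user-supplied doTask callback
 * (assumed to fill `result` and return its size in bytes). */
static int doTask(const char *task, char *result)
{
    memcpy(result, task, TASK_BYTES);
    return TASK_BYTES;
}

void worker(int manager)
{
    char task[NUM_OUTSTANDING][TASK_BYTES];
    char result[TASK_BYTES];
    MPI_Request recv_req[NUM_OUTSTANDING];

    /* Prime the pipeline: issue several requests and pre-post receives. */
    for (int i = 0; i < NUM_OUTSTANDING; i++) {
        MPI_Send(NULL, 0, MPI_BYTE, manager, REQ_TAG, MPI_COMM_WORLD);
        MPI_Irecv(task[i], TASK_BYTES, MPI_BYTE, manager, MPI_ANY_TAG,
                  MPI_COMM_WORLD, &recv_req[i]);
    }

    for (;;) {
        int idx;
        MPI_Status st;
        MPI_Waitany(NUM_OUTSTANDING, recv_req, &idx, &st); /* next task */
        if (st.MPI_TAG == STOP_TAG)
            break;                 /* (cancel remaining receives here) */

        int rbytes = doTask(task[idx], result);            /* compute */

        /* Return the result, ask for more work, and re-post the receive;
         * the other pre-posted receives hide the round trip. */
        MPI_Send(result, rbytes, MPI_BYTE, manager, RESULT_TAG,
                 MPI_COMM_WORLD);
        MPI_Send(NULL, 0, MPI_BYTE, manager, REQ_TAG, MPI_COMM_WORLD);
        MPI_Irecv(task[idx], TASK_BYTES, MPI_BYTE, manager, MPI_ANY_TAG,
                  MPI_COMM_WORLD, &recv_req[idx]);
    }
}

Because several receives are pre-posted, a new task is usually already
buffered locally when doTask finishes, which is how the round trip to the
manager is hidden.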

10
Ideal case
11
Unable to hide latency
12
Task buffering
buffer requests to hide latency
Varying task times and varying network times (RTT)
13
Task buffering
buffer requests to hide latency
Varying task times and varying network times (RTT)
14
Program template
15
Examples
  • Robust correlation matrix computation
  • Large regular matrix computation (our own
    program)
  • Directly linked to template
  • mpiBLAST
  • Parallel version of popular bioinformatics tool
    (existing program)
  • Integrated template into program

16
mpiBLAST
(Figure: mpiBLAST results at 0 ms, 20 ms, 40 ms, and 80 ms latency.)
17
Conclusions. What did we discover?
  • Some latency hiding techniques increase
    performance regardless of transport
  • SCTP-based MPI handles latency/loss better than
    TCP in real applications
  • Requires application changes to see full benefits
  • Non-blocking
  • Multiple tags to utilize streams
  • Head-of-line blocking in real applications

18
Thank you!
  • More information about our work is at
  • http://www.cs.ubc.ca/labs/dsg/mpi-sctp/

Or Google sctp mpi
19
Upcoming annual SCTP Interop
  • July 30 to Aug 4, 2006, to be held at UBC
  • Vendors and implementers test their stacks
  • Performance
  • Interoperability