1
Infiniband
  • Bart Taylor

2
What it is
"InfiniBand Architecture defines a new
interconnect technology for servers that changes
the way data centers will be built, deployed, and
managed. By creating a centralized I/O fabric,
InfiniBand Architecture enables greater server
performance and design density while creating
data center solutions that offer greater
reliability and performance scalability.
InfiniBand technology is based upon a
channel-based, switched-fabric, point-to-point
architecture." -- www.infinibandta.org
3
History
  • InfiniBand is the result of a merger of two
    competing designs for an inexpensive high-speed
    network.
  • Future I/O merged with Next Generation I/O to
    form what we now know as InfiniBand.
  • Future I/O was being developed by Compaq, IBM,
    and HP.
  • Next Generation I/O was being developed by Intel,
    Microsoft, and Sun Microsystems.
  • The InfiniBand Trade Association maintains the
    specification.

4
The Basic Idea
  • High-speed, low-latency data transport
  • Bidirectional serial bus
  • Switched fabric topology
  • Multiple devices can communicate at once
  • Data is transferred in packets that together form
    messages
  • Messages are remote direct memory access (RDMA)
    operations, channel send/receive operations, or
    multicasts (see the verbs sketch after this list)
  • Host Channel Adapters (HCAs) are deployed on PCI
    cards
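
A minimal sketch of what these message types look like in software,
assuming the OpenIB verbs API (libibverbs); the buffer size, the
reliable-connection queue pair, and the zeroed remote_addr/rkey are
illustrative placeholders, and the queue-pair connection handshake (which
a real program performs out of band) is omitted:

/* Allocate verbs resources and show how an RDMA write would be posted.
 * Sketch only: the QP is never connected, so ibv_post_send is not called. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no HCA found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* open the HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    char *buf = calloc(1, 4096);                          /* message buffer */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,         /* register memory */
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    struct ibv_qp_init_attr qpa = {
        .send_cq = cq, .recv_cq = cq,
        .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RC,        /* reliable connection, as RDMA needs */
    };
    struct ibv_qp *qp = ibv_create_qp(pd, &qpa);
    printf("created QP %u on %s\n", qp->qp_num,
           ibv_get_device_name(devs[0]));

    /* Once the QP is connected, an RDMA write that bypasses the remote
     * CPU and kernel would be posted like this; remote_addr and rkey are
     * placeholders a real program learns from its peer out of band. */
    struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = 4096,
                           .lkey = mr->lkey };
    struct ibv_send_wr wr = {
        .opcode = IBV_WR_RDMA_WRITE,
        .sg_list = &sge, .num_sge = 1,
        .send_flags = IBV_SEND_SIGNALED,
        .wr.rdma = { .remote_addr = 0, .rkey = 0 },  /* placeholders */
    };
    struct ibv_send_wr *bad;
    (void)wr; (void)bad;  /* ibv_post_send(qp, &wr, &bad) once connected */

    ibv_destroy_qp(qp); ibv_dereg_mr(mr); free(buf);
    ibv_destroy_cq(cq); ibv_dealloc_pd(pd); ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}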

5
Main Features
  • Low latency: messaging in under 6 microseconds
  • Highly scalable: tens of thousands of nodes
  • Bandwidth: 3 levels of link performance (the
    worked rates follow this list)
  • 2.5 Gbps (1X)
  • 10 Gbps (4X)
  • 30 Gbps (12X)
  • Allows multiple fabrics on a single cable
  • Up to 8 virtual lanes per link
  • No interdependency between different traffic flows
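
The three rates correspond to 1X, 4X, and 12X link widths at 2.5 Gbps
per lane, and because InfiniBand links use 8b/10b encoding, the usable
data rate is 80% of the signaling rate. For a 4X link:

\[
4 \times 2.5\,\text{Gbps} = 10\,\text{Gbps signaling},
\qquad
10\,\text{Gbps} \times \tfrac{8}{10} = 8\,\text{Gbps of data}
\]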

6
Physical Devices
  • Standard copper cabling: max distance of 17
    meters
  • Fiber-optic cabling: max distance of 10
    kilometers
  • Host Channel Adapters on PCI cards: PCI, PCI-X,
    or PCI-Express
  • InfiniBand switches: 10 Gbps non-blocking per
    port, easily cascadable

7
Host Channel Adapters
  • Standard PCI: 133 MBps (PCI 2.2: 533 MBps)
  • PCI-X: 1066 MBps (PCI-X 2.0: 2133 MBps)
  • PCI-Express: x1 at 5 Gbps, x4 at 20 Gbps, x8 at
    40 Gbps, x16 at 80 Gbps (see the bandwidth check
    after this list)
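
A quick check shows why the older buses are the bottleneck. A 4X
InfiniBand link carries 10 Gbps in each direction:

\[
\frac{10\,\text{Gbps}}{8\,\text{bits/byte}} = 1250\,\text{MBps}
> 1066\,\text{MBps (PCI-X)} \gg 133\,\text{MBps (PCI)}
\]

so sustaining a 4X link requires PCI-X 2.0 or a PCI-Express slot of
x4 or wider.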

8
DAFS
  • Direct Access File System
  • A protocol for file storage and access
  • Data is transferred as logical files, not
    physical storage blocks
  • Data moves directly from storage to client,
    bypassing the CPU and kernel
  • Provides RDMA functionality
  • Built on the Virtual Interface (VI) architecture,
    developed by Microsoft, Intel, and Compaq in 1996

9
RDMA (diagram slide)
10
TCP/IP Packet Overhead (diagram slide)
11
Latency Comparison
  • Standard Ethernet, TCP/IP driver: 80 to 100
    microseconds latency
  • Standard Ethernet, Dell NIC with MPICH over
    TCP/IP: 65 microseconds latency
  • InfiniBand 4X with MPI driver: 6 microseconds
    (see the benchmark sketch after this list)
  • Myrinet: 6 microseconds
  • Quadrics: 3 microseconds
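
One-way latencies like these are commonly measured with an MPI
ping-pong microbenchmark between two ranks, where half the average
round-trip time approximates the one-way latency. The sketch below is
an illustrative benchmark against the standard MPI API, not the exact
program behind the numbers above:

/* MPI ping-pong latency sketch: run with 2 ranks, e.g. mpirun -np 2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    enum { ITERS = 1000, MSG_BYTES = 4 };   /* small message, many trips */
    char buf[MSG_BYTES] = {0};
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);            /* start both ranks together */
    double t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {                    /* ping, then wait for pong */
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {             /* echo every message back */
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                          /* half RTT = one-way latency */
        printf("one-way latency: %.2f us\n",
               (t1 - t0) / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}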

12
Latency Comparison (chart slide)
13
References
  • InfiniBand Trade Association - www.infinibandta.org
  • OpenIB Alliance - www.openib.org
  • TopSpin - www.topspin.com
  • Wikipedia - www.wikipedia.org
  • O'Reilly - www.oreillynet.com
  • SourceForge - infiniband.sourceforge.net
  • Performance Comparison of MPI Implementations
    over InfiniBand, Myrinet and Quadrics. Computer
    and Information Science, Ohio State University.
    nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf