Title: InfiniBand
1. InfiniBand
2. What it is
InfiniBand Architecture defines a new
interconnect technology for servers that changes
the way data centers will be built, deployed and
managed. By creating a centralized I/O fabric,
InfiniBand Architecture enables greater server
performance and design density while creating
data center solutions that offer greater
reliability and performance scalability.
InfiniBand technology is based upon a
channel-based switched fabric point-to-point
architecture. --www.infinibandta.org
3. History
- InfiniBand is the result of a merger of two competing designs for an inexpensive high-speed network
- Future I/O combined with Next Generation I/O to form what we know as InfiniBand
- Future I/O was being developed by Compaq, IBM, and HP
- Next Generation I/O was being developed by Intel, Microsoft, and Sun Microsystems
- The InfiniBand Trade Association maintains the specification
4. The Basic Idea
- High speed, low latency data transport
- Bidirectional serial links
- Switched fabric topology
- Several devices can communicate at once
- Data is transferred in packets that together form messages
- Messages are remote direct memory access (RDMA), channel send/receive, or multicast operations
- Host Channel Adapters (HCAs) are deployed on PCI cards (see the sketch after this list)
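To make the send/receive and RDMA message types listed above concrete, here is a minimal sketch of the objects an application creates on an HCA with the OpenIB verbs library (libibverbs): a device context, a protection domain, a registered buffer, a completion queue, and a queue pair. The buffer size and queue depths are arbitrary choices for illustration, and connecting the queue pair to a remote peer is omitted.

/* Minimal sketch: the verbs objects an application creates to talk to an HCA.
 * Assumes libibverbs is installed; queue-pair connection setup (exchanging
 * addresses and moving the QP to RTR/RTS) is omitted.
 * Build (assumption): gcc verbs_setup.c -o verbs_setup -libverbs */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) { fprintf(stderr, "no HCA found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* open the HCA */
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    /* Register a buffer so the HCA can DMA directly into and out of it. */
    char *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);

    /* Completion queue: finished sends and receives are reported here. */
    struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);

    /* Queue pair: a send queue and a receive queue, reliable connected type. */
    struct ibv_qp_init_attr attr;
    memset(&attr, 0, sizeof(attr));
    attr.send_cq = cq;
    attr.recv_cq = cq;
    attr.qp_type = IBV_QPT_RC;
    attr.cap.max_send_wr = attr.cap.max_recv_wr = 16;
    attr.cap.max_send_sge = attr.cap.max_recv_sge = 1;
    struct ibv_qp *qp = ibv_create_qp(pd, &attr);

    printf("HCA %s ready: lkey=0x%x qp=0x%x\n",
           ibv_get_device_name(devs[0]), mr->lkey, qp->qp_num);

    ibv_destroy_qp(qp);
    ibv_destroy_cq(cq);
    ibv_dereg_mr(mr);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}

Once the queue pair is connected, the application posts work requests to it and polls the completion queue; the HCA moves the data without further kernel involvement on the data path.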
5. Main Features
- Low latency messaging: < 6 microseconds
- Highly scalable: tens of thousands of nodes
- Bandwidth: 3 levels of link performance (see the worked example after this list)
  - 1X: 2.5 Gbps
  - 4X: 10 Gbps
  - 12X: 30 Gbps
- Allows multiple fabrics on a single cable
  - Up to 8 virtual lanes per link
  - No interdependency between different traffic flows
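A quick worked example of what those link rates deliver in practice, assuming the first-generation signalling of 2.5 Gbps per lane and 8b/10b encoding (so 80% of the raw rate is user data):

/* Data rates for the 1X, 4X, and 12X link widths listed above, assuming
 * 2.5 Gbps signalling per lane and 8b/10b encoding (80% efficiency). */
#include <stdio.h>

int main(void)
{
    const double lane_gbps = 2.5;         /* raw signalling rate per lane */
    const double encoding  = 8.0 / 10.0;  /* 8b/10b line-code efficiency */
    const int widths[] = { 1, 4, 12 };

    for (int i = 0; i < 3; i++) {
        double raw  = widths[i] * lane_gbps;
        double data = raw * encoding;
        printf("%2dX link: %4.1f Gbps raw, %4.1f Gbps data (~%.2f GB/s)\n",
               widths[i], raw, data, data / 8.0);
    }
    return 0;
}

This is why the 10 Gbps (4X) link is usually described as carrying roughly 1 GB/s of payload in each direction.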
6. Physical Devices
- Standard copper cabling
  - Max distance of 17 meters
- Fiber-optic cabling
  - Max distance of 10 kilometers
- Host Channel Adapters on PCI cards
  - PCI, PCI-X, PCI-Express
- InfiniBand switches
  - 10 Gbps non-blocking, per port
  - Easily cascadable
7. Host Channel Adapters (see the comparison after this list)
- Standard PCI
  - 133 MB/s
  - PCI 2.2: 533 MB/s
- PCI-X
  - 1066 MB/s
  - PCI-X 2.0: 2133 MB/s
- PCI-Express
  - x1: 5 Gbps
  - x4: 20 Gbps
  - x8: 40 Gbps
  - x16: 80 Gbps
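The host bus is often the real limit on an HCA. A rough comparison using the peak rates listed above, and assuming (per the earlier calculation) that one direction of a 4X link can deliver about 1000 MB/s of data:

/* Rough check of how much of one direction of a 4X link each host bus
 * listed above can carry at its peak rate. Assumes ~1000 MB/s of payload
 * per direction on a 4X link; real buses also carry protocol overhead. */
#include <stdio.h>

int main(void)
{
    const double link_mbps = 1000.0;  /* assumed 4X payload, one direction */
    const struct { const char *bus; double peak_mbps; } buses[] = {
        { "Standard PCI", 133.0 },
        { "PCI 2.2",      533.0 },
        { "PCI-X",       1066.0 },
        { "PCI-X 2.0",   2133.0 },
    };

    for (int i = 0; i < 4; i++)
        printf("%-12s %6.0f MB/s -> %3.0f%% of one 4X direction\n",
               buses[i].bus, buses[i].peak_mbps,
               100.0 * buses[i].peak_mbps / link_mbps);
    return 0;
}

Plain PCI covers only a small fraction of a 4X link, and PCI-X sits right at the limit as a shared, one-direction-at-a-time bus, which is why PCI-X 2.0 and PCI-Express HCAs matter for the faster link widths.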
8. DAFS
- Direct Access File System
- Protocol for file storage and access
- Data is transferred as logical files, not physical storage blocks
- Transferred directly from storage to client
  - Bypasses the CPU and kernel
- Provides RDMA functionality
- Uses the Virtual Interface (VI) architecture
  - Developed by Microsoft, Intel, and Compaq in 1996
9. RDMA
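RDMA lets the HCA read or write registered memory on a remote node without involving the remote CPU or kernel on the data path, which is the mechanism DAFS leans on above. A hedged sketch of posting a one-sided RDMA write with the verbs API; it assumes a queue pair that is already connected, a locally registered buffer, and a remote address and rkey obtained out of band (for example over a TCP socket), none of which is shown:

/* Sketch: posting a one-sided RDMA write with libibverbs. 'qp' must be a
 * connected reliable queue pair and 'mr' a registered local buffer; the
 * peer's remote_addr and rkey come from an out-of-band exchange. */
#include <string.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int rdma_write_example(struct ibv_qp *qp, struct ibv_mr *mr,
                       uint64_t remote_addr, uint32_t rkey, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)mr->addr,   /* local source buffer */
        .length = len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* one-sided operation */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;  /* ask for a completion entry */
    wr.wr.rdma.remote_addr = remote_addr;        /* where to write on the peer */
    wr.wr.rdma.rkey        = rkey;               /* peer's memory-region key */

    /* The HCA moves the payload directly between registered buffers;
     * neither side's CPU copies the data. */
    return ibv_post_send(qp, &wr, &bad_wr);
}

The remote side never posts a receive for this operation; it learns that the data arrived only through whatever higher-level protocol the application runs on top.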
10. TCP/IP Packet Overhead
11. Latency Comparison
- Standard Ethernet, TCP/IP driver
  - 80 to 100 microseconds latency
- Standard Ethernet, Dell NIC with MPICH over TCP/IP
  - 65 microseconds latency
- InfiniBand 4X with MPI driver
  - 6 microseconds latency
- Myrinet
  - 6 microseconds latency
- Quadrics
  - 3 microseconds latency
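Figures like these are conventionally obtained with a small-message ping-pong test between two processes, where half the averaged round-trip time is reported as the one-way latency. A minimal MPI sketch (the iteration count and build commands are illustrative assumptions):

/* Small-message ping-pong latency test between two MPI ranks.
 * Build and run (assumption): mpicc pingpong.c -o pingpong
 *                             mpirun -np 2 ./pingpong */
#include <stdio.h>
#include <mpi.h>

#define ITERS 1000

int main(int argc, char **argv)
{
    int rank;
    char byte = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    if (rank == 0)   /* one-way latency = half the average round trip */
        printf("one-way latency: %.2f microseconds\n",
               elapsed / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}

Running the same test over TCP/IP and over an InfiniBand-aware MPI driver produces comparisons of the kind shown above.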
12. Latency Comparison
13. References
- InfiniBand Trade Association - www.infinibandta.org
- OpenIB Alliance - www.openib.org
- TopSpin - www.topspin.com
- Wikipedia - www.wikipedia.org
- O'Reilly - www.oreillynet.com
- SourceForge - infiniband.sourceforge.net
- Performance Comparison of MPI Implementations over InfiniBand, Myrinet and Quadrics. Computer and Information Science, Ohio State University. nowlab.cis.ohio-state.edu/projects/mpi-iba/publication/sc03.pdf