Transcript and Presenter's Notes

Title: Performance Evaluation of 10-Gigabit Ethernet


1
Performance Evaluation of 10-Gigabit Ethernet
  • Preliminary test results
  • Nov 18 2005

2
Outline
  • Objectives
  • Hardware Specification
  • TCP tuning at application and kernel level
  • Performance results
  • Future work

3
Objectives
  • Our work focuses on finding an optimal
    hardware/software/kernel configuration that
    yields the maximum throughput in a simple
    testbed configuration.

4
Hardware Specification: Sun Fire V20z Server
  • Processors: 2 single-core AMD Opteron 252 (2.6 GHz)
  • L2 Cache per Processor: 1 MB
  • Memory: 4 GB (4 x 1-GB DIMMs)
  • Two 64-bit PCI-X slots: one full-length at 133 MHz,
    one half-length at 66 MHz
  • Operating System: Scientific Linux, kernel 2.4.21

5
Hardware Specification: Intel PRO/10GbE Server Adapter
  • Controller: Intel 82597EX 10GbE MAC, PCI-X at 133 MHz/64-bit
  • 16 KByte maximum packet size (jumbo frames)
  • Compliant with PCI-X 1.0a and PCI 2.3

6
Hardware Specification: Extreme Summit 400-48T
  • 48 ports 10/100/1000BASE-T
  • 2 ports 10 Gigabit stacking (stacking SW feature
    planned)
  • 9216 Byte maximum packet size (Jumbo Frame)
  • 160 Gbps switch fabric bandwidth

7
Testbed Configuration
  • [Diagram: two 10 Gb/s links through the switch, each with MTU 9216]
8
TCP Tuning: Kernel Level
  • /proc/sys/net/core/rmem_default = 4194303
  • /proc/sys/net/core/rmem_max = 16777215
  • /proc/sys/net/core/wmem_default = 4194303
  • /proc/sys/net/core/wmem_max = 16777215
  • /proc/sys/net/core/netdev_max_backlog = 100000
  • /proc/sys/net/core/optmem_max = 4194303
  • /proc/sys/net/ipv4/tcp_rmem (TCP read buffer
    min/default/max) = 1048576 16777216 33554432
  • /proc/sys/net/ipv4/tcp_wmem (TCP write buffer
    min/default/max) = 1048576 16777216 33554432
  • /proc/sys/net/ipv4/tcp_mem (TCP buffer space
    min/pressure/max) = 1048576 16777216 33554432
  • /proc/sys/net/ipv4/tcp_timestamps = 1
  • /proc/sys/net/ipv4/tcp_sack = 0
  • /proc/sys/net/ipv4/tcp_tw_recycle = 0
  • /proc/sys/net/ipv4/tcp_tw_reuse = 0
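These /proc values map one-to-one onto shell writes. A minimal sketch of applying them before a test run, assuming root privileges on the Scientific Linux hosts:

    # Core socket buffer limits and backlog (values from the list above)
    echo 4194303  > /proc/sys/net/core/rmem_default
    echo 16777215 > /proc/sys/net/core/rmem_max
    echo 4194303  > /proc/sys/net/core/wmem_default
    echo 16777215 > /proc/sys/net/core/wmem_max
    echo 100000   > /proc/sys/net/core/netdev_max_backlog
    echo 4194303  > /proc/sys/net/core/optmem_max
    # TCP min/default/max (rmem, wmem) and min/pressure/max (mem) triples
    echo "1048576 16777216 33554432" > /proc/sys/net/ipv4/tcp_rmem
    echo "1048576 16777216 33554432" > /proc/sys/net/ipv4/tcp_wmem
    echo "1048576 16777216 33554432" > /proc/sys/net/ipv4/tcp_mem
    # Protocol options: timestamps on, SACK and TIME-WAIT recycle/reuse off
    echo 1 > /proc/sys/net/ipv4/tcp_timestamps
    echo 0 > /proc/sys/net/ipv4/tcp_sack
    echo 0 > /proc/sys/net/ipv4/tcp_tw_recycle
    echo 0 > /proc/sys/net/ipv4/tcp_tw_reuse

The same keys can also be set with sysctl -w (e.g. net.core.rmem_max=16777215) or made persistent in /etc/sysctl.conf.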

9
Tuning of hardware configuration
  • mmrbc (Maximum Memory Read Byte Count).
    The mmrbc field is part of the PCI-X Command
    Register and sets the maximum byte count the
    PCI-X device may use when initiating a Sequence
    with one of the burst read commands (value
    range 512-4096 Byte).
  • txqueuelen (transmit queue length).
    The txqueuelen determines the maximum number of
    packets that can be buffered on the egress queue
    of a Linux net interface (utilized value 100000
    packets).
  • Max backlog. On the receiver it is similar in
    function to the txqueuelen variable on the
    sender (utilized value 100000 packets). A shell
    sketch for setting all three knobs follows.
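A hedged sketch of setting these three knobs from the shell. The PCI device selector 8086:1048 and the e6.b register offset are placeholders standing in for the 82597EX entry reported by lspci -vv on the actual host, and eth1 is an assumed interface name:

    # MMRBC: rewrite the PCI-X Command Register byte; 0x2e requests the
    # 4096-byte maximum read byte count (device ID and offset to be verified)
    setpci -d 8086:1048 e6.b=2e
    # Transmit queue length on the sending interface (counted in packets)
    ifconfig eth1 txqueuelen 100000
    # Receive-side backlog, the receiver's counterpart of txqueuelen
    echo 100000 > /proc/sys/net/core/netdev_max_backlog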

10
TCP Tuning: Application Level
  • The iperf tool was used to transmit TCP and UDP
    streams (example commands follow this list).
  • iperf_len (-l): the length of the buffers to
    read or write. The default is 8 KB for TCP and
    1470 bytes for UDP.
  • MTU (Maximum Transmission Unit): the largest
    packet an interface can transmit without the
    need to fragment (maximum possible value here
    9216 Byte).
  • Window size: always utilized at the maximum
    possible value, 32 MByte.
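A sketch of the corresponding commands, assuming iperf 2.x, an interface name of eth1, a placeholder receiver address, and an arbitrary 60-second run; the 9216-Byte MTU, 32 MByte window and -l value are the ones quoted above:

    # Jumbo frames on both hosts (9216 Byte is the switch maximum)
    ifconfig eth1 mtu 9216
    # Receiver: TCP with a 32 MByte window
    iperf -s -w 32M
    # Sender: one TCP flow, 32 MByte window, 20000-Byte application writes
    iperf -c <receiver_ip> -w 32M -l 20000 -t 60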

11
TCP Throughput as a function of iperf_len (write buffer) and mmrbc
  • Test: iperf, 1 TCP flow
  • -l < 12000 Byte → the bottleneck is the CPU
  • -l > 12000 Byte → the bottleneck is the switch MTU (9216 Byte)
  • mmrbc ↑, iperf_len ↑ → throughput ↑
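A measurement of this kind can be scripted by sweeping the write-buffer length, with the receiver already running as above; the particular -l values and the 30-second duration below are illustrative assumptions, not the exact grid used for the plot:

    # Sweep the iperf write length for one TCP flow
    for len in 4000 8000 12000 16000 20000; do
        iperf -c <receiver_ip> -w 32M -l $len -t 30
    done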

12
Number of software interrupts as a function of iperf_len and mmrbc
  • Test: iperf, 1 TCP flow
  • iperf_len ↓ → software interrupts at the receiver ↑

13
UDP Throughput as a function of iperf_len and mmrbc
  • Test: iperf, 1 UDP flow
  • -l < 12000 Byte → the bottleneck is the CPU
  • -l > 12000 Byte → the bottleneck is the switch MTU (9216 Byte)
  • mmrbc ↑, iperf_len ↑ → throughput ↑
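The UDP runs differ only in the -u flag and an offered-load cap; the 10000M (about 10 Gb/s) target rate and the duration are assumptions, and datagrams larger than the 9216-Byte MTU are fragmented by IP:

    # Receiver: UDP with a 32 MByte socket buffer
    iperf -s -u -w 32M
    # Sender: one UDP flow, 20000-Byte datagrams, load capped near 10 Gb/s
    iperf -c <receiver_ip> -u -w 32M -l 20000 -b 10000M -t 30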

14
MAX TCP/UDP Throughput as a function of mmrbc
  • iperf_len = 20000 Byte
  • The TCP throughput plot climbs smoothly from 3.4
    Gb/s to 6 Gb/s
  • The UDP throughput plot climbs smoothly from 3.7
    Gb/s to 6.2 Gb/s
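One way to generate such a curve is to step through the four MMRBC settings and rerun the fixed-length tests; the register byte values 0x22/0x26/0x2a/0x2e (selecting 512/1024/2048/4096 Byte) and the device selector are assumptions to check against lspci output on the real adapter:

    # Fixed iperf_len = 20000 Byte, MMRBC swept from 512 to 4096 Byte
    for val in 22 26 2a 2e; do
        setpci -d 8086:1048 e6.b=$val
        iperf -c <receiver_ip> -w 32M -l 20000 -t 30                # TCP run
        iperf -c <receiver_ip> -u -w 32M -l 20000 -b 10000M -t 30   # UDP run
    done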

15
CPU Activity
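The original slide contains only a CPU utilization chart. As a hedged sketch, per-run CPU load can be captured alongside iperf with a standard tool such as vmstat; the 1-second interval and file name are arbitrary choices:

    # Log CPU usage once per second while a test runs, then stop the logger
    vmstat 1 > cpu_activity.log &
    VMSTAT_PID=$!
    iperf -c <receiver_ip> -w 32M -l 20000 -t 60
    kill $VMSTAT_PID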
16
Future work
  • Change the testbed configuration
  • Test another model of 10-Gigabit Ethernet
    interface: the Chelsio N210 10GbE Server Adapter
  • Use a different switch: the Extreme Black Diamond