Transcript and Presenter's Notes

Title: TransPAC QBSS


1
TransPAC QBSS Scavenger Service
  • Chris Robb
  • Indiana University
  • TransPAC Network Engineer
  • chrobb@iu.edu

2
Topics
  • QBSS on TransPAC
  • Scavenger Availability
  • Juniper Implementation
  • Juniper Testing
  • Learnings
  • Future Directions
  • Questions and Comments

3
Scavenger on TransPAC
  • The need to implement QBSS on the TransPAC
    circuit came about after discussions with a group
    of researchers from Indiana University and the
    University of Tokyo.
  • A GRAPE-6 machine in Tokyo will be generating
    terabytes of data.
  • Indiana University is supplying an HPSS archive
    to store the data for analysis by other US
    researchers.
  • This data will need to be sent across the
    TransPAC OC-3 in a fashion that wouldn't cause
    any significant network impact.
  • Researchers agreed to tag their IP datagrams as
    QBSS so they wouldn't interrupt the other
    Tokyo-Chicago traffic.
  • This project is slated to begin transfers very
    soon.

4
Where is Scavenger Available?
  • TransPAC circuit from Tokyo to Chicago
  • We will focus on this link because it involves a
    Juniper implementation

5
Juniper Implementation
  • Both sides of the TransPAC link were serviced by
    Juniper routers, so a Juniper implementation of
    Scavenger Service needed to be created and
    tested.
  • Juniper proposed an implementation and the Global
    NOC volunteered to test its functionality.

6
Juniper Test Setup
  • Juniper donated an M5 for general Global NOC
    testing in June of 2001
  • In the test setup (diagram not reproduced in this
    transcript), traffic entered the M5 on a Gigabit
    Ethernet interface and left on an ATM OC-3
    interface, with a workstation on each side
  • Both workstations were Linux-based. JunOS 4.3 was
    loaded on the M5

7
Juniper Test Setup (cont.)
  • Juniper doesn't yet have DSCP support, so TOS
    precedence bits were used instead. The I2 QBSS
    implementation accounts for this, and Juniper has
    said that DSCP support is forthcoming.
  • A precedence map was created to place the tagged
    packets into their appropriate queues. This
    remained fairly constant throughout the testing:
    precedence-map QBSS-test
        bits 000 output-queue 0   <- best-effort traffic
        bits 001 output-queue 1   <- QBSS-tagged traffic
        bits 101 output-queue 0
        bits 010 output-queue 0
        bits 011 output-queue 2   <- premium traffic
        bits 100 output-queue 0
        bits 110 output-queue 3   <- network management traffic; Juniper advises
        bits 111 output-queue 3   <- not changing the queue assignment for these
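  • (Note, not on the original slide: the precedence bits
    are the top three bits of the IP TOS byte, so the map
    above corresponds to the following values on the wire)
        TOS 0x00 (precedence 000)  best effort       -> queue 0
        TOS 0x20 (precedence 001)  QBSS, i.e. DSCP 8 -> queue 1
        TOS 0x60 (precedence 011)  premium           -> queue 2
        TOS 0xC0 / 0xE0 (110/111)  network control   -> queue 3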

8
Juniper Test Setup (cont.)
  • After the traffic is placed in the queue, a WRR
    profile is created for the individual interfaces.
  • The following WRR profile was created to service
    the outgoing ATM interface
    weighted-round-robin
        output-queue 0 weight 98   <- best-effort traffic
        output-queue 1 weight 1    <- QBSS gets a minimum of 1% of the circuit
        output-queue 2 weight 0    <- premium service receives no servicing
        output-queue 3 weight 1    <- network management traffic must have some servicing
  • This configuration would change, depending on
    what we wanted to test. In the above
    configuration, we weren't testing premium service
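  • (Rough arithmetic, not on the original slide: under
    saturation a queue's minimum share is its weight
    divided by the sum of the weights, so the QBSS queue
    is guaranteed about 1/100 of the 155 Mbit/s OC-3,
    roughly 1.5 Mbit/s, while an unsaturated link lets
    QBSS expand to the full circuit.)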

9
Juniper Test Setup (cont.)
  • A patched version of iPerf that allowed for TOS
    tagging was used to generate multiple streams
    from the GigE NIC towards the ATM NIC.
  • Most of the tests involved running 200 Mbit/s UDP
    streams of one type and then introducing another
    stream of the other type (an example invocation is
    sketched below)
  • The results were logged by iPerf at one-second
    intervals, providing a picture of overall traffic
    behavior during each particular test.
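  • (Illustrative only, not from the slides: the patched
    iPerf's exact option for TOS tagging isn't shown, so
    this sketch uses the -S/--tos flag that stock iperf
    provides for setting the IP TOS byte; the host name
    and durations are made up.)
        # QBSS stream: 200 Mbit/s UDP, TOS byte 0x20 (precedence 001), 1-second reports
        iperf -c receiver -u -b 200M -S 0x20 -t 60 -i 1
        # competing best-effort stream, default TOS, started about 10 s into the test
        iperf -c receiver -u -b 200M -t 50 -i 1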

10
Test results
  • The overall results were positive!
  • The following graph (not reproduced in this
    transcript) shows the results of one particular
    test
  • At 0 s, a 200 Mbit/s QBSS stream was started. At
    10 s, a best-effort stream was introduced, pushing
    the QBSS stream down to about 1 percent of the
    available bandwidth. At 40 s, the BE stream was
    removed and the QBSS stream reclaimed the pipe.

11
Test Results (cont.)
  • Here's another graph (also not reproduced here)
    showing a 200 Mbit/s QBSS stream before and after
    the introduction of a 10 Mbit/s best-effort stream
  • The full results of these tests can be viewed at
    http://www.transpac.org/qbss-html

12
Learnings
  • Juniper will only use the WRR queuing profile
    when a circuit is saturated.
  • Because you cannot configure WRR queuing on a
    per-VC basis, this has wide implications for
    small VCs.
  • For example, a policed 10 Mbit/s VC will never
    fill an entire OC-3 ATM PIC (see the arithmetic
    after this list).
  • QBSS traffic on that VC will not be serviced any
    differently than best-effort traffic unless the
    circuit is filled up with traffic from other
    configured VCs.
  • One way to get around this is to generate traffic
    on the other VCs to artificially fill up the
    circuit, although this is highly undesirable.
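  • (Rough arithmetic, not on the original slide: a
    policed 10 Mbit/s VC can drive the 155 Mbit/s OC-3
    to at most about 6 to 7 percent utilization, so the
    interface never saturates and the WRR weights never
    engage for that VC's QBSS traffic.)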

13
Learnings (cont.)
  • Fortunately, Juniper does have a way around this!
  • There is a hidden command in JunOS that will force
    all traffic on an interface, regardless of link
    saturation, to be subject to the WRR profile:
        set chassis fpc <x> pic <y> transmit-buffers <n>
  • It is also important to set queue lengths for the
    individual VCs on the circuit to make sure that a
    small VC doesn't eat up all the buffer space when
    delaying QBSS traffic.

14
Learnings (cont.)
  • Expedited traffic must be placed in queue 0
  • When swapping the best-effort and QBSS queues
    around (i.e., QBSS in queue 0 and best-effort in
    queue 1, roughly as sketched after this list),
    QBSS traffic did not yield to best-effort traffic
  • Instead, the two streams evenly split the circuit,
    with each flow getting half of the bandwidth
  • We spoke at length with Juniper about this
    problem. Unfortunately, we had to return the M5
    before it was resolved.
  • It isn't crippling, but it is something to keep
    in mind
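  • (Reconstruction, not shown on the slides: the swapped
    test described above would have looked roughly like
    this; the map name and the omitted bit patterns are
    placeholders.)
        precedence-map QBSS-swapped
            bits 000 output-queue 1   <- best-effort traffic moved to queue 1
            bits 001 output-queue 0   <- QBSS traffic moved to queue 0
        weighted-round-robin
            output-queue 0 weight 1    <- intended QBSS minimum
            output-queue 1 weight 98   <- intended best-effort share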

15
TransPAC Implementation
  • Thanks to the APAN NOC, TransPAC is now enabled
    for QBSS transmissions, using the WRR profile
    shown earlier.
  • Initial tests have proven positive, but more
    testing is needed before the GRAPE transfers are
    initiated.

16
Future Directions
  • Results must be shared with the QoS working
    group's QBSS mailing list
  • More complete testing of the queue 0 problem is
    needed
  • We also need to test TCP performance
  • IU has recently reacquired the hardware to
    complete such tests. Results will be shared.

17
Questions? Comments?