Transcript and Presenter's Notes

Title: Overview


1
(No Transcript)
2
Overview
  • Introduction to TransPAC
  • Chronological list of events for 2001
  • Link status
  • TransPAC measurement efforts
  • TransPAC concerns
  • Introduction to Global Research NOC

3
TransPAC events for 2001
  • Chris Robb was hired as the TransPAC Network
    Engineer.
  • Email: chrobb@indiana.edu
  • Chris is deeply involved in application
    monitoring and analysis.
  • NLANR/Joint Techs meeting in Hawaii January 2001,
    jointly sponsored by TransPAC
  • APAN Hawaii meeting

4
TransPAC Introduction
  • TransPAC is part of the HPIIS program jointly
    funded by the NSF and the JST.
  • Provide a high-performance network between the
    Asia-Pacific region and the US.
  • Encourage collaborative science between AP and
    US.
  • Set to terminate October 2003.
  • Webpage - http://www.transpac.org

5
TransPAC events for 2001
  • TransPAC meeting in Tokyo in conjunction with
    IWS2001 in February 2001
  • TransPAC newsletter debuted May 2001
  • TransPAC activity with the GRAPE group
    (Univ.-Tokyo) and QBSS
  • More about GRAPE and QBSS later
  • The TransPAC Annual Report was submitted to the
    NSF
  • Copies available soon

6
Coming TransPAC events for 2001
  • TransPAC/AIST(Japan) will sponsor a mini-Grid
    workshop in Tokyo in October 2001.
  • The workshop is co-organized by the GGF.
  • TransPAC link upgrade
  • More about link upgrade later

7
TransPAC QBSS "Scavenger Service"
  • What is Scavenger Service (QBSS)?
  • Why use Scavenger Service?
  • QBSS on TransPAC
  • Scavenger Availability
  • Juniper Implementation
  • Juniper Testing
  • What we Learned
  • Future QBSS Directions

8
What is Scavenger Service?
  • Less than best-effort service
  • The opposite of premium service
  • Traffic tagged with the Scavenger Diffserv
    Codepoint (DSCP) gets the lowest-priority delivery
    when a circuit is full (see the sketch after this
    list)
  • Always receives at least 1% of the WRR queue servicing
  • QBSS traffic can use up the entire circuit when
    best effort traffic is not present
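
  As an illustration of how an application can opt in, here is a minimal
  Python sketch (not from the presentation) of marking a socket with the
  Scavenger code point; the destination host and port are hypothetical.

    import socket

    # QBSS/Scavenger is DSCP 001000 (decimal 8); as a raw IPv4 TOS byte this is
    # 8 << 2 = 0x20, i.e. IP precedence bits 001 (the value the Juniper
    # precedence map shown later keys on).
    QBSS_TOS = 0x20

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, QBSS_TOS)  # mark all packets from this socket
    sock.sendto(b"bulk transfer data", ("archive.example.org", 9000))  # hypothetical destination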

9
Why use Scavenger Service?
  • Allows researchers and individuals to be good
    network citizens with little effort
  • Large transfers no longer need to be scheduled
    around the peak usage periods
  • Rumors of universities tagging all dorm traffic
    with Scavenger marking

10
Scavenger Service on TransPAC
  • The need to implement QBSS on the TransPAC
    circuit came about as a result of collaborations
    with a group of researchers from Indiana
    University and the GRAPE group at the University
    of Tokyo.
  • The GRAPE is a special-purpose computer for
    performing N-body gravitational computations
    which are critical to the understanding of such
    large scale phenomena as the formation and
    evolution of star clusters.

Jun Makino with GRAPE-6
11
Scavenger Service on TransPAC (cont.)
  • Indiana University will provide an HPSS (High
    Performance Storage System) to archive the data
    for analysis by researchers.
  • This data needs to be sent across the TransPAC
    OC-3 in a fashion that will not cause any
    significant network impact.
  • A locally developed software system (Proxy
    system) will tag traffic and transfer data to the
    HPSS using libHPSS.
  • This project is slated to begin transfers very
    soon.

12
Where is Scavenger Available?
  • Abilene backbone
  • Cisco implementation using Weighted Round Robin
    queue servicing
  • TransPAC circuit
  • TransPAC focuses on a Juniper implementation
  • Any institution can implement Scavenger service
    on their network

13
Juniper Implementation
  • Both sides of the TransPAC link use Juniper
    routers
  • A Juniper implementation of Scavenger Service
    needed to be created and tested
  • Juniper proposed an implementation and the Global
    NOC was charged with verifying its functionality

14
Juniper Test Setup
  • Juniper donated an M5 for general Global NOC
    testing in June of 2001
  • The test setup was as follows (the setup diagram
    is not reproduced in this transcript)
  • Both workstations were Linux-based. JunOS 4.3 was
    used on the M5

15
Juniper Test Setup (cont.)
  • Juniper does not yet have DSCP support, so TOS
    bits were used instead. The I2 QBSS
    implementation accounts for this and Juniper has
    said that DSCP support is forthcoming.
  • A precedence map was created to place the tagged
    packets into their appropriate queues. This
    remained fairly constant throughout the testing:

    precedence-map QBSS-test {
        bits 000 output-queue 0;   <- Best Effort traffic
        bits 001 output-queue 1;   <- QBSS-tagged traffic
        bits 101 output-queue 0;
        bits 010 output-queue 0;
        bits 011 output-queue 2;   <- Premium traffic
        bits 100 output-queue 0;
        bits 110 output-queue 3;   <- Network Management traffic. Juniper advises
        bits 111 output-queue 3;   <- not to change the queue assignment for these
    }

16
Juniper Test Setup (cont.)
  • After the traffic is placed in the queue, a WRR
    profile is created for the individual interfaces.
  • The following WRR profile was created to service
    the outgoing ATM interface:

    weighted-round-robin {
        output-queue 0 weight 98;   <- best-effort traffic
        output-queue 1 weight 1;    <- QBSS gets a minimum of 1% of the circuit
        output-queue 2 weight 0;    <- premium service receives no servicing
        output-queue 3 weight 1;    <- network management traffic must have some servicing
    }
  • This configuration would change, depending on
    what we wanted to test. In the above
    configuration, we weren't testing premium service.

17
Juniper Test Setup (cont.)
  • A patched version of iPerf that allowed for TOS
    tagging was used to generate multiple streams
    from the GigE NIC towards the ATM NIC.
  • Most of the tests involved running 200Mbit UDP
    streams of one type, and then introducing another
    stream of the other type
  • The results were logged by iPerf at one-second
    intervals, providing a picture of overall traffic
    behavior during each test (a rough sketch of this
    methodology follows).
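
  To illustrate the methodology, here is a simplified Python sketch (not the
  actual patched iPerf) that sends a TOS-marked, rate-paced UDP stream and
  reports throughput once a second; the destination, port, and rate below are
  placeholders, not the real test parameters.

    import socket
    import time

    def run_udp_stream(dest, tos, rate_mbit, duration_s, payload=1200):
        """Send a TOS-marked, rate-paced UDP stream; print Mbit/s once a second."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)   # e.g. 0x20 for QBSS
        data = b"\x00" * payload
        bytes_per_sec = rate_mbit * 1e6 / 8
        start = time.time()
        sent_total = 0
        last_report, sent_in_window = start, 0
        while (now := time.time()) - start < duration_s:
            # Send only as many bytes as the target rate allows so far
            # (crude pacing; busy-waits for simplicity).
            if sent_total < (now - start) * bytes_per_sec:
                sock.sendto(data, dest)
                sent_total += payload
                sent_in_window += payload
            if now - last_report >= 1.0:                         # one-second reporting interval
                print(f"{sent_in_window * 8 / 1e6:.1f} Mbit/s")
                last_report, sent_in_window = now, 0

    if __name__ == "__main__":
        # Example: a QBSS-marked (TOS 0x20) stream aimed at 200 Mbit/s for 60 seconds.
        run_udp_stream(("receiver.example.net", 5001), 0x20, 200, 60)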

18
Test Results
  • The overall results were positive.
  • The following graph shows the results of one
    particular test
  • At 0s, a 200Mb QBSS stream was run. At 10s, a
    best-effort stream was introduced, pushing down
    the QBSS stream to about 1 percent of the
    available bandwidth. At 40s, the BE stream was
    removed and the QBSS stream reclaimed the pipe.

19
Test Results (Cont.)
  • Here is another graph showing a 200Mb QBSS stream
    before and after the introduction of a 10Mbit
    best-effort stream
  • The full results of these tests can be viewed at
  • http://www.transpac.org/qbss-html

20
What we learned
  • Juniper will only use the WRR queuing profile
    when the circuit is saturated.
  • Because you cannot configure WRR queuing on a
    per-VC basis, this has wide implications for
    small VCs.
  • For example, a policed 10Mbit VC will never fill
    an entire OC3-ATM PIC.
  • QBSS traffic on that VC will not be serviced
    differently than best-effort traffic unless the
    entire circuit is full.
  • One ugly way around this is to inject artificial
    traffic.

21
What we learned (Cont.)
  • Fortunately, Juniper does have a way around this.
  • There is a hidden command in the JunOS that will
    force all traffic on an interface, regardless of
    the link saturation, to be subject to the WRR
    profile
  • set chassis fpc <x> pic <y> transmit-buffers <n>
  • It is also important to set queue lengths for the
    individual VCs on the circuit to make sure that a
    small VC does not eat up all the buffer space
    when delaying QBSS traffic.

22
What we learned (Cont.)
  • Expedited traffic must be placed in queue 0
  • When swapping the best effort and QBSS queues
    around (i.e. QBSS in queue 0 and best-effort in
    queue 1), QBSS traffic did not yield to
    best-effort traffic
  • Instead, they evenly split the circuit, with both
    flows getting half of the bandwidth
  • We spoke at length with Juniper about this
    problem. Unfortunately, we had to return the M5
    before this was resolved.

23
TransPAC Implementation
  • Thanks to the APAN NOC, TransPAC is now enabled
    for QBSS transmissions, using the WRR profile
    shown earlier.
  • Initial tests have proven positive, but more
    testing is needed before the GRAPE transfers are
    initiated.

24
Future QBSS Directions
  • In order to more fully test the queue 0 problem,
    we have asked Juniper for another test router
  • They have promised to get us an evaluation M20
    for further testing. The results from the new
    batch of tests will be posted on the web page.

25
TransPAC Link Status
  • The TransPAC network runs from the Tokyo XP to
    the STAR TAP in Chicago.
  • TransPAC began as a 35Mbps connection in June
    1998. Link upgraded 4 times.
  • TransPAC was upgraded from 100Mbps to 155Mbps
    (OC3) in November 2000.

26
(No Transcript)
27
TransPAC Link Status (Cont.)
  • This is not an official announcement
  • Current negotiations with vendors promise more
    bandwidth for less money.
  • POS (northern route) and ATM (southern route)
  • TransPAC will continue to have a presence in
    Chicago (Star Light)
  • TransPAC will most likely have a presence in
    Hawaii
  • TransPAC will most likely have a presence on the
    west coast (Seattle)

28
(No Transcript)
29
TransPAC Measurement
  • Classification
  • Metrics
  • Measurements
  • TransPAC tools
  • OCxmon
  • Data archive

30
Classification
  • Advanced Network Services
  • Bioinformatics/Biology
  • Computer, Information Science
  • Education
  • Engineering
  • Geosciences
  • Math, Physical Sciences
  • Polar Research
  • Social Behavioral Sciences

31
Metrics
  • Bandwidth of the physical/network link.
  • Identify and profile individual network
    applications.
  • Histogram of bandwidth utilization per
    application (see the sketch after this list).
  • Histogram of latency/jitter per application.
  • Histogram of queue size of a router.
  • Verify QoS.
  • Realtime application monitoring.
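
  As a rough illustration of the "bandwidth per application" metric, here is a
  minimal Python sketch (not from the presentation) that buckets flow bytes by
  time bin and application; the flow-record format and the port-to-application
  table are simplified assumptions.

    from collections import defaultdict

    # Hypothetical port-to-application mapping; real classification is richer.
    APP_BY_PORT = {80: "http", 22: "ssh", 20: "ftp-data", 119: "nntp"}

    def bandwidth_histogram(flows, bin_seconds=300):
        """Bucket flow bytes into (time bin, application) cells.

        `flows` is assumed to be an iterable of (timestamp, dst_port, byte_count)
        tuples, a simplified stand-in for exported flow records."""
        hist = defaultdict(int)
        for ts, dport, nbytes in flows:
            app = APP_BY_PORT.get(dport, "other")
            time_bin = int(ts // bin_seconds) * bin_seconds
            hist[(time_bin, app)] += nbytes
        return hist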

32
TransPAC Tools
  • General purpose Linux box, sleuth.transpac.org
  • Current tools
  • Traceroute from STAR TAP
  • Traceroute from Tokyo XP
  • Reverse traceroute
  • Pinger node
  • MRTG
  • FlowScan
  • NetFlow tools
  • TransPAC Weather Map
  • OC3mon

33
TransPAC Tools (cont.)
  • Future measurement activities include
  • Mirror monitoring over northern and southern
    routes.
  • Realtime application monitoring (OCXmon).
  • Juniper SNMP bins
  • Making locally developed software available.

34
MRTG
  • The solid green area represents traffic coming
    from APAN
  • The blue line represents traffic headed toward
    APAN
  • The figure below is the daily graph for June 20,
    2001.
  • The figure below is the yearly graph for May 00
    to June 01.

35
OC3mon
  • Installed OC3mon on TransPAC
  • OC3mon is located at STAR TAP
  • Data is being collected locally
  • Currently working with DAG tools
  • Developing realtime application monitor
  • System in place to archive data to HPSS using
    proxy system

36
FlowScan
  • The figure below shows aggregated traffic
    breakdown by protocol on the Tokyo TransPAC
    router.
  • The figure below shows aggregated traffic
    breakdown by service on the Tokyo TransPAC router.

37
NetFlow tools
  • Suite of NetFlow based tools
  • Juniper only provides sampled NetFlow data
  • Define flows using OCXmon trace data (a sketch of
    this step follows)
  • Implement the above tool suite using trace data
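
  As a rough illustration of the flow-definition step, here is a minimal
  Python sketch (not from the presentation) that folds packet records into
  5-tuple flows; the record format is an assumed simplification of OCXmon
  trace data.

    from collections import defaultdict

    def aggregate_flows(packets):
        """Fold packet records into flows keyed by the classic 5-tuple.

        `packets` is assumed to be an iterable of dicts with src, dst, sport,
        dport, proto, ts and length keys -- a simplified stand-in for OCXmon
        trace records; real trace formats and idle-timeout handling differ."""
        flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "first": None, "last": None})
        for p in packets:
            key = (p["src"], p["dst"], p["sport"], p["dport"], p["proto"])
            f = flows[key]
            f["packets"] += 1
            f["bytes"] += p["length"]
            if f["first"] is None:
                f["first"] = p["ts"]
            f["last"] = p["ts"]
        return flows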

38
TransPAC Weather Map
39
Data Archive
  • We need help to determine archiving policies.
  • Should raw data be archived?
  • How long to keep data?
  • Who do we turn to?
  • Would like to hear war stories.

40
TransPAC concerns
  • Equity between the northern and southern routes.
  • Trace data archival.
  • Put tools in the hands of the users.
  • Develop tools that anyone can use.
  • Legally distribute locally developed software.
  • Grant expires October 2003.

41
Global Research NOC
  • Global NOC Overview
  • Global NOC Plans
  • Footprints

42
Global NOC Overview
  • 7 x 24 x 365 Operations
  • Trouble ticket reporting
  • 15 staff
  • 1 manager, 4 leads, 7 operators, 3 hourly staff
  • Dedicated TransPAC and Global support
  • Also supported by Abilene and IU infrastructure
  • (facilities, engineering, Web page development,
    administration)

43
Global NOC Networks
  • STAR TAP/Euro-Link/TransPAC Networks
  • Abilene (Internet2)
  • MIRnet (to Russia)
  • AMPATH (to South America)
  • IU campus networks

44
New Trouble Ticket System
  • In June 2001, the NOC rolled out a new trouble
    ticket system called Footprints. This system
    provides advanced functionality for
  • Tracking trouble tickets with mandatory next
    action descriptions and time stamp.
  • Automatic escalation of tickets based on defined
    criticalities and time stamps. Automated
    notification via email and paging system is
    incorporated.
  • Advanced weekly network availability reports that
    include breakdown on outage types, along with
    trend analysis, etc.

45
Footprints
  • Trouble Ticket summary updates on the TransPAC
    NOC web page are now available. This is for the
    general public to view. The report will be
    updated twice an hour. The information provided
    is
  • ticket title and number
  • date created
  • ticket priority
  • status and type
  • short summary of what the ticket is about
  • URL - http://noc.transpac.org/noc.html

46
Footprints (Cont.)
  • The TransPAC NOC will open its trouble ticket
    system up to the APAN NOC by sharing a joint
    project in Footprints. This will allow both
    NOCs to share trouble tickets, thereby
    increasing the level of communication between
    both parties.
  • Access to selected information in the system via
    a "Group User" interface for those in TransPAC
    network administration and the NSF. This will be
    done through group user accounts into the system.
  • Both of these developments will be complete by
    early Fall, 2001.

47
Footprints (Cont.)
  • The benefits of the new Footprints trouble ticket
    system will be an improvement in user
    functionality and automation
  • Considerable network and trouble ticket
    information is now available to the APAN NOC,
    TransPAC administration, and the NSF

48
Global NOC Plans
  • Continue developing Trouble Ticket system
  • Direct contacts for longer outages
  • More emphasis on performance monitoring
  • STAR TAP weather map

49
Questions and Comments
  • John Hicks
  • jhicks@iu.edu