1
Introduction
INCITE
  • Jiří Navrátil
  • SLAC

2
Project Partners and Researchers
INCITE: Edge-based Traffic Processing and
Service Inference for High-Performance Networks
Richard Baraniuk, Rice University; Les Cottrell,
SLAC; Wu-chun Feng, LANL
  • Rice University: Richard Baraniuk, Edward
    Knightly, Robert Nowak, Rudolf Riedi, Xin Wang,
    Yolanda Tsang, Shriram Sarvotham, Vinay Ribeiro
  • Los Alamos National Lab (LANL): Wu-chun Feng,
    Mark Gardner, Eric Weigle
  • Stanford Linear Accelerator Center (SLAC): Les
    Cottrell, Warren Matthews, Jiří Navrátil
3
Project Goals
  • Objectives
  • scalable, edge-based tools for on-line network
    analysis, modeling, and measurement
  • Based on
  • advanced mathematical theory and methods
  • Designed to
  • support high-performance computing
    infrastructures, such as computational grids,
    ESnet, Internet2, and other high-performance
    networking projects

4
Project Elements
  • Advanced techniques
  • from networking, supercomputing, statistical
    signal processing, applied mathematics
  • Multiscale analysis and modeling
  • understand causes of burstiness in network
    traffic
  • realistic, yet analytically tractable,
    statistically robust, and computationally
    efficient modeling
  • On-line inference algorithms
  • characterize and map network performance as a
    function of space, time, application, and
    protocol
  • Data collection tools and validation experiments

5
Scheduled Accomplishments
  • Multiscale traffic models and analysis techniques
  • based on multifractals, cascades, wavelets
  • study how large flows interact and cause bursts
  • study adverse modulation of application-level
    traffic by TCP/IP
  • Inference algorithms for paths, links, and
    routers
  • multiscale end-to-end path modeling and probing
  • network tomography (active and passive)
  • Data collection tools
  • add multiscale path, link inference to PingER
    suite
  • integrate into ESnet NIMI infrastructure
  • MAGNeT Monitor for Application-Generated
    Network Traffic
  • TICKET Traffic Information-Collecting Kernel
    with Exact Timing
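
The multiscale analysis mentioned above can be illustrated with a toy variance-time sketch, a close cousin of the wavelet techniques named on this slide: aggregate a traffic trace over dyadic block sizes and watch how the renormalised variance decays with scale. Everything below is synthetic and illustrative, not the project's MATLAB code.

```python
# Toy sketch: multiscale (dyadic) aggregation of a traffic trace, as used
# in variance-time / wavelet analysis of burstiness. The trace here is
# synthetic; in practice it would come from a packet capture.
import random

random.seed(0)
trace = [random.randint(0, 10) for _ in range(1024)]  # bytes per bin (synthetic)

def aggregate(series, m):
    """Sum the series over non-overlapping blocks of length m (one scale)."""
    return [sum(series[i:i + m]) for i in range(0, len(series) - m + 1, m)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

# Variance of the aggregated (and renormalised) series at dyadic scales:
for m in [1, 2, 4, 8, 16, 32]:
    agg = [x / m for x in aggregate(trace, m)]
    print(m, round(variance(agg), 3))
```

For short-range-dependent traffic like this synthetic trace, the normalised variance decays roughly as 1/m; long-range-dependent traffic decays more slowly, which is exactly the burstiness signature the multiscale models target.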

6
Future Research Plans
  • New, high-performance traffic models
  • guide R&D of next-generation protocols
  • Application-generated network traffic repository
  • enable grid and network researchers to test and
    evaluate new protocols with actual traffic
    demands of applications rather than modulated
    demands
  • Multiclass service inference
  • enable network clients to assess a system's
    multi-class mechanisms and parameters using only
    passive, external observations
  • Predictable QoS via end-point control
  • ensure minimum QoS levels to traffic flows
  • exploit path and link inferences in real-time
    end-point admission control

7
There is no vacuum
  • Measurement infrastructures: Surveyor, NIMI,
    PingER, RIPE
  • Commercial network management: Optivity,
    CiscoWorks, Spectrum, HP OpenView
8
(No Transcript)
9
(No Transcript)
10
(No Transcript)
11
(No Transcript)
12
JNFLOW
Cisco NetFlow
13
From Papers to Practice (FPP phase)
  • MWFS, TOMO, TOPO

14
(No Transcript)
15
(No Transcript)
16
(No Transcript)
17
(No Transcript)
18
(No Transcript)
19
[Probing-scheme diagram: 20 ms and 300 ms packet
spacings; 40 trains (T) for a new set of values,
i.e. 40 × 300 ms = 12 sec]
20
(No Transcript)
21
First results
BWe = 9.875 Mbps for 10 Mbps Ethernet
CT-Graph
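
The BWe value above is the kind of figure packet-dispersion techniques produce: a bottleneck link of capacity C spaces back-to-back packets of size L by L/C seconds, so C can be recovered as L/Δt. A minimal sketch with invented dispersion samples; taking the minimum gap to suppress cross-traffic queueing noise is one common heuristic, not necessarily the exact method used here.

```python
# Sketch: bottleneck-bandwidth estimate from packet-pair dispersion.
# A pair of back-to-back packets of size L leaves the bottleneck link
# (capacity C) separated by about L/C seconds, so C ~ L / dt.
# The dispersion samples below are invented for illustration.

PKT_SIZE = 1500 * 8  # probe packet size in bits

def bw_estimate(dispersions_s):
    """Point estimate of capacity in bits/s: the minimum dispersion
    (least inflated by cross-traffic queueing) gives the estimate."""
    return PKT_SIZE / min(dispersions_s)

# Hypothetical measured inter-arrival gaps (seconds) on a 10 Mbps Ethernet:
samples = [0.001215, 0.001302, 0.001198, 0.001501]
print(bw_estimate(samples) / 1e6, "Mbps")  # close to 10 Mbps
```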
22
(No Transcript)
23
(No Transcript)
24
What has been done
  • Phase 1 - Remodeling
  • - Code separation (BW and CT)
  • - Find how to call MATLAB from another program
  • - Analyze results and data
  • - Find optimal parameters for the model
  • Phase 2
  • - Web publishing of the BW estimates

25
Data Dispersions from sunstats.cern.ch
26
ccnsn07.in2p3.fr
sunstats.cern.ch
pcgiga.cern.ch
plato.cacr.caltech.edu
27
pcgiga.cern.ch, default window size (WS): BW 70 Mbps
pcgiga.cern.ch, WS 512K: BW 100 Mbps
28
Reaction to the network problems
29
(No Transcript)
30
Problems ???
network? software? licence?
31
After tuning
More optimistic results
32
(No Transcript)
33
(No Transcript)
34
(No Transcript)
35
MF-CT Features and benefits
  • No need for access to routers!
  • Current systems for monitoring traffic load are
    based on SNMP or NetFlow (which need access to
    routers)
  • Low cost
  • Allows permanent monitoring (20 pkts/sec,
    overhead ~10 Kbytes/sec)
  • Can be used as a data provider for ABW
    prediction (ABW = BW - CT)
  • Weak point for common use:
  • MATLAB code
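
The ABW prediction mentioned above treats available bandwidth as capacity minus cross-traffic (ABW = BW − CT). A trivial sketch of that relation, plus the quoted probe overhead; the 500-byte probe size is an assumption, chosen because it is consistent with 20 pkts/sec amounting to roughly 10 Kbytes/sec:

```python
# Sketch of the ABW relation: available bandwidth is capacity minus
# cross-traffic (ABW = BW - CT). All numeric values are invented.

def available_bw(capacity_mbps, cross_traffic_mbps):
    """Bandwidth left over after cross-traffic, floored at zero."""
    return max(capacity_mbps - cross_traffic_mbps, 0.0)

# Probe overhead quoted on the slide: 20 packets/s; assuming 500-byte
# probes this comes out to the quoted ~10 Kbytes/sec.
probe_overhead_bytes_per_s = 20 * 500

print(available_bw(100.0, 37.5))      # 62.5 Mbps free on a 100 Mbps link
print(probe_overhead_bytes_per_s)     # 10000 bytes/s
```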

36
Future work on CT
  • Verification model
  • Define and set up the verification model (SR)
  • Measurements (S)
  • Analyze results (SR)
  • On-line running on selected sites
  • Prepare code for automation and Web publishing (S)
  • CT-Code modification? (R)

37
MF-CT verification model
[Diagram: the MF-CT Simulator at SLAC sends probes
across the internet to UDP echo hosts at CERN and
IN2P3; SNMP counters along each path provide the
reference load measurements]
38
(No Transcript)
39
(No Transcript)
40
CT RE-ENGINEERING
  • For practical monitoring it would be necessary
    to modify the code so it can run in different
    modes
  • Continuous mode for monitoring one site on a
    large time scale (hours)
  • Accumulation mode (1 min, 5 min, ?) for running
    on more sites in parallel
  • ? Solution without MATLAB ?

41
2 NEW 2Ls
coming soon
42
Rob Nowak (and CAIDA people) say
www.caida.org
This is the Internet
43
Network Topology Identification
Ratnasamy & McCanne (99); Duffield et al.
(00, 01, 02); Bestavros et al. (01);
Coates et al. (01)
Pairwise delay measurements reveal topology
44
Network Tomography
[Diagram: tree from a source through routers/nodes
and links down to the receivers]
Measure end-to-end (from source to receiver)
losses/delays; infer link-level (at internal
routers) loss rates and delay distributions
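
The inference step can be illustrated on the smallest possible tree: one shared link feeding two receivers. With independent Bernoulli losses per link, the shared link's success rate satisfies a0 = P(r1) · P(r2) / P(r1 and r2), the core identity behind MINC-style multicast loss tomography. The simulation below is a toy check of that identity, not the project's code.

```python
# Toy multicast loss tomography on a two-receiver tree:
# source -> shared link (success a0) -> leaf links (a1, a2) -> receivers.
# From end-to-end observations alone: a0 = P(r1) * P(r2) / P(r1 and r2),
# since P(r1) = a0*a1, P(r2) = a0*a2, P(both) = a0*a1*a2.
import random

random.seed(1)
A0, A1, A2 = 0.9, 0.95, 0.8   # true (hidden) per-link success probabilities
N = 200_000

def infer_shared_success(n=N):
    got1 = got2 = both = 0
    for _ in range(n):
        shared = random.random() < A0           # packet survives shared link
        r1 = shared and random.random() < A1    # ...and leaf link 1
        r2 = shared and random.random() < A2    # ...and leaf link 2
        got1 += r1
        got2 += r2
        both += r1 and r2
    return (got1 / n) * (got2 / n) / (both / n)

print(round(infer_shared_success(), 3))  # close to the true A0 = 0.9
```

This is why end-to-end measurements suffice: the internal link's loss rate never has to be read from the router, matching the "no access to routers" theme of the earlier MF-CT slide.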
45
Unicast Network Tomography
Measure end-to-end losses of packets
Cannot isolate where losses occur !
46
Packet Pair Measurements
[Diagram: a measurement packet pair traverses the
path; cross-traffic queued between the two packets
adds delay and spreads them apart]
47
Delay Estimation
Measure end-to-end delays of packet-pairs
48
Packet-pair measurements
  • Key Assumptions
  • fixed routes
  • i.i.d. pair-measurements
  • losses and delays on each link are mutually
    independent
  • packet-pair losses and delays on shared links
    are nearly identical
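
These assumptions are what make the delay inference work: if both packets of a pair see (nearly) the same delay s on the shared links and independent delays x1, x2 on the unshared parts, then d1 = s + x1 and d2 = s + x2, so cov(d1, d2) = var(s) and the shared-path delay variance is recoverable from end-to-end measurements alone. A toy numeric check, with all delay distributions invented:

```python
# Toy check: covariance of two end-to-end packet-pair delays equals the
# delay variance of their shared path, given the independence assumptions.
#   d1 = s + x1, d2 = s + x2  =>  cov(d1, d2) = var(s)
import random

random.seed(2)
N = 100_000
s  = [random.expovariate(1 / 5.0) for _ in range(N)]  # shared-path delay, mean 5
x1 = [random.expovariate(1 / 2.0) for _ in range(N)]  # unshared delays,
x2 = [random.expovariate(1 / 3.0) for _ in range(N)]  # independent of s

d1 = [a + b for a, b in zip(s, x1)]
d2 = [a + b for a, b in zip(s, x2)]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

print(round(cov(d1, d2), 2))  # should be close to var(s) = 25
```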

49
ns Simulation
  • 40-byte packet-pair probes every 50 ms
  • competing traffic comprised of
  • on-off exponential (500 byte packets)
  • TCP connections (1000 byte packets)

[Figure: cross-traffic on link 9, Kbytes/s vs
time (s); test network showing link bandwidths
(Mb/s)]
50
Future work on TM and TP
  • Model in the frame of the Internet (100 sites)
  • Define verification model (SR)
  • Deploy and install code on sites (S)
  • First measurements (SR)
  • Analyze results (form, speed, quantity) (SR)
  • ? Code modification (R)
  • Production model ?
  • Compete with PingER, RIPE, Surveyor, NIMI ?
  • How to unify the VIRTUAL structure with the
    real one?