1
ABwE: Available Bandwidth Estimator
Jiri Navratil, R. Les. Cottrell
Stanford Linear Accelerator Center (SLAC), 2575 Sand Hill Road, Menlo Park, California 94025
jiri@slac.stanford.edu, cottrell@slac.stanford.edu

2
ABwE Available Bandwidth Estimator
  • Introduction (motivation, needs,..)
  • Basic principles
  • Path characteristics and examples of packet-pair
    dispersion delays
  • Bandwidth estimation
  • ABwE versus Iperf
  • Conclusions

3
Introduction 1
  • The HEP community is increasingly dependent on
    networking as collaboration grows (it needs to
    transfer huge amounts of data between experimental
    sites such as SLAC, CERN, etc. and home institutes
    spread around the world)
  • Our main task is to provide physicists with
    reliable access to the network (an integral
    part of this activity is NETWORK MONITORING)
  • We have several monitoring systems in operation
    (active, such as ping or iperf, and passive, reading
    SNMP counters or using NetFlow data)

4
Introduction 2
  • Network administrators and users need to know the
    RTT, losses, routing path, and estimates of the
    available bandwidth to our partners
  • Currently we have such information only for limited
    sampling periods. The big question is: do we have
    valid information if we measure once per 90 minutes,
    and can we run tools such as Iperf or transfer test
    files more frequently? Probably not.
  • We need a tool that can be used in continuous
    mode, 24 hours a day, 7 days a week, and that can
    quickly and non-intrusively detect changes on
    multiple paths

5
Specification
  • A tool based on dispersion techniques that does not
    pollute the Internet (or overload the entry point to
    the Internet) with a huge number of test packets
  • It should get the result for one path within a few
    seconds and produce results that can easily be
    post-processed by graphical tools or fed into other
    systems (prediction, warning, etc.)
  • Easily configurable and manageable from one site
  • We evaluated several tools using dispersion
    techniques, but none of them in their current
    implementation met our demands (some of them were
    slow, some failed on high-capacity paths, and some
    were just technically too complicated)

6
Basic principles of ABwE
  • ABwE is based on the simplest way of probing
    (using only packet pairs)
  • Evaluation is based on a detailed technical
    analysis of how the packets pass through queuing
    devices
  • The complete path is a cascade of queuing devices
    with different capacities
  • The probing packets become separated even if there
    is no cross traffic
  • The final dispersion of PP1 and PP2 is the result
    of the superposition of many factors

7
How we measure the dispersion time
  • Using the Netdyn package (from the University of
    Maryland, 1991)
  • 20 packet-pair probes for each path
  • Probes are repeated with a period of 20 msec, once
    a minute per path. A set of 20 probes is called a
    bunch; the bunch is evaluated as one statistical set
    of measurements (a minimal sketch of this probing
    scheme follows below).

[Figure: sending a bunch of packet pairs - a Linux host sends PP1 at t1 and PP2 at t2 through the NIC; the pair leaves the NIC separated by Lpp/C, and successive pairs in a bunch are spaced by dt_send (7-25 ms)]
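To make this concrete, here is a minimal Python sketch of packet-pair probing. It is not the Netdyn code that ABwE actually uses, and the receiver host, port and packet size are hypothetical; it only illustrates sending 20 back-to-back UDP pairs per bunch and timestamping the arrivals to obtain the per-pair dispersion.

# Minimal packet-pair probing sketch (illustrative only, not ABwE's Netdyn code).
import socket
import time

RECEIVER = ("probe-receiver.example.org", 9000)   # hypothetical receiver host/port
L_PP = 1000                                       # probe packet length in bytes (assumption)
PAIRS_PER_BUNCH = 20                              # one bunch = 20 packet pairs
PAIR_SPACING_S = 0.020                            # roughly 20 ms between pairs

def send_bunch(sock: socket.socket) -> None:
    """Send one bunch: 20 pairs of back-to-back UDP packets."""
    for seq in range(PAIRS_PER_BUNCH):
        header = seq.to_bytes(4, "big")
        payload = header + bytes(L_PP - len(header))
        sock.sendto(payload, RECEIVER)            # PP1
        sock.sendto(payload, RECEIVER)            # PP2, sent back to back
        time.sleep(PAIR_SPACING_S)

def receive_bunch(sock: socket.socket) -> list[float]:
    """Timestamp arrivals and return the dispersion Td (seconds) of each pair."""
    first_arrival: dict[int, float] = {}
    dispersions: list[float] = []
    for _ in range(2 * PAIRS_PER_BUNCH):
        data, _ = sock.recvfrom(65535)
        now = time.monotonic()
        seq = int.from_bytes(data[:4], "big")
        if seq in first_arrival:                  # second packet of the pair arrived
            dispersions.append(now - first_arrival.pop(seq))
        else:
            first_arrival[seq] = now
    return dispersions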
8
ABwE Basic principles (The simple linear
situations)
[Figure: the simple linear situation on a single hop - the probe sender S emits PP1 and PP2 with dispersion Td-send; cross-traffic (CT) packets arriving at the hop can be queued between them, so the receiver R timestamps the pair with a larger dispersion Td-receive; the round-trip time rtt is also marked]

PPD = Td-receive - Td-send  (Td-receive > Td-send)

The real ratio between commonly used long and short packets

Dynamic Dispersion Delay
9
(No Transcript)
10
Detailed timing for packet pairs along an experimental
path (stretching, compressing and contracting)
[Figure: a packet pair traversing the experimental path S - H1 - H2 - H3 - H4 - R over links of 155, 622, 622, 1000 and 1000 Mbps, with cross-traffic entering at each hop; the pair is stretched at the output of H2 and contracted at the output of H3, the free space between PP1 and PP2 is filled by cross-traffic, and the dispersions Td23 (= LPP/C23) and Td34 determine the final dispersion Td-receive]

Static Dispersion Delay
11
(Td)
12
NTT - Normalized Transfer Time
13
What type of traffic we can expect on the path
14
ABwE Narrow Band hop characteristics
[Figure: Example 1, the stretching/absorbing Td effect at a narrow-band hop - the pair arrives at H2 over a 622 Mbps link with input dispersion Td12 and leaves towards H3 over a 155 Mbps link with dispersion Td23; (a) different input values of Td12 give the same output Td23, because cross-traffic that fits into the input gap is absorbed while the pair is stretched by the slower output link; (b) Td is not changed]
15
The principle of gradually narrowing bandwidth
[Figure: light-beam analogy - a light source shines through apertures representing link capacities of 1000, 1000, 622, 622, 622 and 155 Mbps; only the narrowest aperture (155) has an impact on the beam, the wider ones have no impact]
  • Remarks
  • Fully valid and easily applicable for continuous
    streams or for data with a strong source
  • Not easily applicable to the immediate situation on
    a path with Poisson traffic or heavy bursty traffic
    with light periods

16
ABwE Narrow Band hop characteristics
[Figure: Example 2, the multiplication effect - the pair arrives at H2 over a 622 Mbps link with dispersion Td12, is stretched by the 155 Mbps output link to Td23 (no cross-traffic from H2 itself), and at H3 the cross-traffic packets coming from H2 are inserted between PP1 and PP2, so the output dispersion grows to Td3 = n x Td23]
17
(No Transcript)
18
Multiplication factor in Td
19
Example of very low NB-Narrow Band
20
Example of superposition on Low NB
[Plot: example of superposition on a low narrow band, a 16 - 100 - 622 Mbit/s path; two Tdmin levels are marked, at 726.6 and 551.0, together with an n x 64 Mbit/s label]
21
Traceroute Graph of Monitoring paths
According to the previous paragraphs, each path has a
Static Dispersion Delay, given by the hops' I/O
capacities in the tree, and a Dynamic Dispersion Delay
caused by CT
22
Can we convert Td to bandwidth (Capacity) ?
What does it represent ?
We know that Td ~ ρ (the utilization factor): if ρ grows,
then Td grows. If the relation is linear, then C = K · 1/Td.
If it is non-linear, we have a problem. But we know that we
deal with the bottleneck (narrow-band) hop, which can
- eliminate the previous Td
- replace it by its own Td = Lpacket/C
- queuing starts to play an important role,
so we use a non-linear solution
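As a worked illustration of the linear case, one can take the constant K to be the probe packet length LPP, which is what the later relation Cmax = Lpp/Tdmin amounts to when there is no cross traffic; the numbers below are hypothetical, not measurements from the presentation.

# Worked example of the linear conversion C = K * (1/Td), taking K = L_PP.
L_PP_BITS = 1500 * 8        # probe packet length in bits (1500-byte probe, assumption)
TD_MIN_S = 77.4e-6          # example minimum dispersion in seconds (hypothetical value)

c_max = L_PP_BITS / TD_MIN_S                 # Cmax = Lpp/Tdmin when there is no cross traffic
print(f"C_max = {c_max / 1e6:.0f} Mbit/s")   # prints about 155 Mbit/s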
23
The principle of gradually narrowing bandwidth
[Figure: light-beam analogy of gradually narrowing bandwidth, as on slide 15]
24
The principle of gradually narrowing bandwidth
[Figure: light-beam analogy of gradually narrowing bandwidth, as on slide 15]
We assume that most of the time only one queue dominates
at the instant of our measurement!
25
The principle of gradually narrowing bandwidth
[Figure: light-beam analogy of gradually narrowing bandwidth, as on slide 15]
We assume that most of the time only one queue dominates
at the instant of our measurement!
26
All probing packets PP share the same queue with outside
cross-traffic (this means that the Td caused by queuing
does not depend only on the pkt_lengths of the PP).
Open question: what should be used to estimate the CT
(the average pkt_length, or a pkt_length close to the MTU)?
27
From queuing theory for M/M/1: Tsojourn = (1 + E(N)) · Tservice.
Here we substitute Tsojourn = Td and Tservice = Lp/C, and use
LPP and LCT instead of Lp:

  Tdi = LPP/Ci + E(N) · LCT/Ci,  i.e.  Td = Tdinit + Tdvar      (1)

  Tdjinit = mini(Tdij),  1 ≤ i ≤ 20
  QDFi = (Tdij - Tdjinit) / NTTclass

This allows us to replace E(N) in formula (1) by QDF:

  Tdij = LPP/Ci + QDFi · LCT/Ci

From this we can calculate Ci for each singleton in one bunch j:

  Ci = (LPP + QDFi · LCT) / Tdij                                 (2)
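A minimal sketch of how formula (2) could be applied to one bunch of dispersion samples follows; the packet lengths and the NTTclass value used here are assumptions for illustration, not values taken from ABwE.

# Sketch of formula (2): Ci = (L_PP + QDF_i * L_CT) / Td_i, applied to one bunch.
L_PP = 1000 * 8          # probe packet length in bits (assumption)
L_CT = 1500 * 8          # assumed cross-traffic packet length in bits (the open question above)

def estimate_capacities(td_samples: list[float], ntt_class: float) -> list[float]:
    """Return one capacity estimate Ci (bit/s) per singleton in the bunch.

    td_samples : the 20 dispersions Td_i (in seconds) measured in one bunch
    ntt_class  : normalized transfer time used to scale QDF for this capacity class
    """
    td_init = min(td_samples)                       # Td_init = min_i(Td_i)
    estimates = []
    for td in td_samples:
        qdf = (td - td_init) / ntt_class            # QDF_i = (Td_i - Td_init) / NTT_class
        estimates.append((L_PP + qdf * L_CT) / td)  # formula (2)
    return estimates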
28
Graphical interpretation of the formula
Ci = LPP/Tdi + QDFi · LCT/Tdi
Cmax = Lpp/Tdmin  (when there is no CT, QDF = 0)

[Plot: the resulting capacity estimates Ci in Mbit/s plotted against time in seconds]
29
(No Transcript)
30
(No Transcript)
31
EWMA filtering characteristics: avgi = (1 - a) · yi + a · avgi-1
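A minimal sketch of this EWMA filter follows; the smoothing constant a below is an assumed value chosen only for illustration, not the value ABwE uses.

# EWMA filter as on the slide: avg_i = (1 - a) * y_i + a * avg_{i-1}.
def ewma(samples: list[float], a: float = 0.75) -> list[float]:
    """Smooth a sequence of bandwidth estimates; 'a' controls the memory (assumed value)."""
    smoothed: list[float] = []
    avg = None
    for y in samples:
        avg = y if avg is None else (1.0 - a) * y + a * avg
        smoothed.append(avg)
    return smoothed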
32
(No Transcript)
33
Monitoring sites (average values)
34
Iperf versus ABwE (a few unclear points)
  • How to configure Iperf to achieve maximum
    performance in a changing environment
    (difference 10 - 100)
  • Limitation at the entry points to the Internet
    (SLAC 622 Mbit/s, customer load 10 - 40)
  • Machine performance (400-550 Mbit/s)
  • Iperf aggressiveness (it suppresses the bandwidth of
    other running applications and reports everything
    that Iperf transferred)
  • Synchronization problem, to avoid dependency

35
ABwE compared with Iperf
36
ABwE compared with Iperf
37
ABwE compared with Iperf
38
(No Transcript)
39
ABwE compared with Iperf
40
ABwE compared with Iperf
41
(No Transcript)
42
(No Transcript)
43
Conclusions
  • We have demonstrated several network analyses and
    a new method for monitoring ABw and bottleneck
    capacity in the range of several Mbit/s to 1000 Mbit/s
  • ABwE is a non-intrusive method that can be run in
    continuous mode, 24 hours a day, 7 days a week
  • It can detect changes in the path capacity caused by
    heavy traffic and also discover dramatic changes in
    routing. The usefulness of ABwE has been proven
    several times since last summer
  • Unfortunately, ABwE does not yet exist as a publicly
    available tool