1
The Narses Application-level Protocol Simulator
  • Mary Baker
  • TJ Giuli
  • Computer Science Department
  • Stanford University
  • http://mosquitonet.stanford.edu

2
Application protocol simulation - background
  • Many large distributed applications
  • Peer-to-peer applications
  • Web caches
  • Content distribution networks
  • New fault-tolerant applications
  • Want to understand application-level protocol
    behavior
  • Short-term behavior
  • Long-term behavior
  • Simulate large systems quickly
  • Many nodes
  • Many flows
  • Over long periods of time

3
Problem
  • Discrete-event packet simulators not fast enough
  • ns, SSFNet, tcpsim, Flowsim, etc.
  • Not intended for large application-level
    simulations
  • Detailed models of network layers for accurate
    timings
  • Must handle many events
  • Per-packet events (or perhaps packet trains)
  • Events at network routers
  • Memory consumption is an issue
  • ns2 multicast simulations: 8,194 nodes, 671 MB
  • SSFNet: 33,300 hosts and routers, over 2 GB

4
What do we need to do?
  • Reduce number of simulation events
  • Simulate at larger granularity than the packet
  • Avoid simulation of network internals (routers,
    etc.)
  • Reduce complexity
  • Avoid simulation of lower layers of the network
  • Support creating prototype distributed
    applications
  • Simple interface for applications to use (a
    hypothetical sketch follows this list)
  • Run same application stand-alone
  • Centralized place for programmers to control
    large system
  • Realistic set of services for distributed
    applications
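The slides do not show the programming interface itself, so the following is purely a hypothetical Java sketch, not the actual Narses API: the interface name FlowTransport, its methods, and MessageHandler are invented to illustrate what a message-flow-oriented interface might look like if both a simulator backend and a stand-alone network backend implemented it.

// Hypothetical illustration only -- not the actual Narses API.
// A message-flow interface that a prototype distributed application
// might code against; a simulator backend and a real-socket backend
// could both implement it, so the same application runs either way.
public interface FlowTransport {

    /** Hand a complete message (a contiguous block of bytes) to the transport. */
    void send(String destinationHost, byte[] message);

    /** Register a callback invoked once a whole message has arrived. */
    void onReceive(MessageHandler handler);

    interface MessageHandler {
        void handle(String sourceHost, byte[] message);
    }
}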

5
Avoid per-packet events
  • Simulate message flows instead of packets
  • A message flow is a contiguous block of bytes
    passed to the transport layer for delivery
  • No model for lower levels of the network

(Figure: higher layers hand a message flow to the transport layer, which would segment the message into packets for delivery; a minimal timing sketch follows.)
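As a rough illustration of why flow-level simulation needs so few events, here is a minimal Java sketch; the timing formula (latency plus size over allotted bandwidth) and the numbers in main are assumptions for illustration, not taken from Narses.

// Minimal sketch: a whole message flow is scheduled as a single
// simulation event instead of one event per packet.
public class FlowEventSketch {

    /** Estimated completion time of a flow, in seconds. */
    static double flowCompletionTime(long flowBytes,
                                     double bandwidthBitsPerSec,
                                     double oneWayLatencySec) {
        // One event covers the whole flow: propagation delay plus
        // serialization of every byte at the allotted bandwidth.
        return oneWayLatencySec + (flowBytes * 8.0) / bandwidthBitsPerSec;
    }

    public static void main(String[] args) {
        // A 200 KB flow over a 1.67 Mb/s share with 44 ms one-way latency
        // (values chosen only for illustration).
        double t = flowCompletionTime(200 * 1024, 1.67e6, 0.044);
        System.out.printf("flow completes after %.3f s (one event)%n", t);
    }
}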
6
Simple network model
  • Topology
  • End hosts are limited to one connection
  • Routers can have unlimited connections
  • No restriction on latencies of any connections
  • Support for asymmetric links (sort of)
  • Two network models for bandwidth so far
  • Naïve network model
  • Assumes full link bandwidth for each flow
  • Used for simulations that count network hops
  • Bandwidth-share network model
  • Accounts roughly for traffic interdependencies
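A minimal Java sketch of the topology rules above, assuming "asymmetric links (sort of)" means each direction of a link can be given its own bandwidth; every class and field name here is invented for illustration rather than taken from Narses.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the topology constraints (invented names, not
// Narses source): an end host has exactly one connection, a router may
// have any number, and each link carries its own latency and
// per-direction bandwidths.
public class TopologySketch {

    static class Link {
        final double upBw, downBw;   // bits/s; may differ (asymmetric link)
        final double latencySec;     // no restriction on latency values
        Link(double upBw, double downBw, double latencySec) {
            this.upBw = upBw; this.downBw = downBw; this.latencySec = latencySec;
        }
    }

    static class EndHost {
        final Link accessLink;       // end hosts are limited to one connection
        EndHost(Link accessLink) { this.accessLink = accessLink; }
    }

    static class Router {
        final List<Link> links = new ArrayList<>();  // unlimited connections
    }

    public static void main(String[] args) {
        Link dsl = new Link(1e6, 8e6, 0.020);        // 1/8 Mb/s up/down, 20 ms
        Router edge = new Router();
        edge.links.add(dsl);
        EndHost client = new EndHost(dsl);
        System.out.println("client uplink: " + client.accessLink.upBw + " bit/s");
    }
}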

7
Bandwidth-share network model
  • Bottleneck assumed to be at the edge of the
    network
  • Maximum bandwidth between two hosts is the
    minimum of the source and destination first-link
    bandwidths
  • Inappropriate for simulating some topologies
  • Allows us to avoid modeling routers, Internet
    internals

Maximum bandwidth = Min(BandwidthS, BandwidthD)
(Figure: source and destination hosts connected across the Internet by edge links of bandwidth BandwidthS and BandwidthD.)
8
Bandwidth allocation
  • Take each host's nominal bandwidth and divide it by
    the number of flows sent or received by that host
  • Calculate this per-flow bandwidth at the flow's source
    and destination hosts, and allocate the minimum of the
    two to the flow (sketched in code below)
  • Minimum-share allocation

(Figure: one endpoint's 10 Mb link shared by 4 flows gives 10/4 = 2.5 Mb per flow, the other endpoint's share works out to 5/3 ≈ 1.67 Mb, and the flow is allocated Min(2.5, 1.67) = 1.67 Mb.)
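A minimal Java sketch of minimum-share allocation as stated above; method and variable names are invented, and the reading of the figure's 5/3 as a 5 Mb link shared by three flows is an assumption.

// Illustrative sketch of minimum-share allocation (invented names).
public class MinShareSketch {

    static double allocate(double srcLinkBw, int srcFlowCount,
                           double dstLinkBw, int dstFlowCount) {
        double srcShare = srcLinkBw / srcFlowCount;  // e.g. 10 Mb / 4 flows = 2.5 Mb
        double dstShare = dstLinkBw / dstFlowCount;  // e.g.  5 Mb / 3 flows ~ 1.67 Mb
        return Math.min(srcShare, dstShare);         // the flow gets the smaller share
    }

    public static void main(String[] args) {
        // Reproduces the slide's example: Min(10/4, 5/3) = Min(2.5, 1.67) = 1.67 Mb.
        System.out.println(allocate(10, 4, 5, 3) + " Mb");
    }
}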
9
Bandwidth reallocation
  • Example
  • Flow a completes
  • Flow b might now use all of Y's bandwidth
  • Reallocate b's bandwidth to be the minimum of the
    bandwidth available from Y and Z
  • Flows c and d do not need to be rescheduled,
    because the number of flows at Z has not changed

(Figure: flows a and b at host Y; flows c and d at host Z.)
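A minimal Java sketch of this reallocation rule, with invented class names rather than Narses source: when a flow completes, only flows that share an endpoint with it are re-evaluated, because each host's per-flow share depends only on its own flow count. In the example, flow a (Y to X) completes, flow b (Y to Z) picks up the freed share at Y, and flows c and d keep their allocations since Z's flow count is unchanged.

import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of lazy bandwidth reallocation (invented names).
public class ReallocationSketch {

    static class Host {
        final double linkBw;                       // nominal bandwidth, bits/s
        final Set<Flow> flows = new HashSet<>();
        Host(double linkBw) { this.linkBw = linkBw; }
        double share() { return linkBw / Math.max(1, flows.size()); }
    }

    static class Flow {
        final Host src, dst;
        double allocatedBw;
        Flow(Host src, Host dst) {
            this.src = src; this.dst = dst;
            src.flows.add(this); dst.flows.add(this);
            reallocateEndpoints();                 // a new flow changes both hosts' shares
        }
        void reallocate() { allocatedBw = Math.min(src.share(), dst.share()); }
        void complete() {
            src.flows.remove(this); dst.flows.remove(this);
            reallocateEndpoints();                 // only flows touching src or dst change
        }
        private void reallocateEndpoints() {
            Set<Flow> affected = new HashSet<>(src.flows);
            affected.addAll(dst.flows);
            for (Flow f : affected) f.reallocate();
        }
    }

    public static void main(String[] args) {
        Host x = new Host(10e6), y = new Host(10e6), z = new Host(30e6), w = new Host(10e6);
        Flow a = new Flow(y, x);                   // flows a and b originate at Y
        Flow b = new Flow(y, z);
        Flow c = new Flow(w, z);                   // flows c and d also touch Z
        Flow d = new Flow(w, z);
        a.complete();                              // flow a finishes
        System.out.println("b now gets  " + b.allocatedBw + " bit/s");  // all of Y's 10 Mb/s
        System.out.println("c unchanged " + c.allocatedBw + " bit/s");  // Z's flow count unchanged
    }
}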
10
Results
  • Compare results of bandwidth-share model to ns
  • Time to perform the simulations
  • Memory usage
  • Accuracy
  • Run identical simulations
  • Topology generated by GT-ITM topology generator
  • 600 nodes, no bottleneck links between end hosts
  • Avg. round-trip path latency in ns: 96 ms, max.
    190 ms
  • Avg. round-trip path latency in Narses: 88 ms, max.
    158 ms
  • Flows sent between random hosts; all flows the same size
  • 1 GHz Pentium III with 512 MB RAM running Red Hat 7.2

11
Runtime (10,000 flows)
12
Memory consumption (5,000 to 20,000 flows)
  • Flow size: 200 KB
  • At 40,000 flows, ns thrashed

13
Accuracy (10,000 flows)
14
Impact
  • Used for simulating several distributed
    applications
  • Byzantine fault-tolerant protocols
  • CUP
  • LOCKSS
  • Other peer-to-peer applications
  • Considered easy to incorporate an application
  • Easy to run the application stand-alone instead
  • Applications must currently be written in Java

15
Future work
  • Simulate larger topologies
  • Calculating route table information in ns is the
    current bottleneck
  • Look at JavaSim technology
  • With help from Prof. Hou
  • Implement UDP model
  • Narcissus
  • How can I tell if a topology is appropriate?
  • Performance improvements
  • Use a calendar scheduler, as ns does
  • Implement our own object manager
  • Avoid Java garbage collection overhead
  • Model congested back channels