The MB-NG project is a major collaboration between different groups. It is one of the first projects to bring together users, industry, equipment providers and leading-edge e-science applications. Technically, the project:
- enabled a leading-edge U.K. DiffServ-enabled network running at 2.5 Gbit/s;
- configured and demonstrated the use of MPLS traffic engineering to provide tunnels for preferential traffic;
- deployed middleware to dynamically reserve and manage the available bandwidth, on a per-flow level, at the edges of the network;
- investigated the performance of end-host systems for high throughput;
- deployed and tested a number of protocols designed to tackle the issues of standard TCP in long fat pipes;
- and, finally, demonstrated the benefits to the applications of the advanced network environment.
TCP and High Throughput
- Standard TCP transfers data memory-to-memory across MB-NG at 941 Mbit/s, the maximum TCP line rate over Gigabit Ethernet.

QoS
- The MB-NG network is QoS-enabled with three classes using DiffServ.

Middleware GRS - Grid Resource Scheduling
WHAT IT IS
- Middleware component to reserve network bandwidth dynamically.
- Based on a model where QoS is managed locally at each edge site and the bottleneck is at the edge.
HOW IT WORKS
- A Network Resource Scheduling Entity (NRSE) manages a single site and stores information about local network resources and users.
- A request can be issued via a GUI (from an end-user) or an API (from an application).
- Authentication is performed locally on the local user, and then between NRSEs, to improve scalability and to support multi-domain operation.
- Bi-directional reservations, which require bandwidth to be reserved in both directions, are supported.
- Reservations between any two sites can be initiated from a third remote site.
GRS AND MB-NG
- MB-NG is the first WAN deployment of GRS.
- The NRSE has a locally-programmable back-end to ensure that the router configuration is consistent and correctly restored after the reservations are completed.
- Traffic that matches the reservation parameters is marked at the edge router and guaranteed enough bandwidth before entering the core.
FUTURE GOALS
- Currently planning a version to work in an environment where bottlenecks may occur anywhere in the network.
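The marking step above can be sketched at the host level: an application (or the NRSE back-end) sets the DSCP on a flow's packets so the edge router's classifier can match and police them. A minimal Python sketch, assuming Linux socket semantics; the LBE code point shown is an illustrative choice, not necessarily the one used on MB-NG:

```python
import socket

# DiffServ code points for the three MB-NG traffic classes.
# EF = Expedited Forwarding, BE = Best Effort; the LBE (Less than
# Best Effort) value here is an assumption for illustration.
DSCP = {"EF": 46, "BE": 0, "LBE": 8}

def dscp_to_tos(dscp: int) -> int:
    """The 6-bit DSCP occupies the upper bits of the IP TOS byte."""
    return dscp << 2

# Mark a flow's packets as EF so the edge classifier matches them.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp_to_tos(DSCP["EF"]))
print(s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184 (0xB8)
s.close()
```

Anything sent on this socket now carries the EF mark; the actual bandwidth guarantee still comes from the queueing configured on the edge and core routers.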
[Figure: 24-hour continuous memory-to-memory TCP transfer at line rate across MB-NG, traffic flowing Manchester - London - RAL through a 2.5 Gbit/s congestion point; QoS switched on with 2 classes (Voice 20, BE 80).]
- An issue with standard TCP is its performance in high bandwidth-delay networks.
- New TCP stacks are being proposed to deal with this issue (HSTCP, STCP, H-TCP, FAST, ...).
- In low-RTT, high-bandwidth environments, standard TCP performs just as well as the new stacks. In high-RTT, high-bandwidth environments, the new stacks are more reactive to losses.
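The long-fat-pipe problem can be made concrete with the bandwidth-delay product: the window a sender must keep in flight to fill the pipe, and the time standard TCP (which grows its window by one segment per RTT) needs to regrow the window after a loss halves it. The numbers below assume a 1 Gbit/s path and a 1500-byte MTU:

```python
MTU = 1500  # bytes; standard Ethernet MTU assumed

def bdp_bytes(rate_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return rate_bps * rtt_s / 8

def recovery_time_s(rate_bps: float, rtt_s: float) -> float:
    """Time for standard (AIMD) TCP to regrow its window after a loss
    halves it: W/2 round trips, one MSS of growth per RTT."""
    window_pkts = bdp_bytes(rate_bps, rtt_s) / MTU
    return (window_pkts / 2) * rtt_s

# Short RTT (6 ms): window ~750 kB, recovery ~1.5 s.
print(bdp_bytes(1e9, 0.006), recovery_time_s(1e9, 0.006))
# Long RTT (120 ms): window ~15 MB, recovery ~600 s at 1 Gbit/s.
print(bdp_bytes(1e9, 0.120), recovery_time_s(1e9, 0.120))
```

A ten-minute recovery per loss on the 120 ms path is what motivates the modified congestion-control stacks listed above.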
[Figure: TCP throughput in high-bandwidth networks, short RTT (6 ms) vs long RTT (120 ms); QoS switched on with 3 classes (EF 33, BE 57, LBE 10).]
RAID Studies
Disk-to-disk performance across MB-NG using RAID5:
- Read speed: line rate.
- Write speed: line rate for small files (less than 400 MBytes); 600 Mbit/s for larger files.
- Optimal performance is obtained using the optimal hardware configuration.
- RAID5 disk arrays give high read/write speeds together with built-in redundancy to ensure fault tolerance.
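As a rough model of why RAID5 reads scale better than writes: reads stripe across all the disks, while a small (partial-stripe) write incurs a read-modify-write of both data and parity. A toy calculation, where the disk count and per-disk rate are illustrative, not MB-NG's actual hardware:

```python
def raid5_read_mbps(n_disks: int, disk_mbps: float) -> float:
    """Reads stripe over all n disks (parity is distributed)."""
    return n_disks * disk_mbps

def raid5_capacity(n_disks: int, disk_gb: float) -> float:
    """One disk's worth of space goes to parity."""
    return (n_disks - 1) * disk_gb

def raid5_small_write_ios(writes: int) -> int:
    """Each small write costs 4 I/Os: read old data, read old parity,
    write new data, write new parity."""
    return 4 * writes

# e.g. 4 disks at 400 Mbit/s each: 1600 Mbit/s aggregate read rate,
# 3 disks' worth of usable capacity, 4 I/Os per small write.
print(raid5_read_mbps(4, 400), raid5_capacity(4, 120), raid5_small_write_ios(1))
```

The write penalty is why large sequential writes (full stripes, no read-modify-write) come closer to the read rate than small random ones.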
Max read speed: 1300 Mbit/s
Write speed (large files): 600 Mbit/s

Middleware GARA - General-purpose Architecture for Reservation and Allocation
- Developed as part of the Globus project.
- GARA provides end-to-end QoS to the applications using three types of Resource Managers (RMs).
- In our case, we just make use of the Network RM (Differentiated Services). It allows immediate and advance reservations. The parameters needed in a reservation are:
  - Reservation type: network (or cpu, disk)
  - Start time: seconds from Epoch
  - Duration: seconds
  - Resource-specific parameters: e.g. bandwidth
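The reservation parameters above can be sketched as a simple record. This is a hypothetical illustration of a GARA-style request; the field names are ours, not the actual GARA API:

```python
from dataclasses import dataclass
import time

@dataclass
class Reservation:
    """Illustrative GARA-style reservation request (not the real API)."""
    resource_type: str   # "network" (or "cpu", "disk")
    start_time: int      # seconds from Epoch; "now" means an immediate reservation
    duration: int        # seconds
    bandwidth_mbps: int  # resource-specific parameter for the Network RM

# An advance reservation: 230 Mbit/s of bandwidth for one hour, starting
# 24 hours from now.
r = Reservation("network", int(time.time()) + 24 * 3600, 3600, 230)
print(r.resource_type, r.duration, r.bandwidth_mbps)
```

Setting `start_time` to the current time would make this an immediate rather than an advance reservation.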
Applications - Reality Grid

Realtime remote visualisation
- Processing in London, visualisation in Manchester.
- Without QoS, the level of background traffic affects the application performance.
- With QoS, the application is protected from the background traffic.
- QoS setting: 10% (230 Mbit/s) of the 2.5 Gbit/s bottleneck reserved for EF. The average application throughput of 65 Mbit/s is sufficient for a usable refresh rate.
[Diagram: visualisation server and visualisation client in Manchester, computation node in London; steering commands and simulation data flow between the sites over MB-NG.]

Through the use of MB-NG, the RealityGrid TeraGyroid project won the HPC Challenge Award for Most Innovative Data-Intensive Application at SuperComputing 2003 in Phoenix, Arizona.
MPLS - Multiprotocol Label Switching
BASICS
- Layer 2.5 switching technology developed to integrate IP and ATM.
- Forwarding is based on label switching.
- Traffic Engineering extensions allow the use of different routing paradigms compared with the shortest-path routing found in IP networks.
- MPLS tunnels, using RSVP, help with emulating virtual leased lines.
- RSVP allows for easy accounting and better utilization of all the available bandwidth.
- Provides reroute techniques comparable with SONET in terms of speed.
- Other possible uses of MPLS (VPNs, AToM, etc.) use different protocols.
MPLS AND MB-NG
- Deployed in the core of the MB-NG network.
- Carried out extensive testing to check the capabilities of tunnels with respect to bandwidth reservation.
- Because RSVP works on the control plane only, QoS still needs to be extensively deployed.
CONCLUSIONS
- MPLS with Traffic Engineering extensions helps enable efficient utilization of available network resources.
- Tunnels ease end-to-end traffic management but are not a complete solution to bandwidth allocation.
- QoS needs to be deployed all over the MPLS core.
- RAID0 with 4 disks in the array.
- Transfer of 2 GByte files from London to Manchester.
- GridFTP average throughput: 520 Mbit/s; Apache average throughput: 710 Mbit/s.
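A quick sanity check on these figures, assuming the throughput is averaged over the whole 2 GByte (2x10^9 byte) transfer:

```python
def transfer_time_s(file_bytes: float, mbps: float) -> float:
    """Seconds to move file_bytes at an average rate of mbps Mbit/s."""
    return file_bytes * 8 / (mbps * 1e6)

# 2 GByte file from London to Manchester:
print(round(transfer_time_s(2e9, 520), 1))  # GridFTP: ~30.8 s
print(round(transfer_time_s(2e9, 710), 1))  # Apache:  ~22.5 s
```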