Title: The Kent Ridge Advanced Network (KRAN)
Slide 1: The Kent Ridge Advanced Network (KRAN)
APAN Meeting, Fukuoka, Japan, 24th January 2003
- Lek-Heng NGOH, PhD
- Deputy Director, SingAREN
- Research Manager, Institute for Infocomm Research (I2R)
- A*STAR, Singapore
Slide 2: Goal
- To research and develop an advanced
IP-over-optical network infrastructure with
support for grid computing
Slide 3: Approach
- Work focuses on the following layers:
- Advanced IP Layer
- Optical Layer
- Grid Middleware Layer
Slide 4: Approach
- Design and set up the optical testbed
- Test and evaluate three emerging LAN/WAN technologies: GE, POS and RPR
- Trial and study optical plane signalling and control solutions
- Evaluate and test KRAN with grid middleware applications
- Conclusion
Slide 5: Timeline
Project phases: project planning; KRAN formation; tender process (optical technology selection); network design; network connectivity, IP addressing and configuration; detailed test plans and logistics planning; staging tests.
Key milestones:
- 1 Mar: KRAN launch
- 15 Mar: A*STAR grant
- 12 Apr: Closed tender
- 25 Apr: KRAN kick-off, 1st SCM
- 13 May: Cisco/SCS solution implemented
- 1 Jun: BII trainee joins
- 1 Jul: Official staging
- 10 Jul: Equipment arrival
- 11 Jul: 1st power-up test
- 18 Jul: 2nd SCM
Slide 6: Time Schedule
- Early Oct: complete RPR indoor tests
- Early Nov: complete POS indoor tests
- Early Dec: complete GE indoor tests
- Late Dec to early Jan 03: deployment
- Early Mar 03: complete outdoor tests
- End Aug 03: application tests
Completed items were shown in red on the original slide. The KRAN project is on schedule.
Slide 7: KRAN Project Working Group
- Wong Yew Fai (CC)
- Wong Chiang Yoon (LIT)
- Nigel Teow Teck Ming (BII-CC)
- Cisco Systems
- SCS (Singapore) Ltd
Slide 9: Detailed Physical Map
Physical map (diagram) showing the network nodes: NUS CC, IMCB, NUS EE, SoC and I2R (BII).
Slide 11: Staging Connections
Staging setup (diagram): three 10720 routers and an ONS 15194 IP traffic aggregator (AC-DC powered), interconnected through attenuators, with SmartBits (SMB) units at both ends generating and measuring traffic.
Slide 12: Deployment Connections
Deployment setup (diagram): three 10720 routers and the ONS 15194 (AC-DC powered) interconnected over installed NUS fibre runs of roughly 0.75 km and 1 km plus a fibre drum, with SmartBits (SMB) units at both ends measuring traffic.
Slide 13: Addressing and Naming
The 172.18.44.0/24 block is allocated as follows: loopbacks .0 to .31, backbone .32 to .63, CC .64 to .127, I2R .128 to .191, SOC .192 to .254. Backbone point-to-point links between the routers use /30 subnets (e.g. .33/.34, .37/.38, .41/.42, .44/.45), the routers are numbered switch .1, CC .2, I2R .3 and SOC .4, and the diagram also showed interface addresses .66 (CC), .130 (I2R) and .194 (SOC). Management addresses 172.18.36.1 and 172.18.36.252 are also used.
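As an aside, the allocation above is simple enough to express programmatically. The following is a minimal Python sketch of the 172.18.44.0/24 carve-up described on this slide, using only the standard ipaddress module; the range boundaries come from the slide, while the dictionary layout and printing are purely illustrative.

```python
import ipaddress

# Address plan from the slide: loopbacks .0-.31, backbone .32-.63,
# CC .64-.127, I2R .128-.191, SOC .192-.254 within 172.18.44.0/24.
block = ipaddress.ip_network("172.18.44.0/24")
base = int(block.network_address)

ranges = {
    "loopbacks": (0, 31),
    "backbone":  (32, 63),
    "CC":        (64, 127),
    "I2R":       (128, 191),
    "SOC":       (192, 254),
}

for name, (lo, hi) in ranges.items():
    print(f"{name:9s} {ipaddress.ip_address(base + lo)} - {ipaddress.ip_address(base + hi)}")

# Backbone point-to-point links are /30s carved out of the .32-.63 range;
# the first /30 yields usable addresses .33 and .34, matching the slide.
for link in ipaddress.ip_network("172.18.44.32/27").subnets(new_prefix=30):
    a, b = list(link.hosts())
    print(link, "usable:", a, b)
```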
Slide 14: Project Plans
- Nine major items to test:
- Throughput/Delay/Loss/Jitter
- QoS
- Fault Recovery
- Service Provisioning
- Network Management
- IP support
- Multicast
- MPLS
- Others
Slide 15: Apparatus Used
- SmartBits as traffic generator
- SmartFlow software to drive the SmartBits
- 3 x 10720 routers
- 1 x ONS 15194 IP traffic aggregator
- 6 x 15 km fibre drums
- 6 x 10 dB attenuators
- Relevant fibre patch cords
- Optional Catalyst 3550 switches
Slide 17: Item 1 - Throughput
Measure throughput, delay, jitter and loss for raw multicast, raw unicast, UDP, single-connection TCP and multiple-connection TCP traffic over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 18: Network Performance (4)
- Throughput: packets sent without loss (a search sketch follows this list)
- Throughput is better for large frame sizes
- Limited by the router's ability to handle a high packet-per-second rate
- For large frame sizes, throughput approaches line rate
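The zero-loss throughput figures summarised here are typically obtained with an RFC 2544-style search over the offered load. The sketch below shows only that idea; send_and_count is a hypothetical stand-in for a single SmartBits/SmartFlow trial at a given load, not a real API.

```python
# Hedged sketch of a zero-loss throughput search (RFC 2544 style).
# send_and_count(load_mbps) is assumed to run one trial and return
# (frames_sent, frames_received); in KRAN the trials were driven by
# SmartFlow controlling SmartBits hardware.
def zero_loss_throughput(send_and_count, line_rate_mbps, tolerance_mbps=1.0):
    lo, hi = 0.0, line_rate_mbps
    while hi - lo > tolerance_mbps:
        load = (lo + hi) / 2
        sent, received = send_and_count(load)
        if received == sent:      # no loss at this load: try a higher load
            lo = load
        else:                     # loss observed: back off
            hi = load
    return lo                     # highest offered load with zero frame loss
```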
Slide 19: Network Performance (5)
- As loading increases, frame loss increases
- Frame loss is large and starts at low loading for small frame sizes, for the same reason (router limitation)
- For large frame sizes (>512 bytes), loss is about 7% at 2.6 Gb/s loading
Slide 20: Network Performance (6)
- As loading increases, latency increases
- Again, large frame sizes outperform small frame sizes
- Three latency plateaus are visible: the lowest is the minimum time it takes packets to traverse about 22.5 km of fibre; the two higher plateaus correspond to queuing (e.g. interface and processor queues)
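As a rough check of that lowest plateau, pure propagation over about 22.5 km of fibre already accounts for on the order of 100 µs. The constants below (a group index of about 1.47, i.e. roughly 5 µs per km) are textbook assumptions rather than values from the KRAN test reports.

```python
# Back-of-the-envelope propagation delay over ~22.5 km of fibre.
C = 299_792_458        # speed of light in vacuum, m/s
N_GROUP = 1.47         # assumed group index of single-mode fibre
DISTANCE_KM = 22.5

delay_s = DISTANCE_KM * 1_000 / (C / N_GROUP)
print(f"propagation delay ~ {delay_s * 1e6:.0f} microseconds")
# ~110 us; latency above this floor comes from queuing (interface, processor).
```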
Slide 21: Network Performance (7)
- As loading increases, latency deviation increases (intuitive)
- Similarly, large frame sizes outperform small frame sizes (router limitation)
- The same plateaus are evident, inherited from the queuing behaviour in the earlier latency graphs
Slide 22: Network Performance (8)
- Only a portion of the experiments conducted is presented here
- Other experiments include:
- Using attenuators instead of fibre drums
- Stressing the GE/FE module instead of the RPR module
- Driving symmetric traffic (1.3 + 1.3 Gb/s) rather than asymmetric traffic (2 + 0.6 Gb/s)
- TCP/UDP/IP testing
Slide 23: Network Performance (9)
- Some conclusions:
- Fibre drum (7 dB) results are better than attenuator (10 dB) results
- The GE/FE module cannot handle 2.6 Gb/s of input traffic and becomes a bottleneck even before packets can be sent out of the RPR interface
- No difference between TCP and UDP in terms of frame loss, latency and latency standard deviation
- Multiple TCP flows versus a single TCP flow do not affect performance
Slide 24: Throughput Test
- POS results are poor (hardware card related)
- RPR is better for larger frame sizes
- GE is seemingly better for smaller frame sizes
- GE (routers) is worse than GE (switches) because of IP processing overheads
Slide 25: Frame Loss Test
- Related to the throughput results
- RPR performs best at large frame sizes
- GE (switching) is generally better than the other technologies (except RPR at large frame sizes)
- POS results are the worst, again due to the hardware card
Slide 26: Item 2 - QoS
Measure QoS for voice, video and data traffic over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 27: Example Test Item - RPR QoS (KRAN07-R2-QoS-RPR.doc)
Three 10720 routers form a 2.4 Gb/s RPR ring. SRP queues are configured on all 10720s, which map SRP priority bits to the appropriate traffic classes; 0.48 Gb/s of test traffic is injected and SmartBits (SMB) units at both ends measure throughput, delay, jitter and loss.
Slide 28: Layer 2 QoS Testing (4)
At the SRP transmit interface, a slicer maps SRP priorities 5 to 7 into the HI queue and all remaining priorities into the default LO queue; the slide's diagram showed example flows with priorities 7, 0, 5 and 6 and an 80/20 split between the two queues.
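The priority-to-queue rule above can be captured in a few lines. The sketch below mirrors it for illustration only; the queue names and the helper function are not actual router configuration.

```python
# Illustrative mapping of SRP priority bits (0-7) to transmit queues,
# following the rule on the slide: priorities 5-7 -> HI, the rest -> LO.
def srp_queue(priority: int) -> str:
    if not 0 <= priority <= 7:
        raise ValueError("SRP priority must be in 0..7")
    return "HI" if priority >= 5 else "LO"

for p in (7, 0, 5, 6):   # example flow priorities shown on the slide
    print(p, "->", srp_queue(p))
```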
Slide 29: Item 3 - Fault
Measure fault recovery time for node failure, single-link failure and multiple-link failure over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 30: Example Test Item - RPR Fault (KRAN10-R1-QoS-fault.doc)
Three 10720 routers form a 2.4 Gb/s RPR ring; a failure is introduced and SmartBits (SMB) units at both ends measure throughput, delay, jitter and loss during recovery.
Slide 31: Fault Recovery
- RPR (IPS) recovers in less than 5 ms, well within the 50 ms telecom standard for voice (see the sketch after this list)
- POS recovers in 7.5 s
- GE (STP) recovers in almost 1 min
- GE (RSTP) recovers in about 1.65 s
- RPR is the clear winner
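For reference, recovery times like these are commonly derived from the number of frames lost during the failover at a known constant offered rate. The sketch below shows only that arithmetic; the example numbers are hypothetical, not measurements from the KRAN tests.

```python
# Hypothetical sketch: outage duration estimated from frames lost during a
# failover while a constant-rate test stream is running.
def recovery_time_ms(frames_lost: int, frames_per_second: float) -> float:
    return frames_lost / frames_per_second * 1_000.0

# e.g. 1,000 frames lost at 200,000 frames/s -> 5 ms (RPR/IPS-class recovery)
print(recovery_time_ms(1_000, 200_000))
```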
Slide 32: Item 4 - Service Provisioning
Evaluate ease of node addition, removal and auto-configuration for GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 33: Item 5 - Network Management
Evaluate SNMP MIB support for GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 34: Item 6 - IP Support
Evaluate IP multicast, IP QoS and IP reroute support over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 35: Item 7 - Multicast
Evaluate Layer 2 multicast over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 36: Item 8 - MPLS
Evaluate Layer 2 MPLS VPN, Layer 3 MPLS VPN and MPLS fast reroute over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 37: Item 9 - Others
Evaluate spatial reuse and bandwidth fairness over GE, POS and RPR at 1 km and 15 km.
(Test matrix may be subject to change.)
Slide 38: Optional Items
- IPv6
- Security Features
- Jumbo Frame Support
Slide 39: Time Table
- Mid Jul to end Aug (5 wks): Staging - MPLS, service provisioning, fault, IP multicast, multicast, IP reroute
- Early Sep to mid Nov (10 wks): RPR deployment - QoS, IP QoS, IP reroute, MPLS VPN, throughput/delay, SNMP, SRP
- Mid Nov to mid Jan (8 wks): POS deployment - same test set
- Mid Jan to end Feb (6 wks): GE deployment - same test set
- Mar 03 to Aug 03 (6 mths): Application layer projects; deploy the best network and switch over
Slide 40: Deliverables
- 1 x Safety document (end Jul) - done
- 1 x RPR indoor test report (mid Oct) - done
- 1 x POS indoor test report (mid Nov) - done
- 1 x GE indoor test report (mid Dec) - almost done
- 1 x Staging test report (early Jan) - in progress
- 1 x Final report (end Apr)
Slide 41: Evaluation
- Inferring from the experimental results:
- GE is strong in network stress, QoS and pricing
- POS is strong in multicast
- RPR is strong in QoS and fault recovery
- If not for fault recovery, GE may be a good choice for many networks
Slide 42: Evaluation
- However, a more systematic approach was used to determine the best of the three technologies (RPR, POS, GE)
- For each category (e.g. stress, QoS, fault recovery), a rank was given
- Weights are assigned to each category depending on the network requirements (e.g. if the requirement on fault recovery times is strict, the fault recovery category receives a higher weighting than the other categories)
Slide 43: Evaluation
- A rank of 3 is better than 2, and 2 is better than 1
Slide 44: Evaluation
- Other categories (QoS, stress, etc.) are ranked similarly; the table on the slide briefly illustrates this, and the actual ranking has more detail
- NB: a rank of 3 is better than 2, and 2 is better than 1
Slide 45: Evaluation
- Weights (example weights shown in blue on the slide) are assigned to each category depending on its importance to the user's network
Slide 46: Evaluation
- The preferred technology is chosen based on the score given by the product of the two matrices (the weight matrix and the technology evaluation matrix)
Slide 47: Evaluation
- The table indicates that GE has the highest score of 30 and is the most desirable technology for the given weights
- Had the weights favoured fault recovery timings more than pricing, RPR would have been the winner (an illustrative calculation follows)
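To make the scoring method concrete, the sketch below multiplies a per-category weight vector by per-technology ranks and picks the highest total. The ranks and weights are invented for illustration (they are not the project's actual evaluation figures), but they reproduce the behaviour described above: with stress/price-oriented weights GE wins, and raising the fault-recovery weight shifts the result to RPR.

```python
# Illustrative scoring: score(tech) = sum over categories of weight * rank.
# Ranks follow the slides' convention: 3 is better than 2, 2 better than 1.
categories = ["stress", "qos", "fault_recovery", "multicast", "pricing"]  # column order
ranks = {
    "GE":  [3, 2, 1, 2, 3],   # made-up example ranks, not the real evaluation
    "POS": [1, 2, 2, 3, 1],
    "RPR": [2, 3, 3, 1, 2],
}

def best_tech(weights):
    scores = {t: sum(w * r for w, r in zip(weights, ranks[t])) for t in ranks}
    return max(scores, key=scores.get), scores

print(best_tech([3, 2, 1, 1, 3]))   # stress/price-oriented weights -> GE
print(best_tech([1, 2, 5, 1, 1]))   # fault-recovery-oriented weights -> RPR
```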
Slide 48: Conclusion
- All indoor tests have been completed
- Experimental results were presented (fault recovery, stress test, QoS, multicast)
- All 10720 routers have been deployed at CC, SOC and I2R
- Backbone connectivity between the deployed nodes is up
- Half the milestones have been achieved and more than half of the deliverables have been completed
- Outdoor tests will commence next
- An evaluation of the best technology, based on the comparisons, was provided:
- GE -> QoS, stress, pricing
- POS -> multicast
- RPR -> QoS, fault recovery
Slide 50: Objectives
- To experiment with and identify suitable optical network signalling and control software solutions (GMPLS, OGSI) for the following cross-layer activities:
- Traffic engineering / QoS management
- Fault protection and recovery
- To support data-in-network research
Slide 51: GMPLS-based Control Plane Functions
Diagram: VIN traffic engineering and protection/recovery functions operating across the IP channel (KRAN) and the optical channel (ONFIG-GMPLS).
Slide 52: GMPLS Software
Slide 53: KRAN Optical Plane
Slide 54: KRAN Optical Node
Slide 56: Objectives
- To develop test methodologies and instrumentation techniques for the measurement and evaluation of grid middleware performance over KRAN
- To further quantify key network parameters (fault recovery, QoS, etc.) for the purpose of supporting grid middleware and applications
Slide 58: Thank You!