1. High Speed Optical Networks: An Evolution of Dependency
November 2, 2001
- Todd Sands, Ph.D.
- WEDnet Project
- www.wednet.on.ca
- University of Windsor
2. Latency
- The result of an event in time that slows the transport or processing of information
  - e.g. machine (processing) latency, in microseconds (n < 1.2)
  - e.g. network latency, in milliseconds (x < 130 ms)
- Optical transport: max. 300,000 km/sec
- Physical parameters of the transport media
- Convergence of voice, image and data in the path
- Switched cell and packet network behaviours
- Potential of WDM optically switched and SONET architectures
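To make the distance side of latency concrete, here is a small sketch (not from the slides) of the propagation floor: light covers at most 300,000 km/s in vacuum, but only about 200,000 km/s in silica fibre, so distance alone sets a latency floor no link speed can remove. The 750 km example distance is an assumed figure for illustration.

```python
# Sketch: one-way propagation delay over fibre.
# Light travels ~300,000 km/s in vacuum but only ~200,000 km/s
# in silica fibre (refractive index ~1.5), so distance alone
# sets a hard latency floor regardless of link speed.

C_VACUUM_KM_S = 300_000
C_FIBRE_KM_S = 200_000  # approximate; varies with the fibre

def propagation_delay_ms(distance_km, speed_km_s=C_FIBRE_KM_S):
    """One-way propagation delay in milliseconds."""
    return distance_km / speed_km_s * 1000

# e.g. an assumed 750 km fibre path
print(round(propagation_delay_ms(750), 2))                  # 3.75 ms in fibre
print(round(propagation_delay_ms(750, C_VACUUM_KM_S), 2))   # 2.5 ms in vacuum
```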
3. OSI Reference Model: Networking 101
- Application
- Presentation
- Session
- Transport
- Network
- Data-link
- Physical
- When two computers communicate on a network, the software at each layer on one computer assumes it is communicating with the same layer on the other computer.
  - e.g. For communication at the transport layer, that layer on the first computer has no regard for how the communication actually passes through the lower layers of the first computer, across the physical media, and then up through the lower layers of the second computer.
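The peer-to-peer illusion above comes from encapsulation: each layer on the sender wraps the payload with its own header, and the matching layer on the receiver strips it. A minimal sketch, with purely illustrative string "headers":

```python
# Sketch of OSI-style encapsulation: each layer adds its header on
# the way down; the peer layer removes it on the way up. The header
# contents here are illustrative only.

LAYERS = ["application", "presentation", "session", "transport",
          "network", "data-link", "physical"]

def send(payload):
    """Encapsulate top-down: each layer wraps the data above it."""
    for layer in LAYERS:
        payload = f"[{layer}]" + payload
    return payload  # outermost wrapper is the physical layer's

def receive(wire_data):
    """Decapsulate bottom-up: each layer strips its peer's header."""
    for layer in reversed(LAYERS):
        wire_data = wire_data.removeprefix(f"[{layer}]")
    return wire_data

wire = send("GET /index.html")
assert receive(wire) == "GET /index.html"
```

Each layer only ever sees its own header and treats everything inside as opaque payload, which is exactly why the transport layer can ignore how the lower layers move the bits.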
4. Do We Know the Effects of Latency?
- Suspect that the answer is yes! We see it every day!
- Number of processors, power requirements, processing capability, storage capacity, and the needs of the research that uses most facilities can be intensive.
- HPCS resources are supplied and funded through a needs-based process, but this can also be driven by research.
- What about a GRID? Is it on the same path?
- Are we mindful of details such as latency with respect to one of the most fundamental parts of the GRID: THE NETWORK?
- Do we know how computing resources connect to the outside world? Maybe.
- Do we have any control over the extranet?
5. Primary Network Interface to Machine Resources
These switches provide Ethernet-to-ATM/SONET WAN interfaces for TCP/IP traffic.
6. Packets vs. Cells vs. Frames
- Frames: used for larger amounts of data over high-speed, low-error-rate links
  - 2,000–10,000 characters in size
  - Error correction is not done link by link, since link-by-link error checking impacts network latency greatly
- Packets: used for smaller amounts of data across lower-speed, high-error-rate links
  - 128–256 characters (bytes) in size
  - Lower chance of error in each packet; only small amounts are re-transmitted
  - Prioritization through tagging of packets leads to QoS
- Cells: very small amounts of data, sometimes with no error checking
  - Used on highly reliable optical networks, sometimes with no error checking
  - 53 characters (bytes) in size: 48 bytes of payload plus a 5-byte header
  - Small size allows for load balancing of traffic on the network
  - No payload, no transmission; full payload, then transmission
  - Uses ATM Adaptation Layers (AALs) 1–5 for shaping the network
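The trade-off above can be quantified as per-unit header overhead. The ATM figures (5-byte header, 48-byte payload) are standard; the 20-byte header assumed for packets and frames is an illustrative figure, not from the slides:

```python
# Sketch: fixed-size headers cost proportionally more on small units.
# ATM cell sizes are standard; the 20-byte packet/frame header is an
# assumed illustrative value.

def overhead_pct(header_bytes, payload_bytes):
    """Header bytes as a percentage of the total unit size."""
    return 100 * header_bytes / (header_bytes + payload_bytes)

atm_cell  = overhead_pct(5, 48)        # 53-byte ATM cell
small_pkt = overhead_pct(20, 256)      # assumed 20-byte header
big_frame = overhead_pct(20, 10_000)   # assumed 20-byte header

for name, pct in [("ATM cell", atm_cell), ("packet", small_pkt),
                  ("frame", big_frame)]:
    print(f"{name}: {pct:.1f}% overhead")
```

The cell pays roughly 9% of every unit in header, while a large frame pays a fraction of a percent, which is why cells buy switching and load-balancing granularity at the cost of bandwidth efficiency.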
7. Optical Carrier Designations
- OC-1/STS-1: 51.84 Mbps
- OC-3: 155.52 Mbps
- OC-12: 622.08 Mbps
- OC-48: 2,488.32 Mbps
- OC-192: 9,953.28 Mbps
- OC-768: 39,813.12 Mbps
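The table above is not arbitrary: every OC-n line rate is exactly n times the 51.84 Mbps STS-1 base rate, which this short sketch reproduces:

```python
# Every OC-n rate is n times the STS-1 base rate of 51.84 Mbps.

BASE_MBPS = 51.84  # STS-1 / OC-1

for n in (1, 3, 12, 48, 192, 768):
    print(f"OC-{n}: {n * BASE_MBPS:,.2f} Mbps")
```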
8. SONET
- A digital hierarchy based on Optical Carriers (OCs)
- Maximum transmission speed of 39.81312 Gbps (OC-768)
- Defines a base rate of 51.84 Mbps (STS-1)
- OC-n rates are multiples of the base rate
- Defines Synchronous Transport Signals (STSs), e.g. STS-3c = OC-3 = 155 Mbps
9. Overheads
- SONET carries 8,000 frames per second, each 810 characters in size (36 characters of overhead and 774 characters of payload)
- Section Overhead includes:
  - STS channel performance monitoring
  - Data channels for management such as channel monitoring, channel administration, maintenance functions and channel provisioning
  - Functions necessary for repeaters, add/drop multiplexers (ADMs), termination gear, and digital access and cross-connect systems (DACS)
- Line Overhead includes:
  - STS-1 performance monitoring
  - Data channel management, payload pointers, protection switching information, line alarm signals, and far-end failure-to-receive indicators
- In addition to these overheads there are also Path overheads
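The framing numbers above are self-consistent and can be checked directly: 810-byte frames at 8,000 frames per second give exactly the 51.84 Mbps STS-1 rate, with the stated 36 overhead bytes taking about 4.4% of each frame:

```python
# Check: SONET STS-1 frame arithmetic from the slide.
# 810 bytes/frame x 8 bits x 8,000 frames/s = 51.84 Mbps exactly.

FRAME_BYTES = 810
FRAMES_PER_SEC = 8_000
OVERHEAD_BYTES = 36  # overhead per frame, per the slide

rate_mbps = FRAME_BYTES * 8 * FRAMES_PER_SEC / 1_000_000
overhead_pct = 100 * OVERHEAD_BYTES / FRAME_BYTES

print(rate_mbps)               # 51.84
print(round(overhead_pct, 1))  # 4.4
```

The 8,000 frames/s rate is the same 125 µs cadence as digital voice sampling, which is what lets SONET carry telephony payloads synchronously.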
10. (Image slide; no transcript available)
11. Optical Wave Division
- WDM multiplies the capacity of existing fibre spans (up to 32 times or more), with cross (wide)-band, narrow-band or dense-band transmission options
- DWDM red waves: 1550, 1552, 1555 and 1557 nm
- DWDM blue waves: 1529, 1530, 1532 and 1533 nm
- Can now support 100 wavelengths, with each wavelength supporting a channel rate of up to 10 Gbps
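Two quick calculations follow from the figures above: the aggregate capacity of a 100-wavelength system, and the channel spacing in frequency implied by the roughly 2 nm wavelength steps in the lists (using Δf = c·Δλ/λ², a standard conversion, not from the slides):

```python
# Sketch: DWDM aggregate capacity and wavelength-to-frequency spacing.
# delta_f = c * delta_lambda / lambda^2 converts a wavelength step
# into a frequency step.

C = 3e8  # speed of light, m/s

def channel_spacing_ghz(wavelength_nm, delta_nm):
    """Frequency spacing (GHz) of a delta_nm step near wavelength_nm."""
    lam = wavelength_nm * 1e-9
    return C * (delta_nm * 1e-9) / lam**2 / 1e9

print(100 * 10, "Gbps aggregate")  # 100 wavelengths x 10 Gbps each
print(f"{channel_spacing_ghz(1550, 2):.0f} GHz spacing")  # ~250 GHz
```

Denser grids (e.g. sub-nm spacing near 1550 nm) are how systems reach 100 and more wavelengths per fibre.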
12–13. (Image slides; no transcript available)
14. Local Area Access Architectures
(Diagram: 1-Meg and xDSL modem services in communities, plus alternate-carrier MANs, interface to a central CO with access nodes; access routers and WDM "off-ramps" connect over PVCs on the carrier's network; 1000 Mb GbE and OC-12 ATM links feed an ATM network at OC-12 to OC-48, reaching a Grid access node or GigaPoP with system processors and interfaces at 100 Mb to 1 Gb)
- All PVCs (SVCs or PVPs) usually terminate on one or more centralized access routers
- Most carrier PVCs are UBR, with access at minimum OC-48 speeds (2.4 Gbps)
- The backbone may be optically switched with Packet over SONET (POS) on wavelengths, using TCP/IP as the main transport protocol, but getting direct access to it is the key!
- Direct access will also minimize latency and the synergistic effects of latency
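The direct-access point can be sketched numerically: end-to-end latency is propagation plus a per-hop cost for queueing and switching, so cutting intermediate hops cuts latency even when distance is fixed. The per-hop delay and distances below are assumed illustrative figures, not measurements from the slides:

```python
# Sketch (assumed figures): end-to-end latency as propagation plus a
# per-hop store-and-forward/switching cost. Fewer intermediate hops
# (direct backbone access) means lower latency over the same distance.

def end_to_end_ms(distance_km, hops, per_hop_ms=0.5,
                  fibre_speed_km_s=200_000):
    """One-way latency: fibre propagation + per-hop switching delay."""
    propagation_ms = distance_km / fibre_speed_km_s * 1000
    return propagation_ms + hops * per_hop_ms

direct  = end_to_end_ms(400, hops=2)   # direct access to the backbone
carrier = end_to_end_ms(400, hops=8)   # via a carrier's PVC mesh

print(f"direct: {direct:.1f} ms, via carrier: {carrier:.1f} ms")
```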
15–20. (Image slides; no transcript available)
21. What does a 5-minute average measurement show us with MRTG?
22. Network Protocol Stack Models (WAN with IP)
(Diagram: a PC and 1-Meg modems (1MM, with DBIC) connect through an Ethernet switch (Catalyst) and an LAC (SMS-1000), across the ATM network, to an LNS)
23. (Image slide; no transcript available)
24. Making a Call
(Diagram: WRH, Western Campus and HDGH connect via LE 25 switches over 25 Mb ATM, SMF and OC3 to the FVC VGATE and FVC V-room at the University of Windsor; shared P-Tel video uses H.261 ISDN dial-up)
25. Making a Call (same diagram as slide 24; call setup continues)
26. Making a Call (same diagram; call setup continues)
27. Codec Negotiation (same diagram; the endpoints negotiate a codec)
28. Successful Call (same diagram; the call is established)
29. Making an ISDN Call (same diagram; the ISDN dial-up path is used)
30. Making an ISDN Call (same diagram; next step)
31. Making an ISDN Call (same diagram; next step)
32. Making an ISDN Call (same diagram; next step)
33. Codec Negotiation
(Diagram: Leamington District Memorial Hospital, Centrex module)
34. Successful Call
(Diagram: the same call path as slide 24, now including video/television at Leamington District Memorial Hospital via a Centrex module)
35–47. (Image slides; no transcript available)
48. AT&T and Regional Gigapop IP Architecture
(Diagram: CAnet 3 and the AT&T Gigapop exchange routes via BGP; the AT&T route server and the regional IGPs (OCRINet w/ OHI, WEDnet, SureNet) interconnect over ATM with SVCs; routers act as RFC 1577 clients; iBGP runs within the CAnet AS and the AT&T AS, with LAN interconnect at the edges)
49–50. (Image slides; no transcript available)
51. From LAN to WAN
This server and control facility houses multiple Digital Alpha, Dell PowerEdge, IBM Netfinity and RS/6000 servers. Located at a single campus, the facility supports 400 nodes locally and 800 nodes 7.5 km away. SVCs are provisioned on separate PVPs for security, and LANE services provide VLANs for ADT systems, pharmacy, and document imaging. The systems use GUIs to assist visual references for end-users.
52–53. (Image slides; no transcript available)
54. In the Ideal World!
- Dark fibre between nodes
- Homogeneous switched architecture with minimal breakouts
- Low latency at all layers
- We will likely be dealing with something much different, unless there is about 500 M available to support and sustain the network side of grids and help minimize the synergistic effects of latency on applications
- Latency studies are important, and the synergy of latency effects is important from the processor, to the I/O architectures, to the network layers
- If commercial carriers are to be used anywhere in the path, latency should become a factor in selecting them as providers
- Effective monitoring and support of the extranet is important to the success of a GRID, unless the GRID middleware can accommodate the different types of latency and the variation that exists
- Internet routing is best-effort with variable paths every time: not likely the best GRID platform
- Research networks like CAnet 3, Internet 2, ORION, etc. are the next best bet! However, the last-mile issue still has to be addressed.
55. The Future
- It is conceivable that future Internet networks may be a seamless composite of a variety of transport protocols. An Optical Internet might be used for high-volume, best-effort computer-to-computer traffic, while IP over ATM might be used to support VPNs and mission-critical IP networks, and IP over SONET would be used to aggregate and deliver traditional IP network services that are delivered via T1s, DS3s, and Gigabit uplinks.
- From Dr. Bill St. Arnaud, CANARIE