1. What's MAX Production Been Up To?
Presentation to MAX membership
Fall '07 Member Meeting
- Dan Magorian
- Director of Engineering Operations
2. What has the Production side of MAX been doing since the Spring member meeting?
- The last 6 months have seen one of the biggest network changeovers in MAX's history (the Big Move)!
- We interviewed many potential DWDM system vendors.
- Did a pseudo-RFP process with UMD Procurement helping.
- Worked with our Technical Advisory Committee (TAC, thanks guys!).
- Took field trips to the short-list vendors' sites for lab testing.
- Selected a vendor (Fujitsu Flashwave 7500).
- Got the PO expedited through UMD Purchasing in record time.
- Got delivery expedited through Fujitsu.
- Installed in the MAX lab, configured, and out to the field in one month.
- Including a lot of prep, ripping out the Movaz DWDM systems, customer coordination, and cutover in 3 main pops.
- Phase 1 of the Fujitsu Flashwave 7500 system has been selected, procured, and installed in record time!
3. Example: Baltimore DWDM installation timetable as of July. Slipped about a month, but was still very aggressive.
- 7/27  Parts arrived, breakers changed, Fujitsu 7500 PO cut.
- 8/3   Norwin and Dave I. move Force10, install MRV 10G.
- 8/3   Dan and MAX folks have 10G lambda to McLean ready.
- 8/10  Dan and MAX engineers at Fujitsu training in TX.
- 8/17  Baltimore peerings moved to Force10s/T640s.
- 8/17  Bookham filters arrived, Aegis power monitors installed.
- 8/24  M40e, Dell, 6 SP colo rack 3 cleared.
- 8/24  (depends on Fujitsu ship date) Fujitsu gear staged in lab.
- 8/31  Fujitsu DWDM and switch installed in colo rack 3.
- 9/7-14  Move participant peerings to lambdas on Bookhams.
- 9/21  Mop-up of Aegis power monitor MRTGs, etc.
4. Where are we today vs. April?
- 3 main pops in McLean (LVL3), College Park, and 6 St Paul Baltimore almost complete.
- This included a major move of our main UMD pop:
  - into the NWMD colo area of UMD bldg 224 room 0302,
  - moving out of the OIT colo area of UMD 224 room 0312.
  - Involved renovating the racks and moving over MD DHMH gear.
  - Tie fibers, unfortunately coupled with huge fiber contractor hassles.
  - NWMD folks (Greg and Tim) were very helpful.
- Still to be done:
  - Tie fiber in 6 St Paul (we'll do it ourselves next week with our new Sumitomo bulk fusion splicer).
  - Finish-up of the BERnet DWDM filter cutovers.
  - Phase 2 DWDM, replacing the ancient 2000 Luxn DWDM DC ring.
- Very proud that we moved all the customer and backbone lambdas with only tiny amounts of downtime for the cuts!
- Especially want to thank Quang, Dave, Matt, and Chris!
5. In addition to the DWDM changeover, the other pop moves have also been a huge piece of work
- In McLean, had to move the HOPI rack to Internet2's suite.
- In Baltimore, we're sharing USM's Force10 switches, and removed the BERnet Juniper M40e. Lots of cutover work.
- Moved the NGIX/E east coast Fednet peering point:
  - Procured, tested in lab, and installed in the new CLPK pop.
  - Including a lot of RMA problems with 10G interfaces.
  - Lots of jumper work, config moves, and night cuts to get done.
- On Monday we just moved out the CLPK Dell customers.
- Next up: moving the lab T640 to the new CLPK pop; new jumpers, config move, and consolidation.
- We had intended to have new dense 1U Force10 or Foundry switches selected and installed,
  - but found that their OSes were immature/unstable.
  - Had to do an initial one in the Equinix pop to support a new 10G link.
  - So we made the decision to consolidate onto Cisco 6509s for Phase 1 and postpone Phase 2 switches till spring '08.
6. RR402 before: 48V PDUs, Dell switch and inverters, Force10 (top), Juniper M40e (bottom).
7. RR402 after: Fujitsu ROADM optical shelf (top), transponder 1-16 shelf (bottom) with 2 10G lambdas installed, Cisco 2811 out-of-band DCC router with console cables, Fujitsu XG2000 color-translator XFP switch. Still to be installed: transponder 17-32 shelf to hold space.
8. RR202 after: Force10 E300 relocated (top, still needs to be moved up), 3 Bookham 40-channel filters, Aegis DWDM power monitor, tie fiber panel to RR402.
9. MAX Production topology, Spring '07
[Topology diagram: State of MD pop Baltimore (BALT, M40e); Level3 pop McLean VA (MCLN, T640); UMD pop College Park MD (CLPK, T640, NGIX); Equinix pop Ashburn VA (ASHB); GWU/Qwest DC pops (DCGW, DCNE); ISI/E Arlington VA pop (ARLG); interconnected by Rings 1-4.]
1. Original Zhone DWDM over Qwest fiber
2. Movaz DWDM over State of MD fiber
3. GigE on HHMI DWDM over Abovenet fiber
4. Univ Sys MD MRV DWDM, various fiber
10. MAX Production topology, Fall '07
[Topology diagram: Baltimore pops (660 RW, 6 St Paul) joined by new research fiber and a production 10G lambda (Prod Ring 4); Level3 pop McLean VA and UMD pop College Park MD (MCLN and CLPK T640s) joined by the 10G backbone; Equinix pop Ashburn VA (ASHB); GWU/Qwest DC pops (DCGW, DCNE); ISI/E Arlington VA pop (ARLG); NGIX at CLPK; Rings 1-4.]
1. Original Zhone DWDM over Qwest fiber
2. Fujitsu DWDM over State of MD fiber
3. 10G on HHMI DWDM over Abovenet fiber
4. 10G on Univ Sys MD MRV DWDM
11. BERnet client-side DWDM approach
[Diagram: the new 40-wavelength Fujitsu DWDM linking JHMI, JHU, MIT 300 Lexington, 6 St. Paul, 660 Redwood, and UMBC through College Park to MCLN/NLR/I2; 40-wavelength mux with ITU XFPs on the client-side path.]
- One transponder pair to pay for and provision end to end.
- (UMBC is connected at 6 SP; Sailor and Morgan have also joined.)
12. More client DWDM examples between participants
[Diagram: XFP pairs on assigned wavelengths on 40-channel DWDM filters (40 km reach, $6K) linking the JHU switch, 6 St. Paul, 660 Redwood, 300 W Lexington, and UMBC. Red lambda is to DC, blue to NYC, green local to UMBC.]
- All each participant's fiber needs is a filter pair, not a full DWDM chassis.
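The assigned wavelengths on a 40-channel filter come from the standard ITU 100 GHz C-band grid (ITU-T G.694.1). A minimal sketch of how grid slots map to frequencies and wavelengths; the starting frequency and channel numbering here are illustrative, not the actual Fujitsu/Bookham channel plan:

```python
# Sketch: ITU-T G.694.1 100 GHz DWDM grid (C-band).
# Channel numbering and start frequency are illustrative, not a vendor plan.
C = 299_792_458  # speed of light, m/s

def itu_channel(n, f0_thz=191.0, spacing_ghz=100.0):
    """Return (frequency in THz, wavelength in nm) for grid slot n."""
    f_thz = f0_thz + n * spacing_ghz / 1000.0
    wavelength_nm = C / (f_thz * 1e12) * 1e9
    return f_thz, wavelength_nm

# 40 consecutive 100 GHz channels span 4 THz (~32 nm of the C-band)
for n in (0, 20, 39):
    f, wl = itu_channel(n)
    print(f"ch {n:2d}: {f:.1f} THz = {wl:.2f} nm")
```

Each participant's ITU XFP is simply tuned (fixed) to one of these slots, which is why a passive filter pair suffices at the campus end.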
13. BERnet Production L1/L2 topology as of Nov
[Diagram: BERnet participants connect to USM Force10s at 6 St Paul and Redwood St, with cross-connects to the new BERnet research DWDM; participant production lambdas, new 10G lambdas, and the BERnet production 10G lambda ride the USM MRV and MAX Fujitsu DWDM to the MAX 6509 at CLPK and the new MAX 6509 at MCLN, and on to the CLPK and MCLN T640s; Phase 2 DC ring shown.]
14. Next steps for the Big Move
- The NGIX 6509 chassis just freed up; it moves next week to the MCLN installation with a connecting 10G lambda. This is the start of MAX's Layer 2 service offering.
- USM is finishing optical work on the 660-6SP MRV DWDM link.
- Will put in the BALT production 10G to MCLN, allowing protected double peerings with the MAX T640s.
- 40-channel filter installs at the 6 SP/660 RW ends are in (except Sailor); need to install/test the participant ends and transition fibers from 1310 to DWDM: 660-6SP, JHU/JHMI, UMBC, Sailor, Morgan. Also Pat Gary's group at CLPK. Then bring up SFP/XFP lambdas and set up Aegis power monitors/web pages.
- The move of the CLPK Juniper T640 to the new pop is the next big one.
- Hope to have all pop moves done by end of Dec/early Jan. Happy to give tours!
15. Phase 2 of the DWDM system
- In spring we will continue to unify the new Fujitsu DWDM system.
- Ironic:
  - Phase 2 is replacing our original Luxn/Zhone system from 2000,
  - while Phase 1 was replacing the Movaz/Advas that came later.
  - Those were reversed due to the need to get main Ring 2 changed first.
  - So now we're moving on to change over the original Ring 1.
- The Luxn/Zhone DWDM is now completely obsolete and really unsupported.
- Still an issue with less traffic to DC. One 10G will hold L3 traffic to participants for a while.
- Very interested in hearing about/collaborating on DC lambda needs.
- There is an initiative with the Quilt for low-cost lambdas, which we're hoping will result in a Qwest offering to MAX and the rest of the community, feeding lambdas from the original DC Qwest pop.
- Get involved with the TAC to hear details: tac_at_maxgigapop.net
16. Participant Redundant Peering Initiative
[Diagram: a campus double-peered over the Fujitsu DWDM to both the MCLN router and the CLPK router; USM, NIH, and JHU shown.]
- We have been promoting this since 2004, but now want to really emphasize that with the new DWDM infrastructure we can easily double-peer your campus to both routers for high-9s availability. 8 folks so far.
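As a back-of-the-envelope illustration of why double-peering buys "high 9s" (the per-path availability figure below is an assumption for illustration, not a measured MAX number): if each independent path is up 99.9% of the time, both fail together only one millionth of the time:

```python
# Sketch: availability of one path vs. two independent paths.
# The 99.9% per-path figure is an assumption, not a measured value.
def combined_availability(a1, a2):
    """Probability that at least one of two independent paths is up."""
    return 1 - (1 - a1) * (1 - a2)

single = 0.999  # one peering: roughly 8.8 hours of downtime per year
double = combined_availability(single, single)

print(f"single path:   {single:.3%}")
print(f"double-peered: {double:.4%}")
print(f"expected downtime/yr: {(1 - double) * 365 * 24 * 60:.1f} min")
```

The independence assumption is the whole game: it only holds because the two lambdas land on physically separate routers at separate pops.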
17. RFC 2547 VRFs (separate routing tables): expansion gives participants choice
- With the Internet2 and NLR merger not happening, a converged network does not appear to be in the cards.
- This means business as usual: I2 and NLR acting as competitors dividing the community, trying to pull RONs to their side, increasingly acrimonious.
- We intend to do our best to handle this for folks (to the extent possible) by playing in both camps and offering participants choice.
- So we have traded with VT/MATP for an NLR Layer 3 PacketNet connection, in addition to (not replacing) the I2 connection.
- Technically, we have implemented this on the Juniper T640 routers as additional VRFs (separate routing tables) which we can move participant connections into.
- Dave Diller is chief VRF wrangler; he did the tricky blend work.
18. MAX has run VRFs for years; now has 5
[Diagram: participant peering VLANs on the MAX infrastructure feed five VRFs: Cogent, I2+NLR blended, Qwest, NLR, and I2.]
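Conceptually, each VRF is just an independent forwarding table on the same router, so the same destination can resolve differently depending on which VRF a participant's VLAN is placed in. A minimal sketch of that idea; the VRF names mirror the slide, but the routes and next-hop labels are invented examples, not MAX's actual configuration:

```python
# Sketch: VRFs modeled as independent longest-prefix-match tables on one
# router. Routes and next-hop names are invented for illustration.
from ipaddress import ip_address, ip_network

vrfs = {
    "I2":      {ip_network("0.0.0.0/0"): "i2-upstream"},
    "NLR":     {ip_network("0.0.0.0/0"): "nlr-upstream"},
    "blended": {ip_network("0.0.0.0/0"): "i2-upstream",
                ip_network("198.51.100.0/24"): "nlr-upstream"},
}

def lookup(vrf, dst):
    """Longest-prefix match within a single VRF's table."""
    table = vrfs[vrf]
    matches = [net for net in table if ip_address(dst) in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return table[best]

# Same destination, different VRF, different exit:
print(lookup("I2", "198.51.100.7"))       # i2-upstream
print(lookup("blended", "198.51.100.7"))  # nlr-upstream
```

Moving a participant between offerings is then just moving their peering VLAN from one table to another; nothing about the physical connection changes.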
19. New service: Layer 2 VLANs
- Announced at the Spring member meeting.
- MAX has traditionally run Layer 1 (optical) and Layer 3 (routed IP) services.
- Only the NGIX/E exchange point is a Layer 2 service.
- There continues to be demand for non-routed L2 service (VLANs), similar to NLR's FrameNet service.
- This means that folks will be able to stretch private VLANs from DC to McLean to Baltimore over a shared 10G channel.
- Will also be able to provision dedicated Ethernets.
- Next week we're moving a Cisco 6509 out to McLean early to get this started, and will interconnect the two main switches with a 10G lambda.
- Haven't figured out service costs yet; will involve the TAC.
- Your ideas and feedback are welcome.
20. Long-distance high-level diagram of the new DWDM system (meeting MIT in Baltimore)
[Diagram: MIT Nortel DWDM at Boston, Albany, and NYC (with lambdas to Europe) extends to MIT Nortel DWDM at BALT, meeting the BERnet DWDM at 6 St Paul; BERnet participants connect through to the MAX DWDM at MCLN and CLPK, carrying NLR and I2 lambdas.]
21. New service: Flow analysis
- We announced this in the spring. It turns out to be useful to be able to characterize traffic flows passing through the MAX infrastructure for participant use.
- We bought Juniper hardware assists and a big Linux PC with lots of disk to crunch and store a year's data.
- Using the open-source flow-tools analysis packages.
- Not snooping: packet contents are not collected by NetFlow, but it does have source/destination addresses and ports. So there are some confidentiality issues; not anonymous yet.
- Have done a prototype for people to look at. Send email to noc_at_maxgigapop.net for the URL and login/password if you're interested in testing.
- Ideally, we would like a web interface where people put in ASNs:
  - Then they could look at flows to/from their institutions.
  - Could also look at protocol (traffic type), top talkers, etc.
- Interested in people's ideas and feedback during the afternoon.
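The kind of per-AS rollup behind a "top talkers" view can be sketched in a few lines. The flow records below are invented stand-ins; in the real prototype the input would come from flow-tools exports of the NetFlow data:

```python
# Sketch: aggregate NetFlow-style records into top talkers by AS pair.
# The records here are invented examples, not real MAX flow data.
from collections import Counter

flows = [  # (src_as, dst_as, bytes)
    (64500, 64501, 4_000_000),
    (64500, 64501, 1_500_000),
    (64501, 64502, 9_000_000),
    (64500, 64502, 2_000_000),
]

bytes_by_as_pair = Counter()
for src_as, dst_as, nbytes in flows:
    bytes_by_as_pair[(src_as, dst_as)] += nbytes

# Top talkers, largest first
for (src, dst), total in bytes_by_as_pair.most_common():
    print(f"AS{src} -> AS{dst}: {total / 1e6:.1f} MB")
```

A per-participant web view would simply filter this aggregation to pairs containing the ASN the user entered.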
22. FlowTools Graph Examples
23. Peering Services
- There has been some interest in either Internet2's Commercial Peering Service and/or CENIC's TransitRail.
- Right now, we offer Cogent at $16/Mb/month through GWU and Qwest at $28/Mb/month, soon to drop by several $/Mb/month based on Quilt contracts. Demand for Cogent has been slow.
- Have been thinking about mixing one or both peering services in with the Cogent offering; that might enable us to drop the price to around $10/Mb/month, depending on the traffic mix.
- The problem is, demand has to be enough to cover the peering-club and additional infrastructure costs. We tried this long ago with a direct Cogent connection; not enough folks signed up.
- Would people be interested in this? Hope to hear discussion and feedback in the afternoon sessions.
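The "around $10/Mb/month depending on traffic mix" figure is just a weighted average. A sketch with assumed numbers: the $16 Cogent rate is from the slide, but the peering-service rate and the 60/40 traffic split below are illustrative assumptions:

```python
# Sketch: blended $/Mb/month when part of the traffic shifts to a
# cheaper peering service. Rates other than Cogent's $16 and the
# traffic split are assumptions for illustration.
def blended_rate(rates_and_fractions):
    """Weighted average of (rate, traffic_fraction) pairs."""
    return sum(rate * frac for rate, frac in rates_and_fractions)

# Say 60% of traffic could ride a peering service at ~$5/Mb/month
# and the remaining 40% stays on Cogent transit at $16/Mb/month:
mix = [(5.0, 0.6), (16.0, 0.4)]
print(f"blended: ${blended_rate(mix):.2f}/Mb/month")
```

The catch named on the slide is that the fixed peering-club and infrastructure costs sit outside this per-Mb average, so the blend only works above some minimum committed demand.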
24. Closing thought: the new Fujitsu DWDM system is part of a sea change
- Not just needed to create a unified infrastructure across the MAX region and replace aging vendor hardware.
- It also lays the foundation for the dynamic circuit and lambda service activities that are happening in the community.
- Want to get people thinking about the implications of dynamic allocation: dedicated rather than shared resources.
  - Circuit-like services for high-bandwidth, low-latency projects.
  - Not a replacement for regular IP routing, but in addition to it.
  - Possible campus strategies for fanout. Need to plan for how we will deliver this, just as BERnet is doing.
  - Facilitating researcher use of this as it comes about.
- People may say, "We don't have any of those applications on our campus yet."
  - But you may suddenly have researchers with check in hand.
  - E.g., we're in the planning phase now for the DC ring and need to forecast.
- Talk to us about what you're doing and thinking!
25. Thanks!
- magorian_at_maxgigapop.net