1
Introduction to NETS
Marla Meehl, NETS Manager
SCD Network Engineering and Technology Section (NETS)
December 8, 1998
2
Basic Contextual Information
3
Role of NETS in UCAR
  • NETS is responsible for almost all UCAR networking
  • SCD came to manage all UCAR networking through historical
    evolution
  • Important for NETS to remain in SCD (there is periodic
    discussion of moving NETS to the UCAR administrative domain)
  • http://www.scd.ucar.edu/nets/Introducing/organizationlocation.html
  • NETS has additional SCD networking responsibilities
    (discussed later)
  • NETS is advised by NCAB (Network Coordination and Advisory
    Board)
  • NCAB reports to the SCD Director
  • NCAB has technical representatives from all parts of UCAR
  • A successful paradigm that ITC has proposed replicating for
    other UCAR-wide functions managed within an NCAR Division

4
NETS Responsibilities
  • Types of networking supported for UCAR & SCD
  • All LANs
  • All MANs
  • All WANs
  • Levels of networking supported for UCAR & SCD
  • Layer 1: all physical cabling plant for UCAR/SCD
  • Layer 2: all logical networking (VLANs/ELANs, etc.) for
    UCAR/SCD
  • Layer 3: all routing (99.9% IP) for UCAR/SCD
  • Layer 4 and above: a little support for UCAR, a lot for SCD
  • More details later

5
What NETS Doesn't Do
  • NETS responsibility ends at the wallplate
  • "Wallplate" means "telecommunications outlet" and is the
    point at which building-infrastructure network leaf-node
    cabling terminates
  • Other Divisions are responsible past the wallplate
  • This mainly means they do the host-networking part
  • NETS does consult on configuration, performance, etc.
  • Private networking beyond the wallplates isn't forbidden
  • For SCD, NETS is involved with all aspects of networking
  • Supercomputer networking
  • Host-based networking: routing, configuration, etc.
  • Special networking research projects
  • National Laboratory for Advanced Network Research (NLANR)
    Engineering
  • Hosting the NLANR/CAIDA Web Cache Research Project

6
What NETS Doesn't Do (cont.)
  • NETS doesn't do DNS, email, security policy, etc.
  • NETS does implement security perimeters based on CSAC
    recommendations
  • NETS doesn't do MSS networking (HiPPI, FC, etc.)
  • These use non-IP channel-extension protocols
  • NETS doesn't do telephones and PBXs
  • NETS does install the telephone cabling
  • And we do inter-site tie-lines
  • NETS doesn't do first-level NOC/operations
  • Handled by Computer Room Operators
  • They determine which Network Engineer to call
  • We will visit the network monitor station later

7
How Networking is Paid For
  • UCAR networking funding mechanisms
  • Space tax: all UCAR programs (including SCD) pay for
    networking via an annual tax based upon square footage
    occupied by the program (see the sketch after this slide)
  • The space tax pays for "standard service" as defined by NCAB
  • Includes all LAN, MAN, and WAN networking necessary for, and
    benefiting, UCAR as a whole
  • Includes all UCAR cabling and core networking to the
    wallplate
  • Includes 10-Mbps service to the office
  • Includes telephone wiring and inter-site telephone tie-lines
  • NETS charges back for anything beyond standard service
  • Host connects greater than 10 Mbps
  • Rush jobs (less than 1 week advance notice)
  • Special networking (e.g., satellite hookups)
  • SCD networking funding mechanism
  • Line item in SCD budget

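A minimal Python sketch of the funding model above; the structure (space
tax for standard service, chargebacks beyond it) is from the slide, but
the tax rate and chargeback prices are hypothetical, since the slides
give no figures.

    # Hypothetical illustration of the "space tax" funding model; the rate
    # and chargeback prices are invented for the example, not NETS figures.
    TAX_PER_SQFT = 2.00            # assumed annual tax, dollars per square foot
    CHARGEBACKS = {                # beyond-standard-service items (assumed prices)
        "host_connect_over_10mbps": 500.00,
        "rush_job": 250.00,
        "satellite_hookup": 1200.00,
    }

    def annual_networking_cost(square_feet, extras=()):
        """Space tax on occupied square footage, plus any chargeback items."""
        return square_feet * TAX_PER_SQFT + sum(CHARGEBACKS[e] for e in extras)

    # Example: a program occupying 12,000 sq ft with one >10-Mbps host connect.
    print(annual_networking_cost(12_000, ["host_connect_over_10mbps"]))  # 24500.0
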
8
Magnitude of NETS Work
  • NETS supports 1,136 UCAR employees
  • Located in 9 buildings at 4 different sites
  • NETS supports 3,000 network-attached devices
  • NETS supports 114 IP subnetworks
  • 46 dialup lines (via 2 all-digital PRI T1 links; each PRI
    carries 23 B-channels, so 2 × 23 = 46)
  • 100 pieces of network equipment
  • routers, switches, monitorable repeaters, etc.
  • Building cabling
  • 920 standard wallplates installed
  • 1,360 wallplates to install by end of FY2000
  • NETS consults with 63 UCAR member universities
  • Involves 700 users of just SCD facilities, with
    345 projects involving 90 university facilities

9
Networking Fun Facts
  • Total number of Ethernet switch ports available: 1,950
  • Total number of Ethernet switch ports used: 1,750
  • Total number of feet of backbone cable: 27,000 feet
  • Total number of feet of wallplate cable (see the check after
    this slide):
  • Fiber: 17,000 feet
  • CAT5: 240,000 feet
  • 10BaseT: 230,000 feet
  • Telephone: 300,000 feet
  • Total: 787,000 feet

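As a quick check, the 787,000-foot total above covers the wallplate
cable only; the 27,000 feet of backbone cable is listed separately (all
figures are from the slide).

    # Wallplate cable footage from the slide, in feet.
    wallplate_cable = {"fiber": 17_000, "cat5": 240_000,
                       "10baset": 230_000, "telephone": 300_000}
    backbone_cable = 27_000

    assert sum(wallplate_cable.values()) == 787_000        # matches the slide's total
    print(sum(wallplate_cable.values()) + backbone_cable)  # 814,000 ft with backbone
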
10
Resources Available to NETS
  • NETS budget (FY1999); see the tally after this slide
  • $2,341,100 UCAR funding to NETS
  • $261,769 SCD funding to NETS
  • Total NETS staff: 15 people
  • Types of staff
  • 8 Network Engineers
  • Perform design, operation, tuning, troubleshooting, etc.
  • 4 Network Technicians
  • Mainly Layer 1 (cabling) construction
  • 3 Administrative/Support Staff
  • Source of staff funding
  • 12 UCAR-funded staff
  • 2 SCD-funded staff
  • 1 staff member funded by outside funding (NSF NLANR Program)

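The staffing and budget figures above are self-consistent; a small
Python tally (all numbers are from the slide).

    # Staff counts by role and by funding source, from the slide.
    by_role = {"network_engineers": 8, "network_technicians": 4, "admin_support": 3}
    by_funding = {"UCAR": 12, "SCD": 2, "NSF_NLANR": 1}

    assert sum(by_role.values()) == sum(by_funding.values()) == 15  # 15 staff either way

    budget_fy1999 = 2_341_100 + 261_769               # UCAR + SCD funding to NETS
    print(f"FY1999 NETS budget: ${budget_fy1999:,}")  # $2,602,869
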
11
Overview of UCAR LANs, MANs, and WANs
12
(No Transcript)
13
LANs
14
LAN Cabling
  • Standard wallplate to each workspace
  • Connects to the nearest telecommunications closet via:
  • 4 Cat5 cables
  • 2 fiber cable pairs (62.5-micron multimode)
  • 2 Cat3 cables (mainly for telephone)
  • Only 40% of space meets this standard (920 wallplates; see
    the arithmetic after this slide)
  • 1,360 new wallplates must be installed by end of FY2000
  • Required to support Fast Ethernet (100BaseX)
  • $2,000,000 project (approved by UCAR management)
  • Closets connect to the root closet with fiber bundles
  • ML root closet is in the SCD machine room (ML 29)
  • FL root closet is in the SCD machine room (FL2 3095)
  • Network equipment goes in the closets (35 closets)

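The 40% figure follows from the wallplate counts above, assuming the
1,360 planned wallplates cover the remaining space.

    # Wallplate counts from the slide.
    installed = 920
    to_install = 1_360              # remaining wallplates, due by end of FY2000
    total = installed + to_install  # 2,280 wallplates once recabling is done

    print(f"{installed / total:.0%} of space meets the standard")  # 40%
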
15
(No Transcript)
16
(No Transcript)
17
LAN Design & Equipment
  • Backbone UCAR LAN network is largely ATM
  • OC-3 (155-Mbps) so far; some OC-12 testing
  • Use ATM ELANs in the core, one per VLAN
  • 3 Cisco ATM switches (model 1010)
  • Rest of the network is mainly switched Ethernet
  • VLAN-based (one VLAN per IP subnet; see the sketch after this
    slide)
  • 10BaseX and 100BaseX technology
  • 23 Cisco 5500 Ethernet packet switches
  • Routing
  • 4 Cisco 7507 routers
  • 1 Cisco 4700 router
  • 1 Cisco 2500 router
  • UCAR is essentially an all-Cisco shop

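A minimal Python sketch of the one-VLAN-per-IP-subnet (and
one-ELAN-per-VLAN) design described above; the VLAN IDs, ELAN names, and
subnets are hypothetical, not UCAR's actual numbering.

    # Hypothetical illustration: each IP subnet maps to exactly one VLAN, and
    # each VLAN is carried across the ATM core as one ELAN.
    import ipaddress

    vlans = {
        10: {"elan": "elan-scd",   "subnet": ipaddress.ip_network("192.168.10.0/24")},
        20: {"elan": "elan-atd",   "subnet": ipaddress.ip_network("192.168.20.0/24")},
        30: {"elan": "elan-comet", "subnet": ipaddress.ip_network("192.168.30.0/24")},
    }

    # The design requires the mappings to be one-to-one: no ELAN or subnet reused.
    assert len({v["elan"] for v in vlans.values()}) == len(vlans)
    assert len({v["subnet"] for v in vlans.values()}) == len(vlans)

    def vlan_for(host_ip):
        """Return the VLAN ID whose subnet contains the given host address."""
        addr = ipaddress.ip_address(host_ip)
        for vid, v in vlans.items():
            if addr in v["subnet"]:
                return vid
        return None

    print(vlan_for("192.168.20.37"))   # 20
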
18
(No Transcript)
19
Important LAN Projects
  • FY1999
  • FUN Recabling Project (FL4 Uniform Network)
  • ATD, MIS, COMET Computer Room Recabling
  • FL1 South Atrium Recabling
  • Y2K engineering
  • FY2000
  • Year 2000 Recabling Project
  • 100BaseX standard service implementation/expansion
  • Y2K troubleshooting

20
MANs
21
Basic MAN Networking
  • Inter-site connectivity
  • ML-FL OC-3 (155-Mbps) ATM link
  • Also carries two virtual T1 voice tie-lines
  • 10 Mbps link to Jeffco site
  • T1 (1.5 Mbps) link to Marshall site
  • UCAR-owned fiber between all FL campus buildings
  • Home dial-up to NCAR
  • 2 PRI T1 lines (46 56-Kbps/ISDN lines)
  • Cisco 5300 Remote Access Server
  • OC-3 ATM atmospheric laser link to NOAA, Boulder (owned and
    operated by NOAA)

22
(No Transcript)
23
The BRAN MAN
24
BRAN
  • Boulder Research and Administration Network
  • "Fiber for a healthy community"
  • Consortium to build private fiber loop in Boulder
  • City of Boulder
  • University of Colorado-Boulder
  • National Institute of Standards and Technology
    (NIST) - Boulder
  • National Oceanic and Atmospheric Administration
    (NOAA) - Boulder
  • NCAR/UCAR
  • Connects partners' facilities and US West & ICG POPs
  • Includes the ML-FL link of 20 fiber pairs
  • Construction estimated at $350,000/partner
  • Essentially provides unlimited free bandwidth
  • Bypasses US West
  • Provides competition between US West & ICG
25
(No Transcript)
26
WANs
27
UCAR WAN Connections
  • Commodity Internet Connection
  • DS-3 (45-Mbps) Cable & Wireless service
  • Cost-shared with local gigapop partners (more later)
  • Steady 50% utilization, 85% peaks (5-minute averages); see
    the arithmetic after this slide
  • OC-3 (155-Mbps) connection to NSF's vBNS
  • Planned OC-3 connection to UCAID's Abilene Internet2 network
  • All UCAR WAN connections are part of the Front Range GigaPop
    (FRGP) operated by NETS (details later)

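In concrete terms, the utilization figures above on a 45-Mbps DS-3 work
out as follows.

    # Commodity Internet link utilization from the slide (5-minute averages).
    ds3_mbps = 45
    steady, peak = 0.50, 0.85

    print(f"steady: {ds3_mbps * steady:.1f} Mbps")  # 22.5 Mbps
    print(f"peak:   {ds3_mbps * peak:.2f} Mbps")    # 38.25 Mbps
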
28
NSF's vBNS: very-high-speed Backbone Network Service
29
vBNS History
  • vBNS goals
  • jumpstart use of high-performance networking for
    advanced research while advancing research itself
    with high-performance networking
  • supplement the Commodity Internet, which has been inadequate
    for universities since NSFnet was decommissioned
  • vBNS started about 3 years ago with the 5 NSF
    supercomputing centers
  • vBNS started adding universities about 2 years
    ago
  • NSF funding for vBNS ends March 2000

30
vBNS: The Network
  • Operated by MCI/Worldcom
  • ATM based network using mainly IP
  • OC-12 (622-Mbps) backbone
  • OC-12 (622-Mbps), OC-3 (155-Mbps), and DS-3 (45-Mbps) to
    institutions
  • 73 institutions currently connected
  • 131 institutions approved for connection to vBNS

31
vBNS and NCAR
  • NCAR was an original vBNS node
  • 43 of 63 UCAR member universities are approved for vBNS (as
    of last check, 8/1998)
  • 28 UCAR members are currently connected
  • Major benefit for UCAR and its members
  • greatly superior to the Commodity Internet
  • example: more UNIDATA data possible
  • example: terabyte data transfers possible

32
(No Transcript)
33
UCAID's Abilene Internet2 Network
34
Abilene History
  • First called the Internet2 Project
  • Then non-profit UCAID (University Corporation for
    Advanced Internet Development) was founded
  • UCAID is patterned after the UCAR model
  • UCAID currently has 130 members (mostly
    universities)
  • Abilene is the name of UCAID's first network
  • Note: "Internet2" used to refer to
  • the Internet2 organization, which is now called UCAID
  • the actual network, which is now named Abilene
  • the concept for a future network, soon to be reality in the
    form of Abilene

35
Abilene Goals
  • Goals: jumpstart use of high-performance networking for
    advanced research while advancing research itself with
    high-performance networking (same as vBNS)
  • To be operated and managed by the members themselves
    (similar to the UCAR model)
  • Provide an alternative when NSF support of the vBNS
    terminates in March 2000

36
Abilene: The Basic Network
  • Uses a Qwest OC-48 (2.4-Gbps) fiber optic backbone
  • Will grow to an OC-192 (9.6-Gbps) fiber optic backbone
  • Qwest to donate $0.5 billion worth of fiber leases over 5
    years
  • Hardware provided by Cisco Systems and Nortel (Northern
    Telecom)
  • Internet Protocol (IP) over SONET
  • no ATM layer
  • Uses 10 core router nodes at Qwest POPs
  • Denver is one of these

37
Abilene Status
  • Abilene soon to be designated by NSF as an
    NSF-approved High-Performance Network (HPN)
  • puts Abilene on an equal basis with vBNS
  • Abilene reached peering agreement with vBNS so
    NSF HPC (High Performance Connection) schools
    have equal access to each other regardless of
    vBNS or Abilene connection
  • UCAID expects Abilene to come online in 1/1999
  • UCAID expects 50 universities online in 1/1999
  • UCAID expects 13 gigapops online in 1/1999
  • Abilene beta network now includes a half-dozen
    universities
  • plus exchanging routes with vBNS

38
Abilene and NCAR
  • 48 of 63 UCAR member universities are UCAID members (as of
    last check, 8/1998)
  • NSF funding of the vBNS terminates in March 2000
  • Same benefit for UCAR and its members as the vBNS
  • greatly superior to the Commodity Internet
  • example: more UNIDATA data possible
  • example: terabyte data transfers possible

39
(No Transcript)
40
(No Transcript)
41
The GigaPop Concept
42
What Is A GigaPop?
  • Multiple sites agree to aggregate to a central
    location and share high-speed access from there,
    instead of each maintaining direct links to
    multiple networks
  • Share costs through sharing infrastructure
  • Share Commodity Internet expenses
  • Essentially statistical multiplexing of expensive high-speed
    resources (see the sketch after this slide)
  • at any given time, much more bandwidth is available to each
    institution than it could afford without sharing
  • Share engineering and management expertise
  • More clout with vendors

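A rough sketch of the cost-sharing arithmetic behind the gigapop idea;
the per-Mbps price and member budgets are hypothetical, chosen only to
illustrate the statistical-multiplexing argument above.

    # Hypothetical illustration: members pooling WAN budgets at a gigapop buy
    # one large shared link any of them can burst into, instead of each buying
    # a small private link.
    COST_PER_MBPS = 1_000          # assumed annual cost per Mbps of WAN capacity

    member_budgets = {"site_a": 15_000, "site_b": 15_000, "site_c": 15_000}

    # Going it alone, each site affords only its own small link.
    alone_mbps = {m: b / COST_PER_MBPS for m, b in member_budgets.items()}  # 15 each

    # Pooled at the gigapop, the same money buys one 45-Mbps shared link; at
    # any given moment an idle partner's capacity is available to the others.
    shared_mbps = sum(member_budgets.values()) / COST_PER_MBPS              # 45

    print(alone_mbps["site_a"], "Mbps alone vs.", shared_mbps, "Mbps shared")
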
43
(No Transcript)
44
Front Range GigaPop (FRGP)
45
FRGP: Current NCAR Services
  • vBNS access
  • Shared Commodity Internet access
  • Intra-Gigapop access
  • Web cache hosting
  • 24 x 365 NOC (Network Operation Center)
  • Engineering and management

46
(No Transcript)
47
FRGP/Abilene: What Should NCAR Do?
  • Why should NCAR connect to Abilene?
  • fate of vBNS is unknown after March 2000
  • 48 of 63 UCAR members are also Internet2 members
  • Why should NCAR join a joint FRGP/Abilene effort?
  • combined FRGP/Abilene effort saves NCAR money
  • provides excellent intra-gigapop connectivity
  • provides greater depth and redundancy of
    commodity internet access

48
FRGP: Why NCAR as GP Operator?
  • NCAR already has considerable gigapop operational
    experience
  • NCAR is already serving the FRGP members
  • Abilene connection is an incremental addition to
    existing gigapop
  • doesn't require a completely new effort from scratch
  • NCAR already has a 24 x 365 NOC
  • NCAR has an existing networking staff to team
    with the new FRGP engineer
  • NCAR is university-neutral

49
FRGP: Membership Types
  • Full members
  • both Commodity Internet and Abilene and/or vBNS access
  • Commodity-only members
  • just Commodity Internet access

50
FRGP: Full Members
  • University of Colorado - Boulder
  • Colorado State University
  • University of Colorado - Denver
  • NCAR/UCAR
  • University of Wyoming

51
FRGP: Commodity-only Members
  • Colorado School of Mines
  • Denver University
  • University of Northern Colorado

52
FRGP: Possible Future Members
  • U of C System
  • NOAA/Boulder
  • NIST/Boulder
  • NASA/Boulder

53
FRGP: But!!!
  • This is far from a done deal at this time!
  • Members still have funding issues
  • No agreements have yet been decided
  • Latest developments
  • Qwest asked to bid on FRGP, but bid was
    unacceptably expensive

54
(No Transcript)
55
FRGP: Why Add a Denver Gigapoint?
  • Much cheaper for most members to backhaul to
    Denver instead of to existing NCAR gigapoint
  • U of Wyoming, Colorado State, UofC Denver
  • UofC Denver has computer room space that's two blocks from
    Denver's telco hotel
  • But we also don't want to re-engineer the NCAR gigapoint
  • wanted to preserve vBNS backhaul to NCAR
  • wanted to preserve MCI Commodity Internet
    backhaul to NCAR
  • wanted to minimize changes to the existing
    gigapoint
  • Incremental addition of Denver gigapoint is most
    cost-effective engineering option

56
FRGP/Abilene: Implications for NCAR
  • New annual expenses for NCAR
  • New one-time share of startup costs
  • NCAR employs & manages the new FRGP engineer
  • NCAR manages additional network equipment
  • including new off-site equipment in Denver
  • Increased engineering responsibilities for NCAR
  • Increased administrative/accounting
    responsibilities for NCAR

57
Useful URLs
  • http://www.scd.ucar.edu
  • http://www.scd.ucar.edu/nets
  • http://www.ucar.edu/ucargen/groups/ncab/
  • http://www.vbns.net
  • http://www.ucaid.net