1
National Networking Update
Basil Irwin, Senior Network Engineer
National Center for Atmospheric Research
SCD Network Engineering and Technology Section (NETS)
January 27, 1999
2
Summary of Topics
  • Review status of Next Generation Internet (NGI)
    Initiative
  • Review status of NSF's vBNS Network
  • Review status of UCAID's Abilene Network
    (Internet2)
  • Review the Gigapop concept
  • Review Front Range GigaPop (FRGP) status

3
NGI Initiative Review
4
What Is the NGI Initiative?
  • The NGI Initiative is a plan to implement the President's Oct 10, 1996 statement in Knoxville, TN of his "commitment to a new $100 million initiative, for the first year, to improve and expand the Internet . . ."
  • The Next Generation Internet (NGI) IS NOT a network
  • It's a Presidential funding initiative
  • The next step in Federal funding for seeding the evolving US networking infrastructure
  • Goal was to provide $100 million of funding annually for 3 years
  • Plan is at www.ccic.gov/ngi/concept-Jul97

5
(No Transcript)
6
How Will the NGI Initiative Work?
  • Federal research agencies with existing
    mission-oriented networks to take the lead
  • Built on Federal/private partnerships
  • Between advanced technology researchers and
    advanced application researchers
  • Between federally-funded network testbeds and
    commercial network service and equipment
    providers
  • Requires substantial private-sector matching
    funds.
  • Two to one ratio of private to Federal funds

7
(No Transcript)
8
FY1998 Funding
  • $85 million total
  • $42 million: DARPA
  • $23 million: NSF
  • $10 million: NASA
  • $5 million: NIST
  • $5 million: National Library of Medicine/National Institutes of Health
  • DOE to be added in FY1999
  • $109 million proposed for FY1999

9
Three NGI Initiative Goals
  • Main NGI goal is to advance three networking
    areas
  • Goal 1: Advanced network technologies (e.g., protocols to transfer data that is being browsed)
  • Goal 2: Advanced network infrastructure (e.g., wires and boxes that transmit the browsed data)
  • Goal 3: Revolutionary applications (e.g., Web browsers)

10
Goal 1: Advanced Network Technologies
  • Advanced technologies
  • Quality of service (QOS)
  • Security and robustness
  • Net management, including bandwidth allocation
    and sharing
  • System engineering tools, metrics, statistics,
    and analysis
  • New or modified protocols: routing, switching, multicast, security, etc.
  • Collaborative and distributed application
    environment support
  • Operating system improvements to support advanced
    services
  • Achieved by funding university, Federal, and industry R&D to develop and deploy advanced services
  • Done in an open environment, utilizing the IETF, ATM Forum, etc.

11
Goal 2: Network Infrastructure
  • Subgoal 1: Develop a demo net fabric that delivers 100 Mbps end-to-end to 100 interconnected sites
  • Accomplished by collaboration of Federal research
    institutions, telecommunications providers, and
    Internet providers
  • Interconnect and expand the vBNS (NSF), ESnet (DOE), NREN (NASA), DREN (DOD), and others (such as the Internet2/Abilene)
  • Funds universities, industry R&D, and Federal research institutions
  • Subgoal 1 fabric generally expected to be highly
    reliable

12
Goal 2: Network Infrastructure
  • Subgoal 2: Develop a demo net fabric that delivers 1000 Mbps end-to-end to about 10 interconnected sites
  • May be separate fabric with links to the Subgoal
    1 fabric, and/or may include upgraded parts of
    the Subgoal 1 fabric
  • Would involve very early technology
    implementations and wouldn't likely be as
    reliable as Subgoal 1 fabric
  • Federal agencies would take the lead
  • Commercialize advances ASAP
  • Utilize the IETF, ATM Forum, et al. to foster freely available commercial standards

13
Goal 3: Revolutionary Applications
  • (Note: Truly revolutionary applications are never found in a government-generated list)
  • Some possible "revolutionary" applications:
  • Health care: telemedicine
  • Education: distance ed, digital libraries
  • Scientific research: energy, earth systems, climate, biomed research
  • National security: high-performance global data comm
  • Environment: monitoring, prediction, warning, response
  • Government: better delivery of services
  • Emergencies: disaster response, crisis management
  • Design and manufacture: manufacturing engineering
  • "NGI will not provide funding support for applications per se"; it will fund the addition of networking to existing apps

14
NGI Initiative Expectations
  • Fund 100 high-performance connections to
    research universities and Federal research
    institutions
  • 100 science applications will use the new
    connections
  • 10 improved Federal information services
  • 30 government-industry-academia R&D partnerships
  • NGI program funding leveraged two-to-one by these partnerships

15
NGI Initiative Proposed Management
  • NGI Implementation Team
  • Under LSN Working Group
  • One member from each directly funded agency
  • (Not clear to me what say-so this Team has over
    expenditures)

16
JET: Joint Engineering Team
17
JET
  • Affinity group of
  • NSF (vBNS)
  • NASA (NREN/NISN)
  • DARPA (DREN)
  • DOE (ESnet)
  • UCAID/Internet2 (Abilene)
  • Group that is engineering the NGIXes

18
NGIXes: Next Generation Internet Exchanges
19
NGIXes
  • NAPs for Federal lab networks to interconnect
  • Layer 2
  • ATM-based
  • Minimum connection speed is OC-3
  • Replace the FIXes (really just FIX-W; FIX-E is already gone)
  • Three of 'em:
  • West coast: NASA-Ames
  • East coast: NASA-Goddard
  • Mid-continent: Chicago (at MREN/STAR TAP)

20
(No Transcript)
21
Final Thoughts
  • The NGI isn't a network; it's the improved network infrastructure that presumably results from the NGI Initiative
  • The NSF's vBNS does benefit from NGI funding.
  • The Internet2/Abilene is an activity independent
    from the NGI Initiative, and does not really
    benefit from NGI funding

22
vBNS Review
23
vBNS History
  • vBNS goals
  • jumpstart use of high-performance networking for
    advanced research while advancing research itself
    with high-performance networking
  • supplement the Commodity Internet, which has been inadequate for universities since the NSFnet was decommissioned
  • vBNS started about 3 years ago with the NSF
    supercomputing centers
  • vBNS started adding universities about 2 years
    ago
  • Currently 77 institutions connected to vBNS
  • 21 more in progress
  • 131 institutions approved for connection to vBNS
  • NSF funding for vBNS ends March 2000

24
vBNS: The Network
  • Operated by MCI
  • ATM-based network carrying mainly IP
  • OC-12 (622 Mbps) backbone
  • OC-3 (155 Mbps) or DS-3 (45 Mbps) to institutions
  • 77 institutions currently connected
  • 21 more in progress

25
(No Transcript)
26
(No Transcript)
27
(No Transcript)
28
(No Transcript)
29
(No Transcript)
30
STAR TAP
31
STAR TAP
  • Science, Technology And Research Transit Access
    Point
  • NSF-designated NAP for attachment of
    international networks to the vBNS
  • Co-located with MREN, the NSF/Ameritech NAP, and the mid-continent NGIX
  • Is really just a single large ATM switch

32
(No Transcript)
33
vBNS and NCAR
  • NCAR was an original vBNS node
  • 40 of 63 UCAR member-universities are approved
    for vBNS (at last check on 8/1998)
  • Major benefit for UCAR and its members
  • greatly superior to the Commodity Internet
  • example: more UNIDATA data possible
  • example: terabyte data transfers possible (see the worked example after this list)

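A rough back-of-the-envelope sketch, using only the DS-3, OC-3, and OC-12 rates quoted on the vBNS slides and ignoring protocol overhead, shows why terabyte transfers only become practical on the high-performance links:

```python
# Rough transfer-time comparison for a 1-terabyte dataset at the link
# rates quoted on the vBNS slides (DS-3 45 Mbps, OC-3 155 Mbps,
# OC-12 622 Mbps).  Protocol overhead and real-world throughput limits
# are ignored; this is only an order-of-magnitude sketch.

TERABYTE_BITS = 1e12 * 8  # 1 TB expressed in bits

link_rates_mbps = {"DS-3": 45, "OC-3": 155, "OC-12": 622}

for name, mbps in link_rates_mbps.items():
    seconds = TERABYTE_BITS / (mbps * 1e6)
    print(f"{name:>6}: {seconds / 3600:5.1f} hours")

# Approximate output:
#   DS-3:  49.4 hours
#   OC-3:  14.3 hours
#  OC-12:   3.6 hours
```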
34
Abilene Review
35
Abilene History
  • First called the Internet2 Project
  • Then non-profit UCAID (University Corporation for
    Advanced Internet Development) was founded
  • UCAID is patterned after the UCAR model
  • UCAID currently has 130 members (mostly
    universities)
  • Abilene is the name of UCAID's first network
  • Note: Internet2 has been used to refer to:
  • the Internet2 organization, which is now called UCAID
  • the actual network, which is now named Abilene
  • the concept for a future network, soon to be
    reality in the form of Abilene

36
Abilene Goals
  • Goals jumpstart use of high-performance
    networking for advanced research while advancing
    research itself with high-performance networking
    (same as vBNS)
  • But to be operated and managed by the members
    themselves, like the UCAR model
  • Provide an alternative when NSF support of the vBNS terminates in March 2000

37
Abilene The Basic Network
  • Uses Qwest OC48 (2.4Gbps) fiber optic backbone
  • grow to OC192 (9.6Gbps) fiber optic backbone
  • Qwest to donate .5 billion worth of fiber leases
    over 5 years
  • Hardware provided by Cisco Systems and Nortel
    (Northern Telecom)
  • Internet Protocol (IP) over SONET
  • no ATM layer
  • Uses 10 core router nodes at Qwest POPs
  • Denver is one of these

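The 2.4 Gbps and 9.6 Gbps figures presumably refer to the SONET payload rates rather than the raw line rates; a small sketch, assuming standard SONET framing, shows how both follow from the STS-1 building block:

```python
# SONET rate check for the OC-48 and OC-192 figures quoted above.
# An STS-1 frame is 810 bytes sent 8000 times per second (51.84 Mbps
# line rate); its synchronous payload envelope is 87 columns x 9 rows
# = 783 bytes (50.112 Mbps).  OC-n simply multiplies these by n.

STS1_LINE_MBPS = 810 * 8 * 8000 / 1e6        # 51.84 Mbps
STS1_PAYLOAD_MBPS = 87 * 9 * 8 * 8000 / 1e6  # 50.112 Mbps

for n in (48, 192):
    line = n * STS1_LINE_MBPS / 1000       # Gbps
    payload = n * STS1_PAYLOAD_MBPS / 1000
    print(f"OC-{n}: line {line:.2f} Gbps, payload {payload:.2f} Gbps")

# OC-48 : line 2.49 Gbps, payload 2.41 Gbps  (the "2.4 Gbps" on the slide)
# OC-192: line 9.95 Gbps, payload 9.62 Gbps  (the "9.6 Gbps" on the slide)
```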
38
(No Transcript)
39
(No Transcript)
40
(No Transcript)
41
Abilene Status
  • Abilene soon to be designated by NSF as an
    NSF-approved High-Performance Network (HPN)
  • puts Abilene on an equal basis with vBNS
  • Abilene reached peering agreement with vBNS so
    NSF HPC (High Performance Connection) schools
    have equal access to each other regardless of
    vBNS or Abilene connection
  • UCAID expects Abilene to come online 2/1999
  • UCAID expects 50 universities online by 2/1999
  • UCAID expects 13 gigapops online by 2/1999
  • Abilene beta network now includes a half-dozen universities
  • and is already exchanging routes with the vBNS

42
(No Transcript)
43
(No Transcript)
44
Abilene and NCAR
  • 48 of 63 UCAR member-universities are UCAID
    members (at last check on 8/1998)
  • NSF funding of vBNS terminates March 2000
  • Same benefit for UCAR and its members as vBNS
  • greatly superior to the Commodity Internet
  • example: more UNIDATA data possible
  • example: terabyte data transfers possible

45
The GigaPop Concept
46
(No Transcript)
47
GigaPops: What Good Are They?
  • Share costs through sharing infrastructure
  • Aggregate to a central location and share
    high-speed access from there
  • Share Commodity Internet expenses
  • Essentially statistical multiplexing of expensive high-speed resources (see the sketch after this list)
  • at any given time much more bandwidth is available to each institution than each could afford without sharing
  • Share engineering and management expertise
  • More clout with vendors

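A toy simulation, with invented demand figures, illustrates the statistical-multiplexing point: bursts from different institutions rarely coincide, so the shared pipe can be far smaller than the sum of the individual peak demands:

```python
# Toy illustration of the statistical-multiplexing argument for a
# shared gigapop uplink.  Demand figures are invented for the sketch;
# only the shape of the argument matters: bursts from different
# institutions rarely line up, so the shared pipe can be much smaller
# than the sum of the individual peak demands.

import random

random.seed(1)
institutions = 8
samples = 10_000  # independent demand samples per institution

max_combined = 0.0          # peak of the combined (multiplexed) demand
peaks = [0.0] * institutions  # each institution's own peak demand

for _ in range(samples):
    # Each institution is mostly idle (5 Mbps) with occasional 100 Mbps bursts.
    demands = [100.0 if random.random() < 0.05 else 5.0 for _ in range(institutions)]
    max_combined = max(max_combined, sum(demands))
    peaks = [max(p, d) for p, d in zip(peaks, demands)]

print(f"sum of individual peaks : {sum(peaks):6.0f} Mbps")
print(f"peak of combined demand : {max_combined:6.0f} Mbps")
# Typically the combined peak is a few hundred Mbps, far below the
# 800 Mbps that provisioning every site for its own peak would imply.
```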
48
Front Range GigaPop (FRGP)
49
FRGP: Current NCAR GigaPop Services
  • vBNS access
  • Shared Commodity Internet access
  • Intra-Gigapop access
  • Web cache hosting
  • 24 x 365 NOC (Network Operations Center)
  • Engineering and management

50
(No Transcript)
51
FRGP/Abilene: What Should NCAR Do?
  • Why should NCAR connect to Abilene?
  • Abilene gives NCAR additional connectivity to
    most of its member institutions
  • fate of vBNS is unknown after March 2000
  • 48 of 63 UCAR members are also Internet2 members
  • Why should NCAR join a joint FRGP/Abilene effort?
  • combined FRGP/Abilene effort saves NCAR money

52
FRGP: Why NCAR as GP Operator?
  • NCAR already has considerable gigapop operational
    experience
  • NCAR is already serving the FRGP members
  • Abilene connection is an incremental addition to
    existing gigapop
  • doesn't require a completely new effort from scratch
  • NCAR already has a 24 x 365 NOC
  • at no extra charge
  • NCAR has an existing networking staff to team
    with the new FRGP engineer
  • at no extra cost
  • NCAR is university-neutral

53
FRGP: Membership Types
  • Full members
  • both Commodity Internet and Abilene access
  • Commodity-only members
  • just Commodity Internet access

54
FRGP: Full Members
  • University of Colorado - Boulder
  • Colorado State University
  • University of Colorado - Denver
  • University Corporation for Atmospheric Research
  • University of Wyoming

55
FRGP: Commodity-only Members
  • Colorado School of Mines
  • Denver University
  • University of Northern Colorado

56
FRGP: Possible Future Members
  • U of C System
  • NOAA/Boulder
  • NIST/Boulder
  • NASA/Boulder

57
FRGP: But!!!
  • This is far from a done deal at this time!
  • Members still have funding issues
  • No agreements have yet been decided
  • Etc.

58
(No Transcript)
59
FRGP: Why a Denver Gigapoint?
  • Much cheaper for most members to backhaul to
    Denver instead of to existing NCAR gigapoint
  • U of Wyoming, Colorado State, UofC Denver
  • UofC Denver has computer room space that's two blocks from Denver's telco hotel.
  • But we also don't want to re-engineer the NCAR gigapoint
  • wanted to preserve vBNS backhaul to NCAR
  • wanted to preserve MCI Commodity Internet
    backhaul to NCAR
  • wanted to minimize changes to the existing
    gigapoint
  • Incremental addition of Denver gigapoint is most
    cost-effective engineering option

60
FRGP: Routing Engineering
  • Must deal with so-called policy-based routing
  • that is, IP forwarding based on the packet's source IP address
  • example: some schools can use Abilene and some can't
  • Without high-speed source-IP-address routing, this requires one forwarding table (router) per policy (see the sketch after this list)
  • FRGP has three identified policies at this time, for:
  • Commodity Internet only institutions
  • Commodity Internet + Abilene institutions
  • Commodity Internet + Abilene + vBNS institutions
  • Use ATM and PVCs to construct the router topology that implements these policies
  • Note: distributed gigapoints require care to site routers optimally

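A minimal sketch of what "one forwarding table per policy" means in practice (prefixes and policy names below are invented for illustration): the source address selects a policy, and the policy determines which upstreams (Commodity Internet, Abilene, vBNS) may carry the packet:

```python
# Minimal sketch of source-address policy routing as described above:
# the packet's source prefix selects one of the three FRGP policies,
# and each policy has its own forwarding table (its own set of
# permitted upstreams).  Prefixes and institution groupings are
# invented for illustration only.

from ipaddress import ip_address, ip_network

# Which policy each (hypothetical) campus prefix belongs to.
POLICY_BY_SOURCE = {
    ip_network("192.0.2.0/24"):    "commodity-only",
    ip_network("198.51.100.0/24"): "commodity+abilene",
    ip_network("203.0.113.0/24"):  "commodity+abilene+vbns",
}

# Upstreams each policy's forwarding table is allowed to use.
UPSTREAMS_BY_POLICY = {
    "commodity-only":         ["commodity"],
    "commodity+abilene":      ["commodity", "abilene"],
    "commodity+abilene+vbns": ["commodity", "abilene", "vbns"],
}

def allowed_upstreams(src: str) -> list[str]:
    """Return the upstreams a packet with this source address may use."""
    addr = ip_address(src)
    for prefix, policy in POLICY_BY_SOURCE.items():
        if addr in prefix:
            return UPSTREAMS_BY_POLICY[policy]
    return ["commodity"]  # unknown sources get Commodity Internet only

print(allowed_upstreams("203.0.113.7"))  # ['commodity', 'abilene', 'vbns']
print(allowed_upstreams("192.0.2.200"))  # ['commodity']
```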
61
(No Transcript)
62
FRGP: Abilene + Commodity Budget
  • $851,000 total annual recurring costs
  • $133,000 per Full Member (5)
  • $62,000 per Commodity-only Member (3)
  • $150,000 one-time Abilene equipment costs
  • This includes the following costs
  • existing FRGP costs
  • existing Commodity Internet access costs
  • new Abilene costs
  • Does not include vBNS costs, campus-backhaul
    costs, or other local campus costs
  • (Reduced per-member costs if more join)

63
FRGP: Annual Expenses
  • Abilene + Commodity expenses (tallied in the sketch after this list)
  • Generic UCAID/Abilene costs:
  • $25,000 UCAID annual dues (per full FRGP member × 5)
  • $20,000 per-institution Abilene fee (per full FRGP member × 5)
  • $110,000 per-gigapop Abilene fee (shared by 5)
  • $12,000 per-gigapop Qwest/Abilene port fee (shared by 5)
  • Costs specific to FRGP:
  • $56,000 shared Boulder-Denver OC-3 link (shared by 5)
  • $27,000 shared Denver-Qwest OC-3 link (shared by 5)
  • $281,000 shared Commodity access fees (shared by 8)
  • $140,000 other operational costs (shared by 8)
  • engineer's salary
  • hardware maintenance
  • travel

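As a sanity check, the line items above do sum to the $851,000 total annual recurring cost on the budget slide once the per-member items are multiplied by the 5 full members; the short tally below shows the arithmetic:

```python
# Tally of the FRGP annual expense items listed above.  Per-member
# items are multiplied by the 5 full members; the shared items are
# taken at face value.  The result matches the $851,000 total annual
# recurring cost quoted on the budget slide.

FULL_MEMBERS = 5

per_full_member = {
    "UCAID annual dues": 25_000,
    "per-institution Abilene fee": 20_000,
}
shared = {
    "per-gigapop Abilene fee": 110_000,
    "per-gigapop Qwest/Abilene port fee": 12_000,
    "Boulder-Denver OC-3 link": 56_000,
    "Denver-Qwest OC-3 link": 27_000,
    "Commodity access fees": 281_000,
    "other operational costs": 140_000,
}

total = sum(per_full_member.values()) * FULL_MEMBERS + sum(shared.values())
print(f"total annual recurring cost: ${total:,}")  # $851,000
```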
64
FRGP: Abilene Implications for NCAR
  • New annual expenses of about $110,000 for NCAR
  • Plus NCAR's $50,000 share of startup costs
  • NCAR employs and manages the new FRGP engineer
  • NCAR manages additional network equipment
  • including new off-site equipment in Denver
  • Increased engineering responsibilities for NCAR
  • Increased administrative/accounting
    responsibilities for NCAR

65
Summary of URLs
  • www.ngi.gov
  • www.ccic.gov/ngi/concept-Jul97
  • www.vbns.net
  • www.vbns.net/presentations/workshop/vbns_tutorial/index.htm
  • www.startap.net
  • www.internet2.edu
  • www.ccic.gov/jet
  • pointer to NAPs, major Federal networks, etc.