
Transcript and Presenter's Notes

Title: NSF Activities in Shared Cyberinfrastructure


1
NSF Activities in Shared Cyberinfrastructure
  • Kevin Thompson
  • Program Director, Division of Shared
    Cyberinfrastructure
  • Directorate for Computer and Information Science
    and Engineering
  • National Science Foundation
  • November 24, 2004

2
Outline
  • SCI Guiding Principles
  • SCI Programs
  • Looking to the Future

3
NSB-02-190 on S&E infrastructure (2002), and the
more specific charge to the blue-ribbon panel on
Cyberinfrastructure (2003)
NSF Programs implement NSB Policy!
4
Research Infrastructure (NSB-02-190)
  • Hardware (tools, equipment, instrumentation,
    platforms, facilities)
  • Software (enabling computer systems, libraries,
    databases, data analysis & interpretation
    systems, comm. networks)
  • Technical support (human or automated, services
    to operate and maintain infrastructure)
  • Special environments & installations - to
    effectively create, deploy, access, and use
    research tools (building space)

5
Unmet Needs per NSB-02-190
  • Lack of long-term stable support for wetware
    archives prevents rapid advances in post-genomic
    discoveries (Bio)
  • Lack of a large-scale network infrastructure on
    which the next generation of secure network
    protocols & architectures could be developed &
    tested (CISE)
  • Lack of support for new international social
    science surveys (SBE)
  • Lack of synchrotron radiation facilities with
    orders-of-magnitude increase in luminosity (MPS)

6
Further Drill Down on NSB-02-190
  • NSB-02-190 stresses the importance and impact of
    Cyberinfrastructure, but treats it succinctly;
    the NSB linked their 2002 report to the more
    extensive (2003) discussion of CI in the Atkins
    Report
  • Meaning: focus on implementing recommendations in
    these reports, vs. creating new policy (not our
    role)

7
High Visibility
  • The nation's IT capability is like adrenaline to
    S&E as well as to the overall economy
  • Recognized, periodic infusion of large investment
    (e.g. ITR via PITAC)
  • Next step: build the most advanced research
    cyberinfrastructure while simultaneously
    broadening its accessibility, with feedback (on
    their expectations) from the broad S&E community

8
Guiding Principles for SCI at NSF
  • Serve all of science & engineering
  • Firm and continuing commitment to providing the
    most advanced cyberinfrastructure (CI), with
    high-end computing (HEC) at the core
  • Encourage emerging CI while maintaining and
    transitioning extant CI
  • Provide balance in CI equipment
  • Strong links to ongoing fundamental research to
    create future generations of CI

9
Unique Features of SCI
  • A core service mission that most divisions at
    NSF do not have
  • Frequent interactions with the entire spectrum of
    hierarchy at NSF, including the Office of the
    Director, the NSB, directorate leadership, and
    program officers across all S&E disciplines
  • High visibility of SCI's activities to Congress
    and the broad (international) S&E community

10
The Culture of Deployment
  • Within the dominant NSF culture of discovery
    research at the frontiers, SCI must create and
    sustain a subculture of timely delivery
    (deployment) of enabling infrastructure, a
    challenge of subtle complexity, because
  • The teams building this infrastructure consist of
    (e.g. are led by) S&E researchers working in the
    spirit of public service to their peer community
    (an axiom based on historical patterns of S&E
    research as the driver for new cyber
    technologies)
  • Need to create a complete team around the S&E
    leadership (e.g. sustained support for research
    staff on deployment-scale projects)

11
History of NSF CI Investments
12
Integrated CI System meeting the
needs of a community of communities
(layer diagram)
  • Applications: Environmental Science, High Energy
    Physics, Proteomics/Genomics
  • Development Tools & Libraries
  • Education and Training; Discovery & Innovation
  • Grid Services & Middleware
  • Hardware
13
SCI Active Programs
  • Centers (SDSC, NCSA, PSC)
  • TeraGrid/TCS/DTF/ETF
  • IRNC
  • NMI
  • Other initiatives
  • Other programs with no current solicitations (but
    with active award activities)

14
Extensible Terascale Facility 2003
15
(No Transcript)
16
TeraGrid Partners
17
International Research Network Connections (IRNC)
Program
  • IRNC Solicitation - a small number of awards to
    provide network connections linking U.S. research
    networks with peer networks in other parts of the
    world
  • Follow-on to HPIIS program
  • $5M total per year over 5 years
  • Production-level state-of-the-art network
    services
  • Activities will include strong
    monitoring/measurement across all funded
    connections
  • Awards
  • Not yet official! (left blank until December)

18
Other SCI-related Programs
  • Cyber Trust (CI)
  • Open now (2/5/05)
  • goal: to improve national cyber security
  • Science and Engineering Information Integration
    and Informatics (SEIII)
  • Open now (12/15/04)
  • info mgmt and data analysis applied to specific
    science domains
  • Special emphasis on domain-specific and
    general-purpose tools for integrating info from
    disparate sources
  • Science of Design
  • focuses on the scientific study of the design of
    software-intensive systems that perform
    computing, communications and information
    processing

19
Historical Programs in SCI
  • Strategic Technologies for the Internet (STI)
  • Experimental Infrastructure Networks (EIN)
  • Complementary program to NRT
  • Characteristics
  • Control/management of the infrastructure
    end-to-end
  • End-to-end performance/support w/ dedicated
    provisioning
  • Pre-market technologies, experimental h/w, alpha
    software
  • Significant collaboration vertically and across
    sites
  • Persistence, w/ repeatable network experiments
    and/or reconfigurability
  • Experimental protocols, configurations, and
    approaches for high throughput, low latency,
    large bursts (see the sketch after this list)
  • $10M spent in 2003 funds
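A minimal sketch of what a repeatable end-to-end throughput probe in this spirit could look like; the host, port, and transfer size are illustrative placeholders, not taken from any EIN award:

```python
# Repeatable throughput probe: fixed transfer size, one TCP stream,
# result reported per run so experiments can be compared over time.
# The endpoint and size below are placeholder assumptions.
import socket
import time

def tcp_throughput(host: str, port: int, nbytes: int) -> float:
    """Push nbytes over one TCP connection and return achieved Mbit/s."""
    buf = bytes(64 * 1024)
    sent = 0
    start = time.monotonic()
    with socket.create_connection((host, port)) as sock:
        while sent < nbytes:
            chunk = buf[:min(len(buf), nbytes - sent)]
            sock.sendall(chunk)
            sent += len(chunk)
    elapsed = time.monotonic() - start
    return (nbytes * 8) / elapsed / 1e6

if __name__ == "__main__":
    # Run against any discard-style listener, e.g. `nc -lk 9000 > /dev/null`.
    print(f"{tcp_throughput('127.0.0.1', 9000, 100_000_000):.1f} Mbit/s")
```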

20
Active EIN Activities
  • Collaborative Research: Dynamic Resource
    Allocation for GMPLS Optical Networks (DRAGON)
  • PIs Jerry Sobieski (UMd), Herbert Schorr (Tom
    Lehman, ISI East), and Bijan Jabbari (GMU)
  • Collaborative Research: End-to-End Provisioned
    Optical Network Testbed for Large-Scale eScience
    Applications (CHEETAH)
  • PIs Malathi Veeraraghavan (UVA), John Blondin
    (NC St.), and Nageswara Rao (ORNL)
  • Several other awards not directly relevant to
    optical networking

21
CHEETAH: Circuit-switched High-speed End-to-End
Transport ArcHitecture
  • Create a network offering end-to-end BW-on-demand
    service
  • Make it a parallel network to existing high-speed
    IP networks
  • Combination of Ethernet and SONET; extend to MPLS
    and VPLS as part of a DOE project
  • Transport protocols
  • Fixed-Rate Transport Protocol (FRTP)
  • Based on SABUL code modified for dedicated
    circuits (see the sketch at the end of this slide)
  • Terascale Supernova Initiative (TSI) Applications
  • High-throughput file transfers
  • Interactive apps like remote viz
  • Remote computational steering
  • Team: University of Virginia, Oak Ridge National
    Laboratory, North Carolina State Univ., City
    Univ. of NY
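A minimal sketch of the fixed-rate idea behind FRTP, assuming a UDP data channel over a dedicated circuit; the endpoint, rate, and datagram size are placeholders, not FRTP's actual code or parameters:

```python
# Fixed-rate pacing: on a dedicated circuit no congestion control is needed,
# so the sender simply spaces datagrams to match the reserved bandwidth.
import socket
import time

def send_fixed_rate(data: bytes, host: str, port: int,
                    rate_bps: float, mtu: int = 1400) -> None:
    """Send data as UDP datagrams paced to approximately rate_bps."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = (mtu * 8) / rate_bps          # time budgeted per datagram
    next_send = time.monotonic()
    try:
        for offset in range(0, len(data), mtu):
            sock.sendto(data[offset:offset + mtu], (host, port))
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:                    # stay on the fixed schedule
                time.sleep(delay)
    finally:
        sock.close()

if __name__ == "__main__":
    # Example: 10 MB toward a local receiver at a nominal 100 Mbit/s.
    send_fixed_rate(bytes(10_000_000), "127.0.0.1", 9000, 100e6)
```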

22
CHEETAH network
23
DRAGON Overview
  • Experimental deployment of advanced optical
    network in Washington D.C. metro area
  • Research (and development) of advanced network
    services in support of eScience applications
  • Real-time inter-domain provisioning of
    deterministic paths in response to application
    requests (a sketch of such a request follows
    this slide)
  • Associated security, AAA, scheduling capabilities
  • Automatic generation of Application Specific
    Topologies based on above capabilities
  • Collaboration with specific eScience applications
    to enhance domain science
  • eVLBI (electronic Very Long Baseline
    Interferometry)
  • High Definition (HD) format collaboration and
    remote visualization
  • See dragon.east.isi.edu
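An illustrative sketch, not DRAGON's actual interface, of the information an application-driven request for a deterministic path might carry before the control plane turns it into an end-to-end circuit; all field names and example values are assumptions:

```python
# Hypothetical path-request record for application-driven provisioning.
from dataclasses import dataclass

@dataclass
class PathRequest:
    src: str             # source endpoint, e.g. an eVLBI antenna site
    dst: str             # destination endpoint, e.g. the correlator
    bandwidth_mbps: int  # deterministic rate the application needs
    start_utc: str       # start of the reservation window (ISO 8601)
    duration_s: int      # how long the provisioned path must persist

def describe(req: PathRequest) -> str:
    """Stand-in for submission to a provisioning service; just summarizes."""
    return (f"provision {req.bandwidth_mbps} Mb/s {req.src} -> {req.dst} "
            f"starting {req.start_utc} for {req.duration_s} s")

if __name__ == "__main__":
    print(describe(PathRequest("westford-antenna", "haystack-correlator",
                               1000, "2004-11-24T00:00:00Z", 3600)))
```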

24
DRAGON Status
  • Network Research Status
  • Inter and Intra-Domain provisioning development
    and testing underway
  • Working on architectures for security, AAA, and
    scheduling capabilities
  • Application Research Status
  • eVLBI real-time correlation testing underway.
    Working on 1 Gbit/s real-time correlation between
    MIT Haystack Correlator, Westford Antenna, and
    GGAO Antenna.
  • Testing underway for provisioning of low latency
    paths for native and uncompressed HD formatted
    data streams
  • Demos of both at SC04
  • Network Deployment Status
  • Year 1 network deployment complete, Year 2
    deployment underway
  • Current Sites: NASA GSFC, UMD/MAX, GWU, USC/ISI
    East, Eckington, MIT Haystack Observatory
  • Sites anticipated for Yr 2: USNO, NCSA ACCESS
  • Anticipated Yr 2 Optical Network Peerings: NLR,
    HOPI, DOD GIGEF (formerly ATDNet-BOSSNet)
  • Other Possible Optical Network Peerings: DOE
    UltraScience Net, CHEETAH, OMNINet

25
Other SCI Activities
  • 2004 ITR Awards
  • HPWREN (PI Braun, SDSC) - see hpwren.ucsd.edu
  • Improving the Integrity of Domain Name System
    (DNS) Monitoring and Protection (PI Kim Claffy,
    SDSC) - see http://www.caida.org/projects/dnsitr/
  • ITR large projects (started prior to 2004)
  • Optiputer
  • iVDGL/GriPhyN
  • GEON, LEAD, others
  • Other CI-heavy Programs
  • NEES

26
NSF Middleware Initiative (NMI)
  • Purpose - To design, develop, deploy, and support
    a reusable, expandable set of middleware
    functions and services that benefit applications
    in a networked environment
  • Program encourages open source software
    development and development of middleware
    standards
  • Release 6 coming out in December
  • NMI fills an important need for software and
    services that address the complexity of achieving
    interoperability among heterogeneous platforms
    and environments, to let end users concentrate on
    developing productive applications rather than
    having to reinvent core middleware for each
    project.

27
NMI System Integrators
  • Disseminating and Supporting Middleware
    Infrastructure: Engaging and Expanding Scientific
    Grid Communities
  • PI Randy Butler (NCSA)
  • Designing and Building a National Middleware
    Infrastructure (NMI-2)
  • PI Carl Kesselman (USC/ISI)
  • An Integrative Testing Framework for Grid
    Middleware and Grid Environments
  • PI Miron Livny (U Wisc)
  • Extending Integrated Middleware to Collaborative
    Environments in Research and Education
  • PI Klingenstein (UCAID)
  • Instruments and Sensors as Network Services:
    Instruments as First Class Members of the Grid
  • PI Rick McMullen (Indiana)
  • Collaborative Proposal: Middleware for Grid
    Portal Development
  • PIs Pierce (Indiana - Lead), Alameda, Severance,
    Thomas, and von Laszewski

28
NMI 2004 Solicitation
  • Due May 14, 2004
  • Two Areas
  • Integration and Deployment of middleware with
    domain S&E apps to create production environments
    (2-3 years,
  • Development and prototyping of new functions and
    services (2-3 years, 50k-500k/year)
  • Anticipated Funding: 7-14 awards

29
2004 NMI Awards
  • Over 140 proposals
  • 13 awards made
  • 2 Deployment awards totaling $6M
  • 11 Development awards totaling $6M
  • 3 awards in Small Grants for Exploratory Research
    (SGERs)
  • Other Awards of note
  • See www.nsf-middleware.org

30
NMI Deployment Awards
  • nanoHub (PI Sebastien Goasguen, Purdue)
  • www.nanohub.org
  • Deploys middleware to extend/enable NSF Network
    for Computational Nanotechnology (NCN)
  • Condor-G, TeraGrid, UWisc grid assets used
  • Apps include NEMO and nanoMOS
  • 3 year timeframe

31
NMI Deployment Awards
  • GridChem (PI John Connolly, Kentucky)
  • Builds shared grid for computational chemistry
    with additional apps for genomics and proteomics
  • Condor-G, GT3, uberFTP, GridPort, NWS used
  • App codes: Gaussian, GAMESS, Molpro, NWChem,
    ACES2, Q-Chem, Crystal, NBO, Wien2K, MCCCS,
    Towhee
  • Partners: UKy, Texas, OSC, LSU, NCSA
  • 3 year timeframe

32
NMI Development Awards
  • Scalable Distributed Information Management
    System
  • PI Michael Dahlin, Texas, 3 years
  • Policy Controlled Attribute Framework
  • Collaborative, 2 years
  • PI Katarzyna Keahey, ANL
  • PI Von Welch, UIUC
  • Performance Inside: Performance Monitoring and
    Diagnosis for NMI Software and Applications
  • PI Jennifer Schopf, ANL, 2 years

33
NMI Development Awards
  • Middleware for Adaptive Robust Collaborations
    across Heterogeneous Environments and Systems
  • PI Liang Cheng, Lehigh, 3 years
  • NMI Development for Public-Resource Computing
  • PI David Anderson, UC Berkeley, 2 years
  • Mobile-Agent-Based Middleware for Distributed
    Job Coordination
  • PI Munehiro Fukuda, U Washington, 3 years
  • Energy-Conserving Middleware with Off-Network
    Control Processing in Embedded Sensor Networks
  • PI Subir Biswas, MSU, 3 years

34
NMI Development Awards
  • Design, Development, and Deployment of the Web
    Services Resource Framework on the .NET Framework
  • PI Marty Humphrey, Virginia, 2 years
  • The Computational Chemistry Prototyping
    Environment
  • PI Kim Baldridge, SDSC, 2 years
  • GridNFS
  • PI Peter Honeyman, Michigan, 3 years

35
Other NMI Related Funding
  • Keeping Condor Flight Worthy
  • PI Miron Livny, 3 years
  • Core support for Condor group activities
  • Towards a Virtual Solar-Terrestrial Observatory
  • PI Peter Fox, UCAR, 3 years
  • SEIII Program award
  • Dependable Grids
  • PI Andrew Grimshaw, UVA
  • ITR award

36
Cyberinfrastructure: the future consists of
  • Computational engines (supercomputers, clusters,
    workstations; capability and capacity)
  • Mass storage (disk drives, tapes, ...) and
    persistence
  • Networking (including optical, wireless,
    ubiquitous)
  • Digital libraries/databases
  • Sensors/effectors
  • Software (operating systems, middleware, domain
    specific tools/platforms for building
    applications)
  • Services (education, training, consulting, user
    assistance)

All working together in an integrated fashion.
37
SCI Future Deliverables
  • substantial investments to provide increasingly
    powerful cyberinfrastructure to support demands
    of modeling, data analysis & interpretation, and
    management of research
  • testbeds to develop & prove experimental
    technologies
  • expand availability of HPC & network resources to
    the broader research & education community
  • user-friendly software & better software
    integration for effective utilization of advanced
    systems
  • support for technical (support) staff for
    evolving CI

38
Looking to the Future
  • Science frontiers as the drivers
  • Balance capability and capacity
  • the Extensible Terascale Facility (ETF)
  • A strategy of a balanced and broad CI to serve
    all of science and engineering; transition from
    extant CI to the exciting possibilities of future
    CI
  • A few specifics
  • Continued support of ETF
  • CI-TEAM solicitation aimed at
    educating/training/advancing/mentoring current
    and future generations of computational
    scientists and engineers
  • Establishing a CI software center? (must address
    Grids Center functions)
  • Post-ITR program(s) and support for experimental
    networking?

39
(No Transcript)