The OptIPuter Project - PowerPoint PPT Presentation

About This Presentation
Title:

The OptIPuter Project

Description:

Director, California Institute for Telecommunications and Information Technologies ... Seismology. Tectonics. Time Series Analysis ...

Slides: 42

Transcript and Presenter's Notes

1
The OptIPuter Project: Eliminating Bandwidth as a
Barrier to Collaboration and Analysis
  • DARPA Microsystems Technology Office
  • Arlington, VA
  • December 13, 2002

Dr. Larry Smarr, Director, California Institute
for Telecommunications and Information
Technologies; Harry E. Gruber Professor, Dept. of
Computer Science and Engineering, Jacobs School
of Engineering, UCSD
2
Abstract
The OptIPuter is a radical distributed
visualization, teleimmersion, data mining, and
computing architecture. The National Science
Foundation recently awarded a six-campus research
consortium a five-year large Information
Technology Research grant to construct working
prototypes of the OptIPuter on campus, regional,
national, and international scales. The OptIPuter
project is driven by applications leadership from
two scientific communities, the US NSF's
EarthScope and the National Institutes of
Health's Biomedical Imaging Research Network
(BIRN), both of which are beginning to produce a
flood of large 3D data objects (e.g., 3D brain
images or SAR terrain datasets) which are stored
in distributed federated data repositories.
Essentially, the OptIPuter is a "virtual
metacomputer" in which the individual
"processors" are widely distributed Linux PC
clusters; the "backplane" is provided by Internet
Protocol (IP) delivered over multiple dedicated
1-10 Gbps optical wavelengths; and the "mass
storage systems" are large distributed scientific
data repositories, fed by scientific instruments
as OptIPuter peripheral devices, operated in near
real time. Collaboration, visualization, and
teleimmersion tools are provided on tiled mono or
stereo super-high-definition screens directly
connected to the OptIPuter to enable distributed
analysis and decision making. The OptIPuter
project aims at the re-optimization of the entire
Grid stack of software abstractions, learning
how, as George Gilder suggests, to "waste"
bandwidth and storage in order to conserve
increasingly "scarce" high-end computing and
people time in this new world of inverted
values.
3
The Move to Data-Intensive Science
Engineering-e-Science Community Resources
LHC
ATLAS
4
A LambdaGrid Will Be the Backbone for an
e-Science Network
Apps Middleware
Clusters
CONTROL PLANE
Dynamically Allocated Lightpaths
Switch Fabrics
Physical Monitoring
Source: Joe Mambretti, NU
5
Just Like in Computing --Different FLOPS for
Different Folks
A → Need Full Internet Routing
B → Need VPN Services On/And Full Internet Routing
C → Need Very Fat Pipes, Limited Multiple Virtual
Organizations
[Chart: bandwidth consumed vs. number of users -
class A has many users with small (DSL-scale)
flows, class C has few users with very large
(GigE LAN-scale) flows, class B sits in between]
Source: Cees de Laat
6
OptIPuter NSF Proposal Partnered with National
Experts and Infrastructure
[Map: partner networks and sites - SURFnet, CERN,
Asia Pacific links, CA*net4 (Vancouver, Seattle),
Pacific Light Rail (Portland, San Francisco,
Los Angeles, San Diego), TeraGrid DTFnet
(Chicago, NCSA, PSC, NYC, Atlanta), CENIC,
AMPATH; campuses UIC, NU, USC, UCI, UCSD, SDSU,
and SDSC (San Diego)]
Source: Tom DeFanti and Maxine Brown, UIC
7
The OptIPuter is an Experimental Network
Research Project
  • Driven by Large Neuroscience and Earth Science
    Data
  • Multiple Lambdas Linking Clusters and Storage
  • LambdaGrid Software Stack
  • Integration with PC Clusters
  • Interactive Collaborative Volume Visualization
  • Lambda Peer to Peer Storage With Optimized
    Storewidth
  • Enhance Security Mechanisms
  • Rethink TCP/IP Protocols
  • NSF Large Information Technology Research
    Proposal
  • UCSD and UIC Lead Campuses; Larry Smarr PI
  • USC, UCI, SDSU, NW Partnering Campuses
  • Industrial Partners: IBM, Telcordia/SAIC, Chiaro
    Networks
  • $13.5 Million Over Five Years

8
The OptIPuter Frontier Advisory Board
  • Optical Component Research
  • Shaya Fainman, UCSD
  • Sadik Esener, UCSD
  • Alan Willner, USC
  • Frank Shi, UCI
  • Joe Ford, UCSD
  • Optical Networking Systems
  • Dan Blumenthal, UCSB
  • George Papen, UCSD
  • Joe Mambretti, Northwestern University
  • Steve Wallach, Chiaro Networks, Ltd.
  • George Clapp, Telcordia/SAIC
  • Tom West, CENIC
  • Data and Storage
  • Yannis Papakonstantinou, UCSD
  • Paul Siegel, UCSD
  • Clusters, Grid, and Computing
  • Alan Benner, IBM eServer Group, Systems
    Architecture and Performance department
  • Fran Berman, SDSC director

First Meeting February 6-7, 2003
9
The First OptIPuter Workshop on Optical Switch
Products
  • Hosted by Calit2 @ UCSD
  • October 25, 2002
  • Organized by Maxine Brown (UIC) and Greg Hidley
    (UCSD)
  • Full Day Open Presentations by Vendors and
    OptIPuter Team
  • Examined a Variety of Technology Offerings
  • OEOEO: TeraBurst Networks
  • OEO: Chiaro Networks
  • OOO: Glimmerglass, Calient, IMMI

10
OptIPuter Inspiration--Node of a 2009 PetaFLOPS
Supercomputer
[Diagram: one node - multiple VLIW/RISC cores
(40 GFLOPS at 10 GHz), each with an 8 MB
second-level cache fed by a 24-byte-wide,
240 GB/s path; a crossbar carrying roughly
1 GB/s of coherence traffic; highly interleaved
DRAM (4-16 GB); and a 5 Terabit/s attachment to
a multi-lambda optical network]
Updated from Steve Wallach, Supercomputing 2000
Keynote
11
Global Architecture of a 2009 COTS PetaFLOPS
System
128 Die/Box, 4 CPU/Die
[Diagram: 64 multi-die multi-processor units
numbered 1-64 interconnected through a central
all-optical switch; a span of a few meters gives
roughly 50 nanoseconds of delay; I/O attaches to
the LAN/WAN, and systems become Grid enabled]
Source: Steve Wallach, Supercomputing 2000 Keynote
12
Convergence of Networking Fabrics
  • Today's Computer Room
  • Router For External Communications (WAN)
  • Ethernet Switch For Internal Networking (LAN)
  • Fibre Channel For Internal Networked Storage
    (SAN)
  • Tomorrow's Grid Room
  • A Unified Architecture Of LAN/WAN/SAN Switching
  • More Cost Effective
  • One Network Element vs. Many
  • One Sphere of Scalability
  • ALL Resources are GRID Enabled
  • Layer 3 Switching and Addressing Throughout

Source: Steve Wallach, Chiaro Networks
13
The OptIPuter Philosophy
Bandwidth is getting cheaper faster than
storage.Storage is getting cheaper faster than
computing. Exponentials are crossing.
"A global economy designed to waste transistors,
power, and silicon area - and conserve bandwidth
above all - is breaking apart and reorganizing
itself to waste bandwidth and conserve power,
silicon area, and transistors."
George Gilder, Telecosm (2000)
14
From SuperComputers to SuperNetworks--Changing
the Grid Design Point
  • The TeraGrid is Optimized for Computing
  • 1024-Node IA-64 Linux Cluster
  • Assume 1 GigE per Node → 1 Terabit/s I/O
  • Grid Optical Connection 4x10Gig Lambdas → 40
    Gigabit/s
  • Optical Connection is Only 4% of Bisection
    Bandwidth
  • The OptIPuter is Optimized for Bandwidth
  • 32-Node IA-64 Linux Cluster
  • Assume 1 GigE per Processor → 32 Gigabit/s I/O
  • Grid Optical Connection 4x10GigE → 40 Gigabit/s
  • Optical Connection is Over 100% of Bisection
    Bandwidth
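The contrast between the two design points is simple arithmetic; a quick sketch (the function name is ours, the numbers are from the slide) shows how the ratio inverts:

```python
# Bisection-bandwidth ratio: optical WAN connection vs. aggregate cluster I/O.
# Values are taken from the slide above; the function name is illustrative.

def bisection_ratio(nodes, gbps_per_node, wan_lambdas, gbps_per_lambda):
    """Return WAN optical bandwidth as a fraction of the cluster's total I/O."""
    cluster_io = nodes * gbps_per_node      # Gb/s of aggregate node I/O
    wan = wan_lambdas * gbps_per_lambda     # Gb/s of optical connection
    return wan / cluster_io

# TeraGrid design point: 1024 IA-64 nodes, 1 GigE each, 4 x 10 Gb/s lambdas
teragrid = bisection_ratio(1024, 1, 4, 10)   # about 0.04, i.e. ~4%

# OptIPuter design point: 32 nodes, 1 GigE each, same 4 x 10 Gb/s lambdas
optiputer = bisection_ratio(32, 1, 4, 10)    # 1.25, i.e. over 100%

print(f"TeraGrid:  {teragrid:.1%} of bisection bandwidth")
print(f"OptIPuter: {optiputer:.0%} of bisection bandwidth")
```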

15
Data Intensive Scientific Applications Require
Experimental Optical Networks
  • Large Data Challenges in Neuro and Earth Sciences
  • Each Data Object is 3D and Gigabytes
  • Data are Generated and Stored in Distributed
    Archives
  • Research is Carried Out on Federated Repository
  • Requirements
  • Computing Requirements → PC Clusters
  • Communications → Dedicated Lambdas Over Fiber
  • Data → Large Peer-to-Peer Lambda-Attached Storage
  • Visualization → Collaborative Volume Algorithms
  • Response
  • OptIPuter Research Project

16
The Biomedical Informatics Research Network: a
Multi-Scale Brain Imaging Federated Repository
BIRN Test-beds: Multiscale Mouse Models of
Disease, Human Brain Morphometrics, and FIRST
BIRN (10-site project for fMRI of
Schizophrenics)
NIH Plans to Expand to Other Organs and Many
Laboratories
17
Microscopy Imaging of Neural Tissue
Marketta Bobik, Francisco Capani, Eric Bushong
Confocal image of a sagittal section through rat
cortex triple labeled for glial fibrillary
acidic protein (blue), neurofilaments (green)
and actin (red)
Projection of a series of optical sections
through a Purkinje neuron revealing both the
overall morphology (red) and the dendritic
spines (green)
http://ncmir.ucsd.edu/gallery.html
18
Interactive Visual Analysis of Large Datasets
--East Pacific Rise Seafloor Topography
Scripps Institution of Oceanography Visualization
Center
http://siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml
19
Tidal Wave Threat Analysis Using Lake Tahoe
Bathymetry
Graham Kent, SIO
Scripps Institution of Oceanography Visualization
Center
http://siovizcenter.ucsd.edu/library/gallery/shoot1/index.shtml
20
SIO Uses the Visualization Center to Teach a
Wide Variety of Graduate Classes
  • Geodesy
  • Gravity and Geomagnetism
  • Planetary Physics
  • Radar and Sonar Interferometry
  • Seismology
  • Tectonics
  • Time Series Analysis

Deborah Kilb and Frank Vernon, SIO
Multiple Interactive Views of Seismic Epicenter
and Topography Databases
http://siovizcenter.ucsd.edu/library/gallery/shoot2/index.shtml
21
NSF's EarthScope Rollout Over 14 Years, Starting
With Existing Broadband Stations
22
Metro Optically Linked Visualization Walls with
Industrial Partners Set Stage for Federal Grant
  • Driven by SensorNets Data
  • Real Time Seismic
  • Environmental Monitoring
  • Distributed Collaboration
  • Emergency Response
  • Linked UCSD and SDSU
  • Dedication March 4, 2002

Linking Control Rooms
UCSD
SDSU
Cox, Panoram, SAIC, SGI, IBM, TeraBurst
Networks, SD Telecom Council
44 Miles of Cox Fiber
23
Extending the Optical Grid to Oil and Gas
Research
  • Society for Exploration Geophysicists in Salt
    Lake City Oct. 6-11, 2002
  • Optically Linked Visualization Walls
  • 80 Miles of Fiber from BP Visualization Lab to
    Univ. of Colorado
  • OC-48 Both Ways
  • Interactive Collaborative Visualization of
    Seismic Cubes and Reservoir Models
  • SGI, TeraBurst Industrial Partners
  • Organized by SDSU and Cal-(IT)2

Source: Eric Frost, SDSU
24
The UCSD OptIPuter Deployment
The OptIPuter Experimental UCSD Campus Optical
Network
To CENIC
Phase I, Fall '02; Phase II, 2003
Collocation points; Production Router (Planned);
Chiaro Router (Installed Nov 18, 2002) at Node M
Sites: SDSC, SDSC Annex, Preuss High School,
JSOE (Engineering), CRCA (Arts), SOM (Medicine),
6th (Undergrad) College, Chemistry, Phys. Sci -
Keck, SIO (Earth Sciences)
UCSD New Cost Sharing: Roughly $250K of
Dedicated Fiber (Roughly $0.20 / Strand-Foot)
Source: Phil Papadopoulos, SDSC; Greg Hidley,
Cal-(IT)2
25
OptIPuter LambdaGrid Enabled by Chiaro Networking
Router
www.calit2.net/news/2002/11-18-chiaro.html
  • Cluster ↔ Disk
  • Disk ↔ Disk
  • Viz ↔ Disk
  • DB ↔ Cluster
  • Cluster ↔ Cluster

Image Source: Phil Papadopoulos, SDSC
26
We Chose OptIPuter for Fast Switching and
Scalability
[Chart: switch fabrics compared by port count
(large vs. small) and switching speed, from λ
switching speeds (ms) to packet switching speeds
(ns); electrical fabrics and lithium niobate
shown as the endpoints]
27
Optical Phased Array Multiple Parallel Optical
Waveguides
[Diagram: 128 parallel GaAs waveguides (WG 1 ...
WG 128) separated by an air gap, feeding the
output fibers]
28
Chiaro Has a Scalable, Fully Fault Tolerant
Architecture
  • Significant Technical Innovation
  • OPA Fabric Enables Large Port Count
  • Global Arbitration Provides Guaranteed
    Performance
  • Fault-Tolerant Control System Provides Non-stop
    Performance
  • Smart Line Cards
  • ASICs With Programmable Network Processors
  • Software Downloads For Features And Standards
    Evolution

[Diagram: multiple Network Processor Line Cards
(electrical) connected through the Chiaro OPA
Fabric (optical) under Global Arbitration]
29
Planned Chicago Metro Lambda Switching OptIPuter
Laboratory
Source: Tom DeFanti, UIC
30
OptIPuter Software Research
  • Near-term Build Software To Support Advancement
    Of Applications With Traditional Models
  • High Speed IP Protocol Variations (RBUDP, SABUL,
    …)
  • Switch Control Software For DWDM Management And
    Dynamic Setup
  • Distributed Configuration Management For
    OptIPuter Systems
  • Long-Term Goals To Develop
  • System Model Which Supports Grid, Single System,
    And Multi-System Views
  • Architectures Which Can
  • Harness High Speed DWDM
  • Present To The Applications And Protocols
  • New Communication Abstractions Which Make
    Lambda-Based Communication Easily Usable
  • New Communication Data Services Which Exploit
    The Underlying Communication Abstractions
  • Underlying Data Movement Management Protocols
    Supporting These Services
  • Killer App Drivers And Demonstrations Which
    Leverage This Capability Into The Wireless
    Internet

Source: Andrew Chien, UCSD
31
OptIPuter System Opportunities
  • What's The Right View Of The System?
  • Grid View
  • Federation Of Systems Autonomously Managed,
    Separate Security, No Implied Trust
    Relationships, No Transitive Trust
  • High Overhead: Administrative And Performance
  • Web Services And Grid Services View
  • Single System View
  • More Static Federation Of Systems
  • A Single Trusted Administrative Control, Implied
    Trust Relationships, Transitive Trust
    Relationships
  • But This Is Not Quite A Closed System Box
  • High Performance
  • Securing A Basic System And Its Capabilities
  • Communication, Data, Operating System
    Coordination Issues
  • Multi-System View
  • Can We Create Single System Views Out Of Grid
    System Views?
  • Delivering The Performance Boundaries On Trust

Source: Andrew Chien, UCSD
32
OptIPuter Communication Challenges
  • Terminating A Terabit Link In An Application
    System
  • → Not A Router
  • Parallel Termination With Commodity Components
  • N 10GigE Links → N Clustered Machines (Low Cost)
  • Community-Based Communication
  • What Are
  • Efficient Protocols to Move Data in Local,
    Metropolitan, Wide Area?
  • High Bandwidth, Low Startup
  • Dedicated Channels, Shared Endpoints
  • Good Parallel Abstractions For Communication?
  • Coordinate Management And Use Of Endpoints And
    Channels
  • Convenient For Application, Storage System
  • Secure Models For Single System View
  • Enabled By Lambda Private Channels
  • Exploit Flexible Dispersion Of Data And
    Computation

Source: Andrew Chien, UCSD
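The parallel-termination idea above (N 10GigE links feeding N clustered machines) amounts to striping one big flow across N lanes and reassembling it at the endpoints. The sketch below simulates that with in-memory lanes; the names and chunk size are illustrative assumptions, not OptIPuter code:

```python
# Hypothetical sketch of "parallel termination": a single high-rate flow is
# split round-robin across N commodity links (one per cluster machine) and
# reassembled in order on the far side. Lanes are plain lists here; a real
# system would use N sockets. CHUNK is an assumed stripe unit.

CHUNK = 64 * 1024  # bytes per stripe unit (illustrative)

def stripe(data: bytes, n_links: int):
    """Split `data` into per-link chunk lists, round-robin by CHUNK."""
    lanes = [[] for _ in range(n_links)]
    for i in range(0, len(data), CHUNK):
        lanes[(i // CHUNK) % n_links].append(data[i:i + CHUNK])
    return lanes

def reassemble(lanes):
    """Inverse of stripe(): interleave chunks back into one buffer."""
    out, i = [], 0
    while any(lanes):
        lane = lanes[i % len(lanes)]
        if lane:
            out.append(lane.pop(0))  # lanes are consumed in round-robin order
        i += 1
    return b"".join(out)
```

Round-robin keeps every lane equally loaded, so aggregate throughput scales with the number of links as long as each endpoint machine can drain its own lane.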
33
OptIPuter Storage Challenges
  • DWDM Enables Uniform Performance View Of Storage
  • How To Exploit Capability?
  • Other Challenges Remain Security, Coherence,
    Parallelism
  • Storage Is a Network Device
  • Grid View: High-Level Storage Federation
  • GridFTP (Distributed File Sharing)
  • NAS File System Protocols
  • Access-control and Security in Protocol
  • Performance?
  • Single-System View: Low-Level Storage Federation
  • Secure Single System View
  • SAN Block Level Disk and Controller Protocols
  • High Performance
  • Security? Access Control?
  • Secure Distributed Storage: Threshold-
    Cryptography-Based Distribution
  • PASIS-Style Distributed Shared Secrets
  • Lambdas Minimize Performance Penalty

Source: Andrew Chien, UCSD
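The threshold-cryptography distribution mentioned above can be illustrated with Shamir secret sharing, the primitive behind PASIS-style distributed shared secrets: data is split into n shares so that any k reconstruct it and fewer reveal nothing. This is a toy sketch over a prime field that shares a single integer; a real store would shard whole data blocks, and the names are ours:

```python
# Toy Shamir secret sharing over a prime field (illustrative, not PASIS code).
# make_shares() hides `secret` as the constant term of a random degree-(k-1)
# polynomial; recover() Lagrange-interpolates the polynomial at x = 0.
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is mod P

def make_shares(secret, k, n, rng=None):
    """Split `secret` (an int < P) into n shares, any k of which suffice."""
    rng = rng or random.Random()
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Recover the secret from at least k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # modular inverse via Fermat's little theorem (P is prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

The slide's point about lambdas follows directly: fetching k shares from k repositories multiplies the traffic per read, which is cheap when dedicated wavelengths make bandwidth abundant.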
34
OptIPuter is Exploring Quanta as a High
Performance Middleware
  • Quanta is a high performance networking toolkit /
    API.
  • Reliable Blast UDP
  • Assumes you are running over an over-provisioned
    or dedicated network.
  • Excellent for photonic networks; don't try this
    on the commodity Internet.
  • It is FAST!
  • It is very predictable.
  • We give you a prediction equation to predict
    performance. This is useful for the application.
  • It is most suited for transferring very large
    payloads.
  • At higher data rates the processor is 100% loaded,
    so dual processors are needed for your application
    to move data and do useful work at the same time.

Source: Jason Leigh, UIC
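The kind of prediction equation mentioned above can be sketched as a simple cost model: each round blasts the outstanding payload at the send rate, then spends one RTT on the control-channel tally, and a fixed loss fraction is resent next round. This is our simplified illustration, not Quanta's actual formula:

```python
# Illustrative throughput model for a blast-UDP style transfer.
# Assumptions (ours, not Quanta's): fixed per-round loss rate, fixed send
# rate, and one TCP-signaled tally per round costing one RTT.

def predicted_throughput(size_bytes, send_rate_bps, rtt_s, loss=0.0,
                         max_rounds=50):
    """Return predicted goodput (bits/s) for a blast-and-retransmit transfer."""
    remaining = size_bytes * 8   # bits still to deliver
    t = 0.0
    rounds = 0
    while remaining > 1 and rounds < max_rounds:
        t += remaining / send_rate_bps   # blast the outstanding payload
        t += rtt_s                       # wait for the loss tally
        remaining *= loss                # lost fraction is resent next round
        rounds += 1
    return size_bytes * 8 / t
```

On this model a lossless 1 GB transfer at 1 Gb/s with 100 ms RTT approaches line rate, and the per-transfer RTT cost amortizes away as payloads grow, which matches the slide's claim that RBUDP suits very large payloads.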
35
TeraVision Over WAN: Greece to Chicago
Throughput
TCP Performance Over WAN Is Poor; Windows
Performance Is Lower Than Linux; Synchronization
Reduces Frame Rate.
36
Reliable Blast UDP (RBUDP)
  • At iGrid 2002, all applications which were able to
    make the most effective use of the 10G link from
    Chicago to Amsterdam used UDP
  • RBUDP [1], SABUL [2] and Tsunami [3] are all
    similar protocols that use UDP for bulk data
    transfer, all of which are based on NETBLT
    (RFC 969)
  • RBUDP has fewer memory copies and a prediction
    function to let applications know what kind of
    performance to expect.
  • [1] J. Leigh, O. Yu, D. Schonfeld, R. Ansari, et
    al., Adaptive Networking for Tele-Immersion,
    Proc. Immersive Projection Technology/Eurographics
    Virtual Environments Workshop (IPT/EGVE), May
    16-18, Stuttgart, Germany, 2001.
  • [2] Sivakumar Harinath, Data Management Support
    for Distributed Data Mining of Large Datasets
    over High Speed Wide Area Networks, PhD thesis,
    University of Illinois at Chicago, 2002.
  • [3] http://www.indiana.edu/anml/anmlresearch.html

Source: Jason Leigh, UIC
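The control flow these NETBLT-descended protocols share can be sketched as a loop: blast all outstanding datagrams over UDP, collect the receiver's acknowledgment bitmap over TCP, and resend only what is missing. The in-process simulation below (names and loss model are ours) captures that exchange without real sockets:

```python
# Minimal in-process simulation of a blast-UDP exchange like RBUDP's:
# each round blasts the outstanding numbered datagrams (each may be
# dropped), then the receiver's bitmap of arrivals comes back over the
# reliable control channel, and only the missing datagrams are resent.
import random

def rbudp_transfer(payload_chunks, loss_rate=0.2, rng=None):
    """Return (received_chunks, rounds) after a simulated blast session."""
    rng = rng or random.Random(0)
    received = {}
    missing = set(range(len(payload_chunks)))
    rounds = 0
    while missing:
        rounds += 1
        # UDP blast phase: each outstanding datagram is sent once and may
        # be dropped independently with probability `loss_rate`.
        for seq in list(missing):
            if rng.random() >= loss_rate:
                received[seq] = payload_chunks[seq]
        # TCP tally phase: the receiver reports which sequence numbers
        # arrived; the sender keeps only the gaps for the next round.
        missing = {s for s in range(len(payload_chunks))
                   if s not in received}
    return [received[i] for i in range(len(payload_chunks))], rounds
```

Because no per-datagram acknowledgments gate the blast, the sender never stalls on RTT the way TCP's congestion window does, which is why these protocols filled the 10G Chicago-Amsterdam link where TCP could not.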
37
Visualization at Near-Photographic Resolution:
The OptIPanel Version I
5x3 Grid of 1280x1024-Pixel LCD Panels Driven by
a 16-PC Cluster; Resolution 6400x3072 Pixels, or
3000x1500 Pixels in Autostereo
Source: Tom DeFanti, EVL, UIC
38
NTT Super High Definition Video (NTT 4Kx2K, 8
Megapixels) Over Internet2
Starlight in Chicago
Applications: Astronomy, Mathematics, Entertainment
www.ntt.co.jp/news/news02e/0211/021113.html
SHD = 4x HDTV = 16x DVD
USC In Los Angeles
39
The Continuum at EVL and TRECC: OptIPuter
Amplified Work Environment
Passive stereo display
AccessGrid
Digital white board
Tiled display
Source: Tom DeFanti, Electronic Visualization
Lab, UIC
40
OptIPuter Transforms Individual Laboratory
Visualization, Computation, Analysis Facilities
Fast polygon and volume rendering with
stereographics


GeoWall

3D APPLICATIONS
Underground Earth Science
Earth Science
GeoFusion GeoMatrix Toolkit
Rob Mellors and Eric Frost, SDSU; SDSC Volume
Explorer
The Preuss School UCSD OptIPuter Facility
41
Providing a 21st Century Internet Grid
Infrastructure
Wireless Sensor Nets, Personal Communicators
Routers
Tightly Coupled Optically-Connected OptIPuter Core
Routers
Loosely Coupled Peer-to-Peer Computing and Storage