Title: Supercomputer End Users: the OptIPuter Killer Application
1. Supercomputer End Users: the OptIPuter Killer Application
- Keynote
- DREN Networking and Security Conference
- San Diego, CA
- August 13, 2008
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information Technology (Calit2)
Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
2. Abstract
During the last few years, a radical
restructuring of optical networks supporting
e-Science projects has occurred around the world.
U.S. universities are beginning to acquire access
to high bandwidth lightwaves (termed "lambdas")
on fiber optics through the National LambdaRail,
Internet2's Circuit Services, and the Global
Lambda Integrated Facility. The NSF-funded
OptIPuter project explores how user-controlled 1- or 10-Gbps lambdas can provide direct access to global data repositories, scientific instruments, and computational resources from researchers' Linux clusters in their campus laboratories. These end-user clusters are reconfigured as "OptIPortals," providing the end user with local scalable visualization, computing, and storage.
Integration of high-definition video with OptIPortals creates a high-performance collaboration workspace of global reach. An emerging major new user community is the end users of NSF's TeraGrid and DOD's HPCMP, who can now connect optically to remote Tera- or Peta-scale resources from their local laboratories and bring disciplinary experts from multiple sites into the local data and visualization analysis process.
3. "What we really have to do is eliminate distance between individuals who want to interact with other people and with other computers."
Larry Smarr, Director, NCSA
Interactive Supercomputing Collaboratory Prototype: Using Analog Communications to Prototype the Fiber Optic Future
SIGGRAPH 1989
Illinois
Boston
"We're using satellite technology to demo what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations."
Al Gore, Senator, Chair, US Senate Subcommittee on Science, Technology and Space
4. Chesapeake Bay Simulation Collaboratory: vBNS Linked CAVE, ImmersaDesk, Power Wall, and Workstation
Alliance Project: Collaborative Video Production via Tele-Immersion and Virtual Director
Alliance Application Technologies: Environmental Hydrology Team
Alliance, 1997
4 MPixel PowerWall
UIC
Donna Cox, Robert Patterson, Stuart Levy, NCSA Virtual Director Team; Glenn Wheless, Old Dominion Univ.
5. ASCI Brought Scalable Tiled Walls to Support Visual Analysis of Supercomputing Complexity
1999
LLNL Wall: 20 MPixels (3x5 Projectors)
An Early sPPM Simulation Run. Source: LLNL
6. 60 Million Pixel Projected Wall Driven by Commodity PC Cluster
At 15 Frames/s, the System Can Display 2.7 GB/s
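(A quick sanity check, assuming 3 bytes per pixel: 60 Mpixels x 3 bytes/pixel x 15 frames/s = 2.7 GB/s.)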
2002
Source: Philip D. Heermann, DOE ASCI Program
7. Challenge: How to Bring This Visualization Capability to the Supercomputer End User?
2004
35 Mpixel EVEREST Display (ORNL)
8. The OptIPuter Project: Creating High Resolution Portals Over Dedicated Optical Channels to Global Science Data
Scalable Adaptive Graphics Environment (SAGE)
Now in Sixth and Final Year
Picture Source: Mark Ellisman, David Lee, Jason Leigh
Calit2 (UCSD, UCI), SDSC, and UIC Leads; Larry Smarr, PI
Univ. Partners: NCSA, USC, SDSU, NW, TAM, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
9. Challenge: Average Throughput of NASA Data Products to End User is 50 Mbps
Tested May 2008
Internet2 Backbone is 10,000 Mbps! End Users Get < 0.5% of That (50 / 10,000 Mbps)
http://ensight.eos.nasa.gov/Missions/aqua/index.shtml
10. Dedicated 10 Gbps Lambdas Provide Cyberinfrastructure Backbone for U.S. Researchers
10 Gbps per User: ~200x the ~50 Mbps Shared Internet Throughput
Interconnects Two Dozen State and Regional
Optical Networks
Internet2 Dynamic Circuit Network Under
Development
NLR: 40 x 10 Gb Wavelengths, Expanding with Darkstrand to 80
11. 9 Gbps Out of 10 Gbps: Disk-to-Disk Performance Using LambdaStream between EVL and Calit2
CAVEWave: 20 senders to 20 receivers (point to point)
- San Diego to Chicago: Effective Throughput 9.01 Gbps (450.5 Mbps disk-to-disk per stream)
- Chicago to San Diego: Effective Throughput 9.30 Gbps (465 Mbps disk-to-disk per stream)
TeraGrid: 20 senders to 20 receivers (point to point)
- San Diego to Chicago: Effective Throughput 9.02 Gbps (451 Mbps disk-to-disk per stream)
- Chicago to San Diego: Effective Throughput 9.22 Gbps (461 Mbps disk-to-disk per stream)
Dataset: 220 GB of Satellite Imagery of Chicago, courtesy USGS. 3,000 files, each a 5000 x 5000 RGB image of ~75 MB.
Source: Venkatram Vishwanath, UIC EVL
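A few lines of Python reproduce the arithmetic behind these figures (assuming 3 bytes per RGB pixel and decimal megabytes):

streams = 20
per_stream_mbps = 450.5                  # disk-to-disk rate per stream, San Diego to Chicago
print(streams * per_stream_mbps / 1000)  # 9.01 Gbps aggregate, matching the slide

file_mb = 5000 * 5000 * 3 / 1e6          # 5000 x 5000 pixels x 3 bytes (RGB) = 75.0 MB per file
print(3000 * file_mb / 1000)             # 3,000 files = 225 GB (~220 GB quoted above)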
12. NLR/I2 is Connected Internationally via the Global Lambda Integrated Facility
Source: Maxine Brown, UIC, and Robert Patterson, NCSA
13. OptIPuter / OptIPortal: Scalable Adaptive Graphics Environment (SAGE) Applications
MagicCarpet: Streaming the Blue Marble dataset from San Diego to EVL using UDP, 6.7 Gbps
Bitplayer: Streaming an animation of a tornado simulation using UDP, 516 Mbps
SVC: Locally streaming live HD camera video using UDP, 538 Mbps
JuxtaView: Locally streaming aerial photography of downtown Chicago using TCP, 850 Mbps
~9 Gbps in Total: SAGE Can Support These Applications Simultaneously Without Decreasing Their Performance
Source: Xi Wang, UIC/EVL
14. OptIPuter Software Architecture: a Service-Oriented Architecture Integrating Lambdas Into the Grid
Grid middleware: Globus (XIO, GSI, GRAM)
Lambda transport protocols: GTP, XCP, UDT, LambdaStream, CEP, RBUDP
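These transport protocols exist because TCP's congestion control underuses a dedicated lightpath. As a flavor of the approach, here is a minimal, hypothetical Python sketch of the sender side of the reliable-blast idea behind RBUDP: bulk data is blasted over UDP, and a TCP side channel reports which sequence numbers were lost. The framing and names here are invented for illustration; this is not the EVL implementation.

# Illustrative sketch of the reliable-blast idea; framing is hypothetical.
import socket, struct

CHUNK = 1400  # UDP payload bytes per datagram (assumed MTU-friendly size)

def recv_exact(sock, n):
    # Read exactly n bytes from a TCP socket.
    buf = b""
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise ConnectionError("peer closed")
        buf += part
    return buf

def rbudp_send(data, udp_addr, tcp_sock):
    # Blast `data` over UDP; re-blast only the chunks the receiver
    # reports missing, until none remain.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    missing = list(range(len(chunks)))
    while missing:
        # 1. Blast every still-missing chunk, tagged with its sequence number.
        for seq in missing:
            udp.sendto(struct.pack("!I", seq) + chunks[seq], udp_addr)
        # 2. Tell the receiver (reliably, over TCP) that this blast round is done.
        tcp_sock.sendall(b"DONE")
        # 3. Receiver replies with a count and list of sequence numbers it never saw.
        (n,) = struct.unpack("!I", recv_exact(tcp_sock, 4))
        missing = list(struct.unpack("!%dI" % n, recv_exact(tcp_sock, 4 * n)))

The blast loop converges because each round retransmits only the lost datagrams; on a clean dedicated lambda the first round usually carries nearly everything, which is what lets these protocols fill the pipe where TCP cannot.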
15. Two New Calit2 Buildings Provide New Laboratories for Living in the Future
- Convergence Laboratory Facilities
- Nanotech, BioMEMS, Chips, Radio, Photonics
- Virtual Reality, Digital Cinema, HDTV, Gaming
- Over 1000 Researchers in Two Buildings
- Linked via Dedicated Optical Networks
UC Irvine
www.calit2.net
Preparing for a World in Which Distance is
Eliminated
16. The Calit2 1/4 Gigapixel OptIPortals at UCSD and UCI Are Joined to Form a Gbit/s HD Collaboratory
UCSD Wall to Campus Switch at 10 Gbps
Calit2@UCSD wall
NASA Ames Visit Feb. 29, 2008
UCSD cluster: 15 x Quad-core Dell XPS with Dual nVIDIA 5600s
UCI cluster: 25 x Dual-Core Apple G5
17. Cisco TelePresence Provides Leading-Edge Commercial Video Teleconferencing
- 191 Cisco TelePresence Systems in Major Cities Globally
- US/Canada: 83 CTS 3000, 46 CTS 1000
- APAC: 17 CTS 3000, 4 CTS 1000
- Japan: 4 CTS 3000, 2 CTS 1000
- Europe: 22 CTS 3000, 10 CTS 1000
- Emerging: 3 CTS 3000
- Overall Average Utilization is 45%
- 13,450 Meetings Avoided Travel to Date (Based on an Average of 8 Participants): $107.60M Saved to Date
- Cubic Meters of Emissions Saved: 16,039,052 (6,775 Cars off the Road)
- 85,854 TelePresence Meetings Scheduled to Date
- Weekly Average is 2,263 Meetings
- 108,736 Hours; Average is 1.25 Hours per Meeting
Uses QoS Over the Shared Internet at 15 Mbps
Cisco Bought WebEx
Source: Cisco, 3/22/08
18. e-Science Collaboratory Without Walls Enabled by Uncompressed HD Telepresence Over 10 Gbps
iHDTV: 1,500 Mbits/sec from Calit2 to UW Research Channel Over NLR
May 23, 2007
John Delaney, PI, LOOKING and NEPTUNE
Photo: Harry Ammons, SDSC
19. OptIPlanet Collaboratory: Persistent Infrastructure Supporting Microbial Research
Photo Credit: Alan Decker
Feb. 29, 2008
Ginger Armbrust's Diatoms: Micrographs, Chromosomes, Genetic Assembly
iHDTV: 1,500 Mbits/sec from Calit2 to UW Research Channel Over NLR
UW's Research Channel: Michael Wellings
20. OptIPortals Are Being Adopted Globally
U Zurich
SARA, Netherlands
Brno, Czech Republic
U. Melbourne, Australia
Calit2@UCI
21. Green Initiative: Can Optical Fiber Replace Airline Travel for Continuing Collaborations?
Source: Maxine Brown, OptIPuter Project Manager
22. AARNet International Network
23. Launch of the 100 Megapixel OzIPortal Over Qvidium-Compressed HD on 1 Gbps CENIC/PW/AARNet Fiber
No Calit2 Person Physically Flew to Australia to
Bring This Up!
January 15, 2008
COVISE: Phil Weber, Jurgen Schulze, Calit2; CGLX: Kai-Uwe Doerr, Calit2
www.calit2.net/newsroom/release.php?id=1219
24. Victoria Premier and Australian Deputy Prime Minister Asking Questions
www.calit2.net/newsroom/release.php?id=1219
25. University of Melbourne Vice-Chancellor Glyn Davis, in Calit2, Replies to a Question from Australia
26. OptIPuterizing Australian Universities in 2008: CENIC Coupling to AARNet
UMelbourne/Calit2 Telepresence Session: May 21, 2008
Two-Week Lecture Tour of Australian Research Universities by Larry Smarr: October 2008
Phil Scanlan, Founder, Australian American Leadership Dialogue (www.aald.org)
AARNet roadmap: by 2011, up to 80 x 40 Gbit channels
27. First Trans-Pacific Super High Definition Telepresence Meeting Using Digital Cinema 4K Streams
4K: ~4000 x 2000 Pixels, ~4x HD
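(Check: 4000 x 2000 = 8 Mpixels, versus 1920 x 1080 ≈ 2.1 Mpixels for HD, hence roughly 4x.)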
Streaming 4K with JPEG 2000 Compression: ~½ Gigabit/sec
100 Times the Resolution of YouTube!
Lays Technical Basis for Global Digital Cinema
Sony, NTT, SGI
Calit2@UCSD Auditorium
28. From Digital Cinema to Scientific Visualization: JPL Supercomputer Simulation of Monterey Bay
4K Resolution = 4x High Definition
Source: Donna Cox, Robert Patterson, NCSA; Funded by NSF LOOKING Grant
29. Rendering Supercomputer Data at Digital Cinema Resolution
Source: Donna Cox, Robert Patterson, Bob Wilhelmson, NCSA
30. EVL's SAGE Global Visualcasting to Europe, September 2007
Gigabit Streams
Source: Luc Renambot, EVL
31. Creating a California Cyberinfrastructure of OptIPuter On-Ramps to NLR and TeraGrid Resources
UC Davis
UC Berkeley
UC San Francisco
UC Merced
UC Santa Cruz
Creating a Critical Mass of OptIPuter End Users on a Secure LambdaGrid: CENIC Workshop at Calit2, Sept 15-16, 2008
UC Los Angeles
UC Riverside
UC Santa Barbara
UC Irvine
UC San Diego
Source: Fran Berman, SDSC, and Larry Smarr, Calit2
32. CENIC's New Hybrid Network: Traditional Routed IP plus the New Switched Ethernet and Optical Services
$14M Invested in Upgrade
Now Campuses Need to Upgrade
Source: Jim Dolgonas, CENIC
33. The Golden Spike: UCSD Experimental Optical Core Ready to Couple Users to CENIC L1, L2, L3 Services
CENIC L1, L2 Services
Lucent
Glimmerglass
Force10
Funded by NSF MRI Grant
Cisco 6509 OptIPuter Border Router
Source: Phil Papadopoulos, SDSC/Calit2 (Quartzite PI, OptIPuter co-PI)
34. Calit2 Sunlight Optical Exchange Contains Quartzite
10:45 am, Feb. 21, 2008
35. Block Layout of UCSD Quartzite/OptIPuter Network
Glimmerglass OOO Switch
50 x 10 Gbps Lightpaths; 10 More to Come
36. Calit2 Microbial Metagenomics Cluster: Next-Generation Optically Linked Science Data Server
37. Calit2 3D Immersive StarCAVE OptIPortal Enables Exploration of High Resolution Simulations
15 Meyer Sound Speakers + Subwoofer
Connected at 50 Gb/s to Quartzite
30 HD Projectors!
Passive Polarization: Optimized Polarization Separation and Minimized Attenuation
Source: Tom DeFanti, Greg Dawe, Calit2
Cluster with 30 Nvidia 5600 Cards: 60 GB Texture Memory
38. Next Step: Experiment on OptIPuter/OptIPortal with Remote Supercomputer Power User
1 Billion Light-Year Pencil From a 2048^3 Hydro/N-Body Simulation
M. Norman, R. Harkness, P. Paschos
Working on Putting It in the Calit2 StarCAVE
Structure of the Intergalactic Medium
1.3 M SUs on NERSC Seaborg; 170 TB Output
Source: Michael Norman, SDSC, UCSD
39. The Livermore Lightcone: 8 Large AMR Simulations Covering 10 Billion Years of Look-Back Time
- 1.5 M SU on LLNL Thunder
- Generated 200 TB Data
- 0.4 M SU Allocated on SDSC DataStar for Data
Analysis Alone
512^3 Base Grid with 7 Levels of Adaptive Refinement → ~65,000 Spatial Dynamic Range (512 x 2^7 = 65,536)
Livermore Lightcone Tile 8
Source: Michael Norman, SDSC, UCSD
40. An 8192 x 8192 Image Extracted from Tile 8: How to Display/Explore It?
Working on Putting It on the Calit2 HIPerWall OptIPortal
Digital Cinema Image
41. 2x
42. 4x
43. 8x
44. 16x
(Successive zoom levels into the 8192 x 8192 image)
45. 300 Million Pixels of Viewing Real Estate for Visually Analyzing Supercomputer Datasets
Goal: Link Norman's Lab OptIPortal Over Quartzite, CENIC, and NLR/TeraGrid to Petascale Track 2 Systems Ranger@TACC and Kraken@NICS by October 2008