Volunteer Computing: The Big Cloud
David P. Anderson, Space Sciences Lab, U.C. Berkeley

1

Volunteer Computing: The Big Cloud
David P. Anderson, Space Sciences Lab, U.C. Berkeley
2
What individuals own
  • 1 billion PCs
  • 100 million CUDA-capable GPUs
  • 3.3 billion mobile devices
  • It's the biggest cloud (and always will be).
  • It's underutilized.

3
How to access this resource?
  • One approach: ask for volunteers
  • BOINC: middleware for this model

[Diagram: volunteers attach to projects such as World Community Grid, Climateprediction.net, Rosetta@home, and Superlink@Technion]
4
A brief history of volunteer computing
A timeline from 1995 to 2008, in two tracks:
  • Applications: distributed.net and GIMPS (1995); SETI@home and Folding@home (2000); Predictor@home, WCG, Einstein, Rosetta, ... (2005)
  • Platforms: academic (Bayanihan, Javelin, ...); commercial (Entropia, United Devices, ...); BOINC (2005)
5
Volunteer computing today
  • 1,000,000 volunteer PCs
  • 50 projects
  • 3 PetaFLOPS
  • PS3s, GPUs, and CPUs: about 1/3 each
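A back-of-the-envelope check of these two figures (the per-host average is my arithmetic, not stated on the slide):

```python
# Rough arithmetic implied by the slide's numbers.
total_flops = 3e15    # 3 PetaFLOPS sustained across the network
hosts = 1_000_000     # volunteer PCs

avg_per_host = total_flops / hosts
print(f"average per host: {avg_per_host / 1e9:.0f} GFLOPS")  # 3 GFLOPS
```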

6
Cost per TeraFLOPS-year
  • Cluster: $124,000
  • Amazon EC2: $1,750,000
  • Volunteer computing: $2,000
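The ratios are striking; a quick check of the slide's numbers (dollars per TeraFLOPS-year is my assumed unit):

```python
# The slide's costs per TeraFLOPS-year (dollars assumed as the unit).
cost = {
    "cluster": 124_000,
    "Amazon EC2": 1_750_000,
    "volunteer computing": 2_000,
}

base = cost["volunteer computing"]
for name, c in cost.items():
    print(f"{name}: {c / base:.0f}x the volunteer cost")
# cluster is 62x, EC2 is 875x the volunteer cost
```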

7
Nice things about the Big Cloud
  • self-maintaining
  • self-upgrading
  • leverages consumer electronics
  • energy usage:
    • distributed
    • volunteers pay for it
    • free if PCs replace heaters

8
Inconveniences of the Big Cloud
  • hardware, software heterogeneity
  • nodes have high churn
  • nodes are anonymous, untrusted, unreliable
  • nodes are sporadically available
  • while they are available, your computation must be invisible to the owner
  • limited, constrained network connectivity
  • different from Grid and Cloud computing
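A standard remedy for anonymous, untrusted nodes is replicated computation with result validation, which is BOINC's approach in spirit; a minimal sketch with illustrative names, not BOINC's actual API:

```python
from collections import Counter

def validate(results, quorum=2):
    """Majority-vote validation: accept a result once `quorum`
    replicas agree; otherwise return None (issue more replicas)."""
    counts = Counter(results)
    value, n = counts.most_common(1)[0]
    return value if n >= quorum else None

# Three untrusted hosts return results for the same work unit;
# one host is faulty or malicious.
assert validate([42, 42, 17]) == 42
assert validate([42, 17]) is None   # no quorum yet; send another replica
```

Churn and sporadic availability are handled the same way: a replica that never comes back is simply reissued to another host.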

9
What's it good for?
  • high-throughput computing
  • physical simulations
  • compute-intensive data analysis
  • genetic and Monte-Carlo algorithms
  • high-latency distributed storage
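These workloads fit because work units are independent of one another; a toy sketch (not BOINC code) of splitting a Monte Carlo pi estimate into volunteer-sized units:

```python
import random

def work_unit(seed, samples=100_000):
    """One independent work unit: count random points that land
    inside the unit circle. Units share no state, so each can run
    on any volunteer host at any time."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(samples))
    return hits, samples

# The "server" merges results as they trickle in from volunteers.
results = [work_unit(seed) for seed in range(10)]
hits = sum(h for h, _ in results)
total = sum(n for _, n in results)
print(f"pi ~= {4 * hits / total:.3f}")
```

Seeding each unit separately keeps results reproducible, which is what makes replicated validation of untrusted hosts possible in the first place.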

10
Virtualization
  • Client side (application VM image)
    • solves many heterogeneity problems
    • what about large image sizes?
    • CernVM
      • build a minimal VM per app (100 MB)
      • install and cache packages
      • use the client for existing job-queueing systems within the VM
  • Server side
    • BOINC virtual server
    • BOINC server on a commercial cloud?

11
Mobile devices
  • Hardware convergence
  • Processor
    • 4 GFLOPS/watt (10x BlueGene/L)
  • Software
    • Android
    • BOINCoid (Technion)
    • OpenMoko