1
GLOW: A Campus Grid within OSG
  • University of Wisconsin, Madison
  • Miron Livny, Sridhara Dasu, Todd Tannenbaum, Sean
    Murphy, Erik Paulson, Jeff Weber, Dan Bradley
  • http://www.cs.wisc.edu/condor/glow

2
Overview: GLOW and OSG
  • Why is a campus-level grid so valuable to us?
  • Why are we a part of OSG?
  • How do we make them work together?

3
Why have a campus or enterprise grid?
  • very high utilization: more diverse users, less
    wasted cycles
  • simplicity: All we need is Condor at the campus
    level. Plus, we get the full feature set rather
    than the lowest common denominator.
  • collective buying power: We speak to vendors with
    one voice.
  • consolidated administration: Fewer chores for
    scientists. Fewer holes for hackers.
  • synergy: Face-to-face technical meetings between
    members. A mailing list scales well at the campus
    level.

4
Why is GLOW part of OSG?
  • We can always use more resources.
  • But we want to share when we have a surplus.
  • Our users want to collaborate outside the bounds
    of the campus (e.g., ATLAS and CMS). Others may
    join that trend.
  • OSG does not infringe on our local control. The
    OSG grid interface does not limit our choice of
    technology within the campus grid, because it
    strives to remain independent of it.

5
What is the UW Campus Grid?
  • Condor pools at various departments, made
    accessible via Condor flocking
  • Users submit jobs to their own private or
    department Condor scheduler.
  • Jobs are dynamically matched to available
    machines.
  • No cross-campus NFS for file access.
  • People use Condor remote I/O, sandboxes, AFS,
    dCache, etc. (see the submit-file sketch below).
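
For illustration, a minimal Condor submit description
file of the sort a user might hand to condor_submit on
a department scheduler. The executable and file names
are hypothetical; Condor's own file transfer is enabled
since there is no cross-campus NFS.

    # hypothetical job: Condor ships the sandbox to the
    # execute machine instead of using a shared filesystem
    universe                = vanilla
    executable              = sim.exe
    transfer_input_files    = config.dat
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output                  = sim.out
    error                   = sim.err
    log                     = sim.log
    queue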

6
How big is the UW campus grid?
  • GLOW Condor pool is distributed across the campus
    to provide locality for its big users:
    • 1000 2.8 GHz Xeons
    • 100 TB disk
    • Over 7.6 million CPU-hours served!
  • Computer Science Condor pool: 1000 CPUs
  • High Energy Physics Condor pool: 60 2 GHz CPUs
    (expanding to house the CMS Tier-2 computing
    facility)
  • Other private pools serve as submission and
    execution points for some users. Their excess
    jobs flock to GLOW and CS pools.

7
Who Uses the UW Campus Grid?
  • Computational Genomics, Chemistry
  • High Energy Physics (CMS, Atlas, Zeus)
  • Materials by Design, Chemical Engineering
  • Radiation Therapy, Medical Physics
  • Computer Science
  • AMANDA, IceCube, Physics/Space Science
  • Condensed Matter, Physics
  • Diverse users with different conference deadlines
    and usage patterns.

8
Submitting Jobs within UW Campus Grid
[Diagram: department schedds flock among the HEP, CS,
and GLOW matchmaker pools]
  • Supports the full feature set of Condor
    (a flocking configuration sketch follows):
    • matchmaking
    • remote system calls
    • checkpointing
    • MPI universe
    • suspension, VMs
    • preemption policies
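
Roughly how flocking is wired up in Condor
configuration; the hostnames below are hypothetical
stand-ins, not the actual GLOW machines.

    # on a department submit machine: central managers
    # of the pools its schedd may flock to
    FLOCK_TO = glow-cm.example.wisc.edu, cs-cm.example.wisc.edu

    # on the GLOW pool side: schedds allowed to flock in
    FLOCK_FROM = dept-submit.example.wisc.edu

The schedd runs jobs in its own pool first and flocks
only those that remain idle, which is how excess jobs
spill over into the GLOW and CS pools.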

9
Submitting jobs through OSG to UW Campus Grid
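Roughly: an outside OSG user submits a grid-universe
(Condor-G) job to the UW gatekeeper, whose jobmanager
hands it to a campus schedd (job caretaker) for
matchmaking against startds (job executors). A sketch,
with a hypothetical gatekeeper hostname:

    # Condor-G submit file, run from an OSG site
    universe      = grid
    grid_resource = gt2 gatekeeper.example.wisc.edu/jobmanager-condor
    executable    = analyze.sh
    should_transfer_files   = YES
    when_to_transfer_output = ON_EXIT
    output        = analyze.out
    log           = analyze.log
    queue
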
10
Routing Jobs from UW Campus Grid to OSG
[Diagram: the Grid JobRouter sits alongside the HEP,
CS, and GLOW matchmakers, routing campus jobs out to
OSG]
  • Best of both worlds
    (a JobRouter configuration sketch follows):
    • simple, feature-rich local mode
    • transformable to a standard OSG job for
      traveling globally
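
A minimal sketch of a route in the JobRouter's
JOB_ROUTER_ENTRIES syntax; the route name and
gatekeeper below are hypothetical.

    # transform idle local vanilla jobs into grid
    # jobs aimed at an OSG site
    JOB_ROUTER_ENTRIES = \
       [ name = "OSG_Site_A"; \
         GridResource = "gt2 gate.example.org/jobmanager-condor"; \
       ]

The router claims an idle local job, submits a
transformed grid-universe copy, and returns the job to
the local queue if the route fails, preserving the
simple local mode as the fallback.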

11
Conclusions: GLOW and OSG
  • Our UW Condor grid is not a collection of OSG
    mini-sites, nor do we intend it to be.
  • However, interoperability with the OSG has been
    an increasingly important activity for us,
    because it brings real benefits to our users.
  • This model is emerging on many campuses today.
    We believe tomorrow's researchers will expect and
    demand to be well connected to both local and
    global computing resources.