FIU CHEPREO Tier3 Site - PowerPoint PPT Presentation

1
FIU / CHEPREO Tier3 Site
  • Ernesto Rubi, Research Network Engineer, CHEPREO
    - Florida International University
  • ernesto@ampath.net

2
FIU Tier3 Timeline
  • November 2003
  • Team consultations regarding hardware
    requirements. Quotes requested and vendor
    selection process.
  • Configuration
  • Intel Xeon 2.66 GHz - 512 KB cache - 533 MHz bus
    speed, 1U, retail box / dual Cu GigE interfaces
  • Dual 125 GB, 7200 RPM IDE HDDs, RAID 0.
  • December 2003
  • Discussions with other CHEPREO Tier2 sites
    regarding software selection process ( cluster
    framework Rocks 3.0.0, scheduler OpenPBS,
    Grid3 VDT cache ).
  • Formal request to join Grid3 - Sponsored by
    iVDGL.
  • Mid Spring 2004
  • Received first 5-node shipment ( 1 FrontEnd
    node, 4 Worker Nodes ).
  • Installation and configuration of Rocks 3.0.0 /
    VDT Cache
  • Joined Grid3 and first jobs are submitted
  • Cluster housed at FIU UP Campus ( PLC Space )
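The software stack above used OpenPBS for job scheduling. A minimal batch script of the kind submitted to such a cluster might look as follows; the job name, resource limits, and command are illustrative placeholders, not the site's actual jobs:

```bash
#!/bin/bash
#PBS -N test_job           # job name (hypothetical)
#PBS -l nodes=1:ppn=1      # request one processor on one worker node
#PBS -l walltime=00:10:00  # ten-minute wall-clock limit
#PBS -j oe                 # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"        # run from the directory the job was submitted from
hostname                   # report which worker node executed the job
```

A script like this would be submitted with `qsub test_job.pbs` and monitored with `qstat`.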

3
FIU Tier3 Timeline
  • Summer 2004
  • Need arises for a permanent space to house the
    cluster, with room for future expansion.
  • Cluster temporarily moved to AMPATH Cabinets at
    NAP of the Americas

4
FIU Tier3 Timeline
  • Summer 2004
  • Network connectivity resolved using a Cisco Cat
    3750 ( stackable, expandable, L2/L3 capable
    switch w/ 24 GigE Cu ports and 4 fiber SFP ports
    ).
  • FLR infrastructure built out at NAP. CHEPREO
    obtains a one-year contract with NAP to populate
    a 7-ft cabinet with 20 1U nodes. Power 2 x 20 A
    circuits, MPR fiber, 1U management console and IP
    KVM switch.
  • NAP delays cabinet build-out fiber supply
    shortage/delay due to shipping problems during
    the active 2004 hurricane season.
  • July 2004
  • Cluster moves to permanent cabinet home within
    FLR cage.
  • New shipment of 19 worker nodes arrives
  • Configuration Same as original 5 nodes
  • August 2004
  • New nodes are integrated into the existing
    cluster and the following is obtained
  • 1 FrontEnd node / 21 Compute Nodes / 2 Spare
    Nodes - Hot Standby
  • 22 Node Cluster on Grid3 - Grid jobs running
    ( current operating size today )
  • Scheduled site visit from Jeff McDonald at FSU for
    early December

5
FIU Tier3 Timeline
  • Summer 2004
  • QuarkNet Institute Presentation outlining to
    MDCPS physics community the status of grid
    computing at CHEPREO.
  • Interest received from workshop participants.
  • September 2004
  • OSG Workshop at Harvard University. Discussed
    technical structure and roadmap, introduction of
    OSG to other sciences, governance.
  • Fall 2004
  • Continued cluster operation.
  • Grid3 user technical support, iGOC relationship
    established and contact info published/
    distributed.
  • Remote monitoring of network health established
    via Nagios / MRTG
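Remote network-health monitoring with Nagios is driven by object definitions like the following. This is a hedged sketch only: the host name, alias, and address are hypothetical placeholders, not the site's actual configuration:

```cfg
# Hypothetical Nagios host/service definitions for a cluster FrontEnd node.
define host {
    use         generic-host
    host_name   frontend
    alias       CHEPREO FrontEnd
    address     192.0.2.10        ; documentation-range placeholder IP
}

define service {
    use                 generic-service
    host_name           frontend
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%
}
```

The `check_ping` arguments here are warning and critical thresholds (round-trip time in ms, packet loss in percent).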

6
FIU Tier3 Timeline
  • Winter 2004
  • OSG Operations Meeting at IUPUI ( Indianapolis ).
    Discuss operations requirements and model.
  • Jeff McDonald ( FSU ) - NAP Site Visit
  • Install CMS Analysis software and issue local
    accounts for application scientists.
  • Local CMS Monte Carlo simulations begin

7
FIU Grid3 Timeline
  • Winter 2004
  • Site Visit to UF Tier2 Center
  • 2 day site visit - Refine current cluster
    configuration.
  • Discuss current layout and future cluster
    components ( i.e., storage elements / separate CMS
    analysis cluster ).
  • Continued cluster operation.
  • Usage Study
  • For further review please access the Ganglia
    monitoring site running on FrontEnd node
    http://grid3.chepreo.org

8
Network Connectivity
  • Current Cluster configuration
  • Gigabit Copper backbone - 22 worker nodes, 1
    FrontEnd node.
  • Gigabit ( Multimode Fiber ) connectivity to
    AMPATH Network / Abilene
  • Future Expansion
  • Catalyst 3750 stack bus has a throughput capacity
    of 40 Gbps.
  • Additional Cat 3750 switches will allow for
    cluster expansion without compromising network
    performance.
  • FLR network will allow for even faster transfer
    times between Florida CHEPREO Grid sites. ( 10
    Gbps connectivity ).
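The connectivity above — copper GigE access ports for the nodes plus a multimode-fiber SFP uplink — could be expressed on a Catalyst 3750 roughly as follows. This is a sketch in IOS syntax; the port ranges and VLAN ID are illustrative assumptions, not the site's actual configuration:

```
! Hypothetical Catalyst 3750 fragment; port numbers and VLAN are placeholders.
interface range GigabitEthernet1/0/1 - 23
 description Cluster worker/FrontEnd nodes ( Cu GigE )
 switchport mode access
 switchport access vlan 10
!
interface GigabitEthernet1/0/25
 description Multimode-fiber SFP uplink to AMPATH / Abilene
 switchport mode trunk
```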

9
Year 3 Milestones
  • Summer 2005
  • User community dissemination improvements
  • Rocks 3.3.0 / Red Hat Scientific Linux Upgrade
  • Migration to OSG along with rest of the Grid3
    community
  • Hardware requirements - Storage Element ( RAID
    file server )
  • Continuing Operational Improvements ( Integration
    to monitoring projects ).
  • For further information
  • http://grid3.ampath.net
  • grid-support@chepreo.org