CASTOR: CERN - PowerPoint PPT Presentation

1
CASTOR: CERN's data management system
  • CHEP03
  • 25/3/2003
  • Ben Couturier, Jean-Damien Durand, Olof Bärring
    CERN

2
Introduction
  • CERN Advanced STORage Manager
  • Hierarchical Storage Manager used to store user
    and physics files
  • Manages secondary and tertiary storage
  • History
  • Development started in 1999 based on SHIFT,
    CERN's tape and disk management system in use
    since the beginning of the 1990s (SHIFT was
    awarded the 21st Century Achievement Award by
    Computerworld in 2001)
  • In production since the beginning of 2001
  • Currently holds more than 9 million files and
    2000 TB of data
  • http://cern.ch/castor/

3
Main Characteristics (1)
  • CASTOR Namespace
  • All files belong to the /castor hierarchy
  • Access rights are standard UNIX permissions
  • POSIX Interface
  • Files are accessible through a standard POSIX
    interface; all calls are rfio_xxx (e.g.
    rfio_open, rfio_close)
  • RFIO Protocol
  • All remote file access is done using the Remote
    File I/O (RFIO) protocol, developed at CERN.
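Since the rfio_xxx calls mirror their POSIX counterparts one-for-one, the access pattern can be sketched with plain POSIX calls. A minimal sketch using Python's os module — the rfio_ names in the comments are the real CASTOR calls; everything else (function name, buffer size) is illustrative:

```python
import os

def copy_file(src, dst, bufsize=65536):
    """Copy src to dst with raw POSIX-style descriptor calls.

    With the CASTOR RFIO client library, rfio_open on a /castor
    path would replace os.open below; the call pattern is identical.
    """
    fd_in = os.open(src, os.O_RDONLY)                          # rfio_open
    fd_out = os.open(dst, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        while True:
            buf = os.read(fd_in, bufsize)                      # rfio_read
            if not buf:                                        # EOF
                break
            os.write(fd_out, buf)                              # rfio_write
    finally:
        os.close(fd_in)                                        # rfio_close
        os.close(fd_out)
```

Because the interface is POSIX-shaped, existing file-handling code needs little more than a rename of the I/O calls to work against the /castor namespace.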

4
Main Characteristics (2)
  • Modularity
  • The components in CASTOR have well-defined roles
    and interfaces, so a component can be replaced
    without affecting the whole system
  • Highly Distributed System
  • CERN uses a highly distributed configuration with
    many disk servers and tape servers
  • CASTOR can also run in a more limited environment
  • Scalability
  • The number of disk servers, tape servers and name
    servers is not limited
  • Use of RDBMS (Oracle, MySQL) to improve the
    scalability of some critical components

5
Main Characteristics (3)
  • Tape drive sharing
  • A large number of drives can be shared among
    users or dedicated to specific users/experiments
  • Drives can be shared with other applications
    (TSM, for example)
  • High Performance Tape Mover
  • Use of threads and circular buffers
  • Overlaid device and network I/O
  • Grid Interfaces
  • A GridFTP daemon interfaced with CASTOR is
    currently in test
  • An SRM interface (V1.0) for CASTOR has been
    developed
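The thread-and-circular-buffer design of the tape mover can be illustrated with a minimal producer/consumer sketch (illustrative only — the class and function names are not from the CASTOR sources): one thread fills buffer blocks while another drains them, which is how device and network I/O overlap.

```python
import threading
from collections import deque

class CircularBuffer:
    """Bounded FIFO of data blocks shared by two threads, so that
    (for example) a network-reader and a tape-writer run concurrently."""
    def __init__(self, nblocks=4):
        self.blocks = deque()
        self.nblocks = nblocks
        lock = threading.Lock()
        self.not_full = threading.Condition(lock)
        self.not_empty = threading.Condition(lock)

    def put(self, block):
        with self.not_full:
            while len(self.blocks) >= self.nblocks:
                self.not_full.wait()          # producer blocks when full
            self.blocks.append(block)
            self.not_empty.notify()

    def get(self):
        with self.not_empty:
            while not self.blocks:
                self.not_empty.wait()         # consumer blocks when empty
            block = self.blocks.popleft()
            self.not_full.notify()
            return block

def run_transfer(source_blocks, sink):
    """Move blocks from source to sink with overlapped threads."""
    buf = CircularBuffer()

    def reader():                             # e.g. the network/disk side
        for b in source_blocks:
            buf.put(b)
        buf.put(None)                         # end-of-stream marker

    def writer():                             # e.g. the tape-device side
        while True:
            b = buf.get()
            if b is None:
                break
            sink.append(b)

    t1 = threading.Thread(target=reader)
    t2 = threading.Thread(target=writer)
    t1.start(); t2.start()
    t1.join(); t2.join()
```

With enough blocks in flight, the slower side (usually the tape device) streams continuously instead of waiting for each network read to complete.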

6
Hardware Compatibility
  • CASTOR runs on
  • Linux, Solaris, AIX, HP-UX, Digital UNIX, IRIX
  • The clients and some of the servers run on
    Windows NT/2K
  • Supported drives
  • DLT/SDLT, LTO, IBM 3590, STK 9840, STK 9940A/B
    (and old drives already supported by SHIFT)
  • Libraries
  • SCSI Libraries
  • ADIC Scalar, IBM 3494, IBM 3584, Odetics, Sony
    DMS24, STK Powderhorn

7
CASTOR Components
  • Central servers
  • Name Server
  • Volume Manager
  • Volume and Drive Queue Manager (Manages the
    volume and drive queues per device group)
  • UPV (Authorization daemon)
  • Disk subsystem
  • RFIO (Disk Mover)
  • Stager (Disk Pool Manager and Hierarchical
    Resource Manager)
  • Tape Subsystem
  • RTCOPY daemon (Tape Mover)
  • Tpdaemon (PVR)
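The VDQM's role of managing volume and drive queues per device group can be modelled as a map of FIFO queues. A toy sketch — the class and method names here are hypothetical, not the actual VDQM interface:

```python
from collections import defaultdict, deque

class VolumeQueueManager:
    """Toy VDQM-like model: one FIFO request queue and one free-drive
    pool per device group (e.g. '9940B', 'LTO')."""
    def __init__(self):
        self.requests = defaultdict(deque)     # group -> pending volume IDs
        self.free_drives = defaultdict(deque)  # group -> idle drives

    def request_volume(self, group, vid):
        """Queue a mount request; return a (volume, drive) pairing if
        a drive in that group is already free, else None."""
        self.requests[group].append(vid)
        return self._dispatch(group)

    def drive_free(self, group, drive):
        """Report a drive as idle; dispatch the oldest pending request."""
        self.free_drives[group].append(drive)
        return self._dispatch(group)

    def _dispatch(self, group):
        if self.requests[group] and self.free_drives[group]:
            vid = self.requests[group].popleft()
            drive = self.free_drives[group].popleft()
            return (vid, drive)
        return None
```

Keeping the queues per device group is what lets drives be dedicated to, or shared among, users and experiments as described on slide 5.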

8
CASTOR Architecture
[Architecture diagram showing the components: RFIO client, STAGER,
NAME servers, VDQM servers, CUPV, VOLUME manager, MSGD,
RTCPD (tape mover), RFIOD (disk mover), and the DISK POOL]
9
CASTOR Setup at CERN
  • Disk servers
  • 140 disk servers
  • 70 TB of staging pools
  • 40 stagers
  • Tape drives and servers
  • Libraries
  • 2 sets of 5 Powderhorn silos (2 x 27500
    cartridges)
  • 1 Timberwolf (1 x 600 cartridges)
  • 1 L700 (1 x 600 cartridges)

Model     Drives  Servers
9940B       21      20
9940A       28      10
9840        15       5
3590         4       2
DLT7000      6       2
LTO          6       3
SDLT         2       1
10
Evolution of Data in CASTOR
11
Tape Mounts per group
12
Tape Mounts per drive type
13
Tape Mounts per drive type
14
ALICE Data Challenge
  • Migration rate of 300 MB/s sustained for a week
  • Using 18 STK T9940B drives
  • 20 disk servers managed by 1 stager
  • A separate name server was used for the data
    challenge
  • See the presentation by Roberto Divia
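A quick back-of-the-envelope check of these figures (assuming decimal megabytes): 300 MB/s sustained for a week comes to about 181 TB migrated, or roughly 17 MB/s per drive across the 18 drives.

```python
# Back-of-the-envelope check of the ALICE data challenge figures.
rate_mb_s = 300                  # sustained migration rate (MB/s)
seconds_per_week = 7 * 24 * 3600
drives = 18                      # STK T9940B drives

total_tb = rate_mb_s * seconds_per_week / 1e6   # MB -> TB (decimal)
per_drive_mb_s = rate_mb_s / drives

print(round(total_tb, 1))        # total data migrated, in TB
print(round(per_drive_mb_s, 1))  # average rate per drive, in MB/s
```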