1
OO Software and Data Handling in AMS
Vitali Choutko, Alexei Klimentov
MIT, ETHZ
  • Computing in High Energy and Nuclear Physics
  • Beijing, September 3-7, 2001

2
Outline
  • AMS: a particle physics experiment on the
    International Space Station
  • Data flow and AMS ground centers
  • Software development
  • Conditions and Tag Database
  • Data Processing
  • AMS Detector
  • STS-91 precursor flight
  • AMS ISS mission

3
AMS: a particle physics experiment in space
PHYSICS GOALS
Accurate, high-statistics measurements of charged
cosmic ray spectra in space above 0.1 GV:
  • The study of dark matter (90%?)
    Measure nuclei and e- spectra
  • Determination of the existence or absence of
    antimatter in the Universe
    Look for negative nuclei
  • The study of the origin and composition of
    cosmic rays
    Measure isotopes D, He, Li, Be
4
(No Transcript)
5
Precursor flight
10^8 events recorded; trigger rates 0.1-1 kHz; DAQ lifetime 90%
Results
  • Anti-matter search: anti-He/He < 1.1 x 10^-6
  • Charged Cosmic Ray spectra: p, D, e-, He, N
  • Geomagnetic effects on CR: under/over geomagnetic
    cutoff components

Detector elements:
  • Magnet: Nd2Fe14B
  • TOF: trigger, velocity and Z
  • Si Tracker: charge sign, rigidity, Z
  • Aerogel Threshold Cerenkov: velocity
  • Anticounters: reject multi-particle events
6
(No Transcript)
7
AMS on ISS, 3 years in space
  • Separate e- from p, anti-p up to 300 GeV
  • 3He, 4He, B, C
  • e-, γ up to 1000 GeV
8
(No Transcript)
9
ISS to Remote AMS Centers Data Flow
[Flow diagram, only partially recoverable from the transcript:
real-time and dump data plus LOR playback travel from AMS (and ACOP)
on the ISS through the White Sands, NM facility and a High Rate Frame
MUX into the NASA ground infrastructure; the Payload Operations
Control Center (MSFC, AL) receives real-time and dump data for
monitoring science data, while the Payload Data Service System handles
H&S, monitoring, science, and flight ancillary data; short-term and
long-term stored data reach the GSE and, via near-real-time file
transfer, the Science Operations Center, telescience centers, and
remote AMS sites.]
10
AMS Ground Centers
[Block diagram, only partially recoverable from the transcript:
the POCC at POIC@MSFC, AL handles RT data, commanding, monitoring,
and NRT analysis through TReK workstations, the HOSC web server and
xterm, a voice loop, and video distribution; commands are archived.
Monitoring and H&S data, flight ancillary data, and selected AMS
science data pass through external communications to the Science
Operations Center, which performs NRT data processing, primary
storage, archiving, distribution, and science analysis on a PC farm
and a production farm over AMS data, NASA data, and metadata. The GSE
buffers data and retransmits it to the SOC. A data server feeds the
analysis facilities, MC production, and data mirror archiving at the
AMS remote center and the AMS stations.]
11
AMS SW development
  • Started in mid-1996
  • Basic decisions:
  • new code in C++ only (though a large part of the
    legacy SW was written in Fortran)
  • existing libraries (CERNLIB, GEANT, etc.)
    incorporated via a C/Fortran interface (R.Burow)
  • transient and persistent classes are separated,
    with copy member functions to convert between them
    (see the sketch after this list)
  • decided to use ROOT and HBOOK for histogramming
    and data visualization
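
A minimal sketch of this transient/persistent split in C++; the class
and member names (TrTrackP, TrTrack) are illustrative assumptions, not
the actual AMS classes:

#include <cstdint>

// Persistent representation: plain data only, safe to write to storage.
struct TrTrackP {
    std::int32_t charge;    // reconstructed charge
    float        rigidity;  // GV
    float        beta;      // velocity in units of c
};

// Transient representation: used during reconstruction and analysis.
class TrTrack {
public:
    // Copy member functions convert between the two representations.
    void CopyTo(TrTrackP& p) const {
        p.charge   = fCharge;
        p.rigidity = fRigidity;
        p.beta     = fBeta;
    }
    void CopyFrom(const TrTrackP& p) {
        fCharge   = p.charge;
        fRigidity = p.rigidity;
        fBeta     = p.beta;
        fChi2     = -1.f;  // transient-only state is rebuilt, not stored
    }
private:
    std::int32_t fCharge   = 0;
    float        fRigidity = 0.f;
    float        fBeta     = 0.f;
    float        fChi2     = -1.f;  // fit quality, never persisted
};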

12
AMS SW development (contd)
  • Use different persistency solutions for different
    types of data:
  • flat files for the raw data
  • ntuples and ROOT files for ESD
  • relational database (Oracle) tables for file
    catalogues
  • relational database (Oracle; Objectivity up to
    Sep 1998) for:
  • event tags
  • calibration data
  • slow control data
  • NASA ancillary data
  • various catalogues (processing history, etc.)

13
Tag Storage with Oracle RDBMS
  • A tag is an unsigned 32-bit integer containing
    16 parameters, each 1 to 5 bits long, such as charge,
    momentum sign, β, ... (a packing sketch follows this list)
  • Benchmark model:
  • Query: retrieve tags whose 3 parameters satisfy
    the given limits (query taken from the real
    analysis chain)
  • Data stored on a RAID array connected to an AS4100
    (quad CPU rated at 600 MHz, 2 GB RAM)
  • Flat files: 2400 files, one file per DAQ run;
    tags stored as an array of unsigned int
  • RootN: 10 files, each file with 240 trees,
    one tree per DAQ run with a single branch (tag)
    per tree
  • RootS: 10 files, each file with 240 trees,
    one tree per DAQ run, having 16 branches;
    every parameter stored in a dedicated branch
  • OracleN: table with 10 partitions and 1 column,
    mapping the tag to a column
  • OracleI: table with 10 partitions and 1 column
    with 16 bitmap indices, mapping the tag to a column
  • OracleS: table with 10 partitions and 16 columns,
    every parameter mapped to a column
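
A hypothetical C++ sketch of the tag packing and the benchmark query;
the field layout, widths, and cut values below are invented for
illustration, and only the overall scheme (16 short parameters packed
into one unsigned 32-bit integer) comes from the slide:

#include <cstdint>

using Tag = std::uint32_t;

// Extract a field of `width` bits starting at bit `shift`.
inline std::uint32_t field(Tag t, unsigned shift, unsigned width) {
    return (t >> shift) & ((1u << width) - 1u);
}

// Example accessors (assumed layout, not the real AMS one).
inline std::uint32_t charge(Tag t)       { return field(t, 0, 4); }
inline std::uint32_t momentumSign(Tag t) { return field(t, 4, 1); }
inline std::uint32_t betaBin(Tag t)      { return field(t, 5, 5); }

// The benchmark query: keep tags whose 3 parameters fall within limits.
inline bool selected(Tag t) {
    return charge(t) == 2          // e.g. helium candidates
        && momentumSign(t) == 1
        && betaBin(t) >= 12;
}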

14
Oracle RDBMS to store AMS tags
[Performance comparison of the storage schemes; plot/table not
captured in the transcript]
1) 500 sec to build the indices for 100M tags
15
Design of the Conditions Database
  • Collection of Time Dependent Values (TDVs)
  • Primary access keys: name, id, validity interval
  • Secondary key: insert time
  • Major components: table of names and ids,
    default TDVs, TDVs
  • Applications:
  • loading data into the database
  • fetching conditions during event reconstruction
  • management utilities (TDV browser)
  • Each TDV record holds (see the sketch after this list):
  • name, id
  • validity begin and validity end time
  • insert time
  • array of unsigned integers (size 100 bytes to 8 MB)
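
A minimal C++ sketch of a TDV record and of the fetch performed during
event reconstruction; the types and the fetch function are assumptions,
only the field list comes from the slide:

#include <cstdint>
#include <ctime>
#include <string>
#include <vector>

struct TDV {
    std::string                name;           // primary key, with id
    std::uint32_t              id;
    std::time_t                beginValidity;  // validity interval
    std::time_t                endValidity;
    std::time_t                insertTime;     // secondary key
    std::vector<std::uint32_t> data;           // payload, 100 bytes to 8 MB
};

// Among records whose validity interval contains eventTime,
// pick the most recently inserted one.
inline const TDV* fetch(const std::vector<TDV>& tdvs, std::time_t eventTime) {
    const TDV* best = nullptr;
    for (const auto& t : tdvs)
        if (t.beginValidity <= eventTime && eventTime < t.endValidity)
            if (!best || t.insertTime > best->insertTime)
                best = &t;
    return best;
}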

16
AMS Conditions Database
  • Initially Objectivity, then flat files, now Oracle
  • Performance tests for:
  • TOF temperature (many short records)
  • Tracker pedestals (a small number of large records)

[Plot comparing the two BLOB layouts is not captured in the
transcript: (a) the BLOB array is stored inside the table, (b) outside.]
17
Oracle RDBMS to store Tags and TDVs
  • Currently 8 GB (115 different TDV types) are stored
    in the Conditions DB
  • 100 million event tags are stored in the Tag DB
  • Oracle RDBMS performance and functionality satisfy
    the AMS requirements. Using bitmap indices for tags
    improves query time dramatically.
  • The current implementation works with distributed
    CORBA technology, which allows reducing the number
    of database clients and the machine load.

18
AMS Production
Startup sequence (a control-flow sketch follows below):
  • I: submit 1st server
  • II: cold start
  • III: read active tables (available hosts,
    number of servers, producers, jobs/host)
  • IV: submit servers
  • V: get run info (runs to be processed, ESD
    output path)
  • VI: submit producers (LILO, LIRO, RIRO)
  • notify servers

[Diagram: servers and producers read the nominal and active tables
(hosts, interfaces, producers, servers) kept in the Oracle RDBMS
alongside the Conditions DB, Tag DB, and catalogues; producers read
raw data and write ESD.]
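
A control-flow sketch of the startup sequence above, in C++; every
name and value here is an invented stand-in (the real system reads the
Oracle tables and submits jobs through LSF), and only the order of
steps comes from the slide:

#include <iostream>
#include <string>
#include <vector>

struct Host    { std::string name; int jobsPerHost; };
struct RunInfo { int run; std::string esdOutputPath; };

// Stubs standing in for reads of the active and run tables.
std::vector<Host>    readActiveTable() { return {{"pcamss0", 2}}; }      // III
std::vector<RunInfo> getRunInfo()      { return {{1001, "/ams/esd"}}; }  // V

void submitServer(const Host& h) { std::cout << "server on " << h.name << "\n"; }
void submitProducer(const Host& h, const RunInfo& r) {
    std::cout << "producer for run " << r.run << " on " << h.name << "\n";
}
void notifyServers() { std::cout << "servers notified\n"; }

int main() {
    // I, II: the 1st server is submitted by hand and performs the cold start.
    auto hosts = readActiveTable();                        // III
    for (const auto& h : hosts) submitServer(h);           // IV
    for (const auto& r : getRunInfo())                     // V
        for (const auto& h : hosts) submitProducer(h, r);  // VI
    notifyServers();
}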
19
AMS Production Highlights
  • Stable running for more than 1 month
  • Average efficiency 95% (98% without Oracle)
  • Process communication and control via CORBA
  • LSF for process submission
  • Oracle server runs on an AS4100 Alpha; Oracle
    clients run on Linux
  • Oracle RDBMS holds:
  • Tag DB with 100M entries
  • Conditions DB with 100K entries
  • Bookkeeping
  • Production status
  • Runs history
  • File catalogues