Transcript and Presenter's Notes

Title: ALICE Detector Control Status Report


1
ALICE Detector Control Status Report
  • A. Augustinus, P. Chochula, G. De Cataldo, L.
    Jirdén, S. Popescu
  • the DCS team, ALICE collaboration, CERN, Geneva,
    Switzerland

2
Outline
  • The ALICE experiment at CERN
  • Organization of the controls activities in ALICE
  • Design goals and strategy
  • DCS architecture
  • Key concepts
  • DCS infrastructure
  • Summary - Conclusion

3
Introduction
  • ALICE is one of the four LHC experiments
  • Located at point 2 of the LHC at CERN
  • 18 different sub-detectors, 2 magnets
  • Dedicated to heavy-ion physics; also participates
    in pp running
  • 1000 members, 86 institutes, 29 countries

4
Introduction
  • Many sub-detector teams have limited expertise in
    controls, especially for large-scale experiments
  • The ALICE Controls Coordination (ACC) team puts
    strong emphasis on coordination and support
  • The Joint COntrols Project (JCOP) is a
    collaboration between CERN and all LHC experiments
    to exploit commonalities in the control systems

5
Design goals
  • The DCS shall ensure safe and efficient operation
  • Intuitive, user friendly, with automation
  • Many parallel and distributed developments
  • Modular, yet coherent and homogeneous
  • Changing environment (hardware and operation)
  • Expandable, flexible
  • Operational outside data taking, to safeguard
    equipment
  • Available, reliable
  • Large world-wide user community
  • Efficient and secure remote access
  • Data collected by the DCS shall be available for
    offline analysis of physics data

6
Strategy and methods
  • Common tools, components and solutions
  • Strong coordination within the experiment (ACC)
  • Close collaboration with the other experiments
    (JCOP)
  • In ALICE there are many similar sub-systems
  • Identify commonalities through User Requirements
  • Collected in lightweight URDs and Overview
    Drawings
  • Through meetings and workshops

7
Hardware Architecture
  • 3 layers: supervisory, control and field
  • Supervisory layer: operator nodes, server nodes
  • Control layer: worker nodes connecting to devices
  • Field layer: devices, sensors and actuators
  • Reduce sharing of equipment between sub-detectors
  • Standard hardware for computers
  • Limit the diversity of devices in the field layer
  • Dependent on the sub-detector hardware
  • Use common hardware for similar tasks
  • General Purpose Monitoring System
  • Interlocks and the DSS for protection of equipment
  • The DSS (Detector Safety System) is the safe and
    reliable part of the DCS

8
Software Architecture
  • A tree-like structure representing
    sub-detectors, sub-systems and devices
  • Leaves (Device Units) drive the devices
  • Nodes (Control Units) model and control the
    sub-tree below them
  • Commands flow down, states flow up the tree
  • Operation is done from the root node
  • Any sub-tree can be removed from the tree and
    operated independently and concurrently
    (partitioning)
  • The behaviour and functionality of a control unit
    is modelled as a Finite State Machine (FSM); a
    minimal sketch of these ideas follows below
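
A minimal Python sketch of the tree concept, assuming hypothetical class, state and action names (the real implementation is based on PVSSII and is not reproduced here): Control Units combine the states of their children and pass commands down, Device Units drive the devices, and a sub-tree can be excluded for independent operation.

  # Illustrative sketch of the DCS control tree; all names and states are assumptions.

  class DeviceUnit:
      """Leaf of the tree: drives one device."""
      def __init__(self, name):
          self.name = name
          self.state = "OFF"

      def command(self, action):
          # In reality this would act on hardware; here we just switch state.
          self.state = {"GO_ON": "ON", "GO_OFF": "OFF"}.get(action, self.state)

  class ControlUnit:
      """Node of the tree: models and controls the sub-tree below it."""
      def __init__(self, name, children):
          self.name = name
          self.children = children
          self.included = {c.name: True for c in children}   # partitioning flags

      def command(self, action):
          # Commands flow down, only to children still included in this partition.
          for child in self.children:
              if self.included[child.name]:
                  child.command(action)

      @property
      def state(self):
          # States flow up: a CU logically combines the states of its children.
          states = {c.state for c in self.children if self.included[c.name]}
          return states.pop() if len(states) == 1 else "MIXED"

      def exclude(self, name):
          # Remove a sub-tree from the partition so it can be operated independently.
          self.included[name] = False

  # Example: operate the whole tree from the root, then exclude one sub-tree.
  tpc = ControlUnit("TPC_HV", [DeviceUnit("sector_A"), DeviceUnit("sector_C")])
  root = ControlUnit("ALICE_DCS", [tpc])
  root.command("GO_ON")
  print(root.state)        # ON
  root.exclude("TPC_HV")   # the TPC sub-tree can now be driven by its own operator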

9
Software Architecture
(Diagram) Commands flow down the tree, states and alarms
flow up; each CU logically combines states and
distributes commands
10
DCS key concepts
  • The FSM concept is fundamental to the DCS
  • Intuitive and generic method to model the
    behaviour of a system or a device
  • An object has a well defined collection of states
  • It moves between states by executing actions
  • Triggered by an operator or by an external event
  • The DCS will interface to a variety of Front End
    Electronics
  • The Front End Device (FED) concept hides the
    implementation details behind a common
    client-server interface (based on DIM); a sketch
    of this idea follows below
  • Use common software tools
  • PVSSII, JCOP framework
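
A hedged sketch of what the FED abstraction provides: each sub-detector hides its front-end specifics behind one common interface, so the supervisory code stays the same. The class and method names below are assumptions for illustration only; the actual FED interface is a DIM-based client-server protocol, not the Python classes shown here.

  # Illustrative sketch of the Front End Device (FED) abstraction (names are assumptions).
  from abc import ABC, abstractmethod

  class FrontEndDevice(ABC):
      """Common interface every FED server is assumed to expose."""

      @abstractmethod
      def configure(self, settings):
          """Load a configuration into the front-end electronics."""

      @abstractmethod
      def status(self):
          """Return monitored values (state, temperatures, error flags, ...)."""

  class SpdFed(FrontEndDevice):
      """Hypothetical implementation for one sub-detector's electronics."""
      def configure(self, settings):
          # Detector-specific access (buses, registers, firmware) would live here.
          self._settings = dict(settings)

      def status(self):
          return {"configured": hasattr(self, "_settings"), "errors": 0}

  def dcs_client(fed):
      # The DCS side only ever sees the common interface,
      # whatever electronics sit behind it.
      fed.configure({"latency": 4})
      print(fed.status())

  dcs_client(SpdFed())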

11
Common solutions
  • Standardization does not stop with the selection
    of common tools and standard hardware
  • Define a standard behaviour for the same class of
    devices (e.g. HV power supplies)
  • Provide the sub-detectors with a standard state
    diagram (sketched below)
  • Define standard states/actions/operational
    sequences (automation) that can be used when
    defining the behaviour of a sub-detector
  • Guidelines for development, naming, numbering,
    look and feel of user interfaces, etc.
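
As an illustration of what such a standard state diagram could look like, the sketch below encodes a simplified HV power supply as states and allowed actions. The state and action names are assumptions for this example, not the official ALICE/JCOP framework definitions.

  # Simplified, hypothetical standard state diagram for an HV power supply.
  HV_STATE_DIAGRAM = {
      # state          action        -> next state
      "OFF":          {"SWITCH_ON":    "RAMPING_UP"},
      "RAMPING_UP":   {"REACHED":      "ON",
                       "SWITCH_OFF":   "RAMPING_DOWN"},
      "ON":           {"SWITCH_OFF":   "RAMPING_DOWN",
                       "TRIP":         "ERROR"},
      "RAMPING_DOWN": {"REACHED":      "OFF"},
      "ERROR":        {"RESET":        "OFF"},
  }

  def next_state(state, action):
      """Apply an action if the diagram allows it; otherwise stay in place."""
      return HV_STATE_DIAGRAM.get(state, {}).get(action, state)

  # Every HV channel driven with the same diagram behaves the same way
  # from the operator's point of view, which is the point of a common solution.
  s = "OFF"
  for a in ("SWITCH_ON", "REACHED", "TRIP", "RESET"):
      s = next_state(s, a)
  print(s)   # OFF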

12
DCS infrastructure
  • DCS needs adequate infrastructure (computers,
    network, ...)
  • Installation and maintenance of the network will
    be done by the CERN networking group (IT/CS)
  • All computers installed for the DCS will be
    procured, installed and maintained by a central
    team
  • Highly standardized hardware
  • Operation of network and computers will follow
    rules and guidelines and use tools from the
    Computing and Network Infrastructure for
    Controls (CNIC) working group

13
Network
  • The controls network will be a separate, well
    protected network
  • Without direct access from outside the
    experimental area
  • With remote access only through application
    gateways
  • With all equipment on secure power
  • A first estimate shows the need for around 350
    network connections, two thirds of them in the
    experimental cavern
  • Not including 50 switches connecting 800
    embedded processors on the detector
  • Current installations use the CERN campus network
  • The controls network will be operational from
    the 2nd quarter of 2006

14
Remote access
  • With the large world-wide user community, remote
    access is an important aspect
  • Remote users will access PVSSII projects through
    a remote user interface via a Terminal Server
  • By default only observer rights; higher
    privileges can be granted to experts for
    specific, well defined tasks (sketched below)
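
A purely illustrative sketch of the default-observer policy described above; the real mechanism is the Terminal Server together with PVSSII access control, and the user, action and grant names here are assumptions.

  # Hypothetical default-observer access check (not the actual access control code).
  OBSERVER_ACTIONS = {"view_panel", "read_values"}

  # Task-specific grants to named experts (illustrative).
  GRANTED = {"hv_expert": {"ramp_hv"}}

  def allowed(user, action):
      """Everyone may observe; anything more needs an explicit grant."""
      return action in OBSERVER_ACTIONS or action in GRANTED.get(user, set())

  print(allowed("visitor", "read_values"))   # True
  print(allowed("visitor", "ramp_hv"))       # False
  print(allowed("hv_expert", "ramp_hv"))     # True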

15
Remote access
  • This strategy has been tested with 60 remote
    users simultaneously running a user interface
  • No degradation of performance (neither of the
    PVSSII project nor of the Terminal Server)
  • Tested successfully from several places around
    the world

16
Computers
  • 2U rack mounted PCs, in specially equipped racks
  • Cooling doors, power control, on secure power
  • Baseline operating system is Windows
  • Linux is used in specific cases
  • The DCS will be run as a large distributed PVSSII
    system
  • Based on several performance tests on large
    distributed systems
  • More detailed performance tests on several
    components are being performed

17
Computers
  • A first distribution of tasks across computing
    nodes has led to the need for 80-90 nodes
  • Including servers and system management nodes
  • Combining low resource demanding tasks
  • Maintaining separation between sub-detectors
  • A core DCS system has been installed this summer
  • 5 machines, to be used by the sub-detectors for
    equipment tests during the first installations
  • More worker nodes and devices to be installed
    soon
  • 50 installed by the 1st quarter of 2006, the rest
    by the 3rd quarter of 2006

18
Further activities at site
  • The DSS is being commissioned
  • Experimental area surveillance
  • Interfaces to the first gas systems and to the
    site infrastructure (CERN safety system, power
    control, environment monitoring, ...) are
    installed and made available to users
  • These will be extended gradually as the
    installation of the services (cooling,
    electricity, etc.) progresses
  • Coordinated operation of the online systems (DAQ,
    Trigger, DCS) will start early 2006
  • Cosmic runs with the TPC and other detectors

19
Summary
  • Many sub-detectors have implemented parts of
    their control system and used them in lab and
    beam tests
  • They could profit from the coordination and
    collaboration
  • Their very valuable feedback allowed us to
    optimise and improve the DCS design
  • The chosen architecture proved to be well adapted
    to the sub-detector needs
  • The process will continue with the first
    installations, and this, together with extensive
    performance tests, will help us to further
    optimise and refine the system

20
Conclusion
  • The results so far make us confident that the
    ALICE Detector Control System will be fully
    operational at the beginning of 2007, well in
    time to allow safe and efficient operation of the
    experiment to record the first collisions at the
    LHC