1
The Architecture of the ZEUS Micro Vertex Detector DAQ and Second Level Global Track Trigger
Alessandro Polini, DESY/Bonn
2
Outline
  • The ZEUS Silicon Micro Vertex Detector
  • ZEUS Experiment Environment and Requirements
  • DAQ and Control Description
  • The Global Track Trigger
  • Performance and first experience with real data
  • Summary and Outlook

3
Detector Layout
[Figure: MVD layout. Forward section 410 mm, barrel section 622 mm; the 27.5 GeV e and 920 GeV p beam directions are indicated.]
The forward section consists of 4 wheels with 28 wedged silicon sensors per layer, providing r-φ information.
The barrel section provides 3 layers of support frames (ladders), each holding 5 full modules; with 600 square sensors in total it provides r-φ and r-z space points.
4
The ZEUS Detector
[Block diagram: the ZEUS three-level trigger system (rate 10^7 Hz → 500 Hz → 40 Hz → 5 Hz). The 27.5 GeV e and 920 GeV p beams collide at a bunch-crossing interval of 96 ns (10^7 Hz). The CTD, CAL and other component front ends hold data in 5 μs pipelines while the component FLTs feed the Global First Level Trigger (0.7 μs); accepted events (≈500 Hz) go to 10 ms event buffers and the component SLTs, which feed the Global Second Level Trigger (accept ≈40 Hz). The Event Builder passes complete events to the Third Level Trigger CPU farm, which writes ≈5 Hz to offline tape.]
5
MVD DAQ and Trigger Design
  • The ZEUS experiment was designed at the end of the 1980s
  • First high-rate (96 ns) pipelined system
  • with a flexible 3-level trigger
  • Main building blocks were Transputers (20 Mbit/s)
  • 10 years later, the MVD
  • 208,000 analog channels
  • MVD available for triggering from the 2nd level trigger onwards
  • DAQ design choices
  • Use off-the-shelf products whenever possible
  • VME embedded systems for readout
  • → Priority scheduling absolutely needed
  • → LynxOS (real-time OS); a minimal scheduling sketch follows this list
  • Commercial Fast Ethernet/Gigabit network
  • Linux PCs for data processing
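The priority-scheduling requirement is what the LynxOS choice addresses. Below is a minimal sketch, assuming only the standard POSIX scheduling interface (available on LynxOS and Linux); the SCHED_FIFO policy and the priority value are illustrative, not the actual DAQ configuration.

/* Minimal sketch: give a readout process a fixed real-time priority
 * through the POSIX scheduling interface.  The priority value is
 * purely illustrative. */
#include <sched.h>
#include <stdio.h>

int main(void)
{
    struct sched_param sp;
    sp.sched_priority = 50;                 /* illustrative priority */

    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
        perror("sched_setscheduler");       /* usually needs root privileges */
        return 1;
    }
    /* ... the readout loop would run here, no longer preempted by
     *     ordinary time-sharing processes ... */
    return 0;
}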

6
Detector Front-end
  • Front-end chip HELIX 3.0
  • 128-channel analog pipelined programmable readout system specifically designed for the HERA environment
  • Highly programmable for wide and flexible usage
  • ENC ≈ 400 e + 40 e/pF × C (no radiation damage included, S/N ≈ 13)
  • Data read out and multiplexed (96 ns) over the analog output
  • Internal test pulse and failsafe token-ring (8 chips) capability
  • Uni. Heidelberg, NIM A447, 89 (2000)

[Photo: front-end hybrid and silicon sensors.]
7
The ADC Modules and the Readout
  • Read-out
  • Custom-made ADC modules (ADCM)
  • 9U VME boards with private bus extensions
  • 8 detector modules per board (8000 channels)
  • 10-bit resolution
  • Common-mode, pedestal and noise subtraction
  • Strip clustering (a sketch of this processing chain follows the list)
  • 2 separate data buffers
  • cluster data (for trigger purposes)
  • raw/strip data for accepted events
  • Design event data sizes
  • Max. raw data size 1.5 MB/event (208,000 channels)
  • Strip data with a 3-sigma noise threshold: 15 KB
  • Cluster data: 8 KB
  • KEK Tokyo, NIM A436, 281 (1999)
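A minimal sketch of the processing chain described above (pedestal subtraction, a simple common-mode correction, and a 3-sigma strip selection with clustering of adjacent strips); the function name, data layout and strip count are assumptions for illustration and do not reflect the actual ADCM firmware.

/* Illustrative sketch of the ADCM processing chain for one detector
 * module: pedestal subtraction, common-mode correction and 3-sigma
 * strip selection feeding a simple adjacent-strip clustering.
 * Names, strip count and data layout are assumptions. */
#define NSTRIPS 1024   /* strips per detector module (assumed) */

/* returns the number of clusters found; cluster_start/size are filled */
int process_module(const float raw[NSTRIPS], const float pedestal[NSTRIPS],
                   const float noise[NSTRIPS],
                   int cluster_start[], int cluster_size[])
{
    float corrected[NSTRIPS];
    float common_mode = 0.0f;
    int i, nclus = 0, in_cluster = 0;

    /* pedestal subtraction and a simple common-mode estimate (mean) */
    for (i = 0; i < NSTRIPS; i++)
        common_mode += raw[i] - pedestal[i];
    common_mode /= NSTRIPS;
    for (i = 0; i < NSTRIPS; i++)
        corrected[i] = raw[i] - pedestal[i] - common_mode;

    /* 3-sigma threshold and clustering of adjacent strips */
    for (i = 0; i < NSTRIPS; i++) {
        if (corrected[i] > 3.0f * noise[i]) {
            if (!in_cluster) {
                cluster_start[nclus] = i;
                cluster_size[nclus]  = 0;
                in_cluster = 1;
            }
            cluster_size[nclus]++;
        } else if (in_cluster) {
            nclus++;
            in_cluster = 0;
        }
    }
    if (in_cluster)
        nclus++;
    return nclus;
}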

8
VME Data Gathering and Control
  • Data gathering and readout control using the LynxOS 3.01 real-time OS on
  • network-booted Motorola MVME2400/MVME2700 PPC VME computers
  • VME functionality through the purpose-developed VME driver/library uvmelib
  • multi-user VME access, contiguous memory mapping and DMA transfers,
  • VME interrupt handling and process synchronization
  • System is interrupt driven (data transferred via DMA on ADCM data-ready)
  • Custom-designed all-purpose VME latency-clock interrupt board
  • Full DAQ-wide latency measurement system
  • Data transfer over the Fast Ethernet/Gigabit network using TCP/IP connections
  • Data transferred as a binary stream
  • with an XDR header (a sketch follows this list)
  • data-file playback capability
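A minimal sketch of sending an event fragment as a binary stream preceded by an XDR-encoded header, using the standard SunRPC XDR routines over an already-connected TCP socket; the header fields (event number, payload length) and the function name are assumptions, not the actual MVDDAQ protocol.

/* Sketch: send an event fragment over an established TCP socket as an
 * XDR-encoded header followed by the raw binary payload.  The header
 * layout is an assumption, not the real MVDDAQ format. */
#include <rpc/xdr.h>
#include <unistd.h>

int send_fragment(int sock, unsigned int evtnum,
                  const char *payload, unsigned int nbytes)
{
    char hdr[64];
    XDR xdrs;
    unsigned int hdrlen;

    /* encode the header portably with XDR */
    xdrmem_create(&xdrs, hdr, sizeof(hdr), XDR_ENCODE);
    if (!xdr_u_int(&xdrs, &evtnum) || !xdr_u_int(&xdrs, &nbytes))
        return -1;
    hdrlen = xdr_getpos(&xdrs);
    xdr_destroy(&xdrs);

    /* stream header and binary payload over the TCP connection */
    if (write(sock, hdr, hdrlen) != (ssize_t)hdrlen)
        return -1;
    if (write(sock, payload, nbytes) != (ssize_t)nbytes)
        return -1;
    return 0;
}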

9
UVMElib Software Package
  • Exploits the Tundra Universe II VME bridge features
  • 8 independent windows for R/W access to the VME
  • Flexible interrupt and DMA capabilities
  • Library layered on an enhanced uvmedriver
  • Mapping of VME addresses AND contiguous PCI RAM segments
  • Each window (8 VME, N PCI) is addressed
  • by a kernel uvme_smem_t structure (see below)
  • Performance
  • 18 MB/s DMA transfer on the standard VMEbus
  • Less than 50 μs response to a VME IRQ
  • Additional features
  • Flexible interrupt usage via global system semaphores
  • Additional semaphores for process synchronization
  • Queuing of PCI↔VME DMA transfers
  • Easy system monitoring via semaphore status and counters

typedef struct {
    int      id;
    unsigned mode;      /* addressing mode: A32D32, A24D32, A16D16; SYS or USR */
    int      size;
    unsigned physical;  /* for DMA transfers */
    unsigned virtual;   /* for R/W operations */
    char     name[20];  /* symbolic mapping */
} uvme_smem_t;
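The slides do not show the uvmelib calls themselves, so the following usage sketch is hypothetical: the function names (uvme_attach, uvme_wait_sem, uvme_dma_read) are invented purely to illustrate how a readout process might use the uvme_smem_t descriptor above.

/* Hypothetical usage sketch only: uvme_attach, uvme_wait_sem and
 * uvme_dma_read are invented names, not the real uvmelib API.
 * uvme_smem_t is the structure shown above. */
extern uvme_smem_t *uvme_attach(const char *name);          /* symbolic window lookup */
extern int          uvme_wait_sem(int sem_id);              /* block on an IRQ semaphore */
extern int          uvme_dma_read(unsigned vme_phys, void *dst, int nbytes);

int readout_one_event(void *buffer, int nbytes)
{
    uvme_smem_t *adcm = uvme_attach("ADCM0");   /* map the ADCM window by name */
    if (adcm == NULL)
        return -1;

    uvme_wait_sem(adcm->id);                     /* wait for the data-ready interrupt */
    return uvme_dma_read(adcm->physical, buffer, nbytes);   /* DMA the event data */
}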
http://mvddaq.desy.de/
10
Interfaces to the existing ZEUS environment
  • The ZEUS experiment is based on Transputers
  • Interfaces for data gathering from other detectors and the connection to the existing Global Second Level Trigger use ZEUS 2TP modules
  • All newer components are interfaced using Fast/Gigabit Ethernet
  • The VME-TP connections are planned to be upgraded to a Linux PC PCI-TP interface
  • NIKHEF, NIM A332, 263 (1993)

11
The Global Tracking Trigger
  • Trigger requirements
  • Higher-quality track reconstruction and rate reduction
  • Z vertex resolution: 9 cm (CTD only) → 400 μm (MVD+CTD+GTT)
  • Decision required within the existing SLT latency (<15 ms)
  • Development path
  • MVD participation in the GFLT not feasible, readout latency too large
  • Participation at the GSLT possible
  • Tests of pushing ADC data over Fast Ethernet gave acceptable rate/latency performance
  • but track and vertex information was poor due to the low number of planes
  • Expand the scope to include data from other tracking detectors
  • Initially the Central Tracking Detector (CTD) - overlaps with the barrel detectors
  • Later the Straw Tube Tracker (STT) - overlaps with the wheel detectors
  • Implement the GTT as a PC farm with TCP data and control paths

[Figure: dijet MC event display.]
12
The MVD Data Acquisition System and GTT
[Block diagram of the MVD data acquisition system and GTT. Analog data from the MVD HELIX front-end patch-boxes travel over analog links to three VME readout crates (CC master crate 0, MVD top; CC slave crates 1 and 2, MVD bottom and MVD forward), each equipped with ADCM modules, clock-control and NIM latency modules and LynxOS CPUs; a VME HELIX driver crate and a slow-control/latency-clock crate complete the VME readout. Global First Level Trigger, busy and error signals are distributed through the clock-control chain. Central Tracking Detector and Straw Tube Tracker data enter over VME-TP connections (CTD 2TP modules, STT 2TP module). A Fast Ethernet/Gigabit network connects the readout, the GTT control fan-out and the Global Tracking Trigger processors (GFLT rate 800 Hz); GSLT 2TP modules provide the TP connection to the Global Second Level Trigger, which returns the Global Second Level Trigger decision. A VME CPU boot server, the main MVDDAQ server (local control, Event Builder interface) and the network connection to the ZEUS Event Builder (100 Hz) sit on the same network, all within the ZEUS run control and online monitoring environment.]
13
GTT hardware
  • Implementation
  • MVD readout
  • 3 Motorola MVME2400 450MHz
  • CTD/STT interfaces
  • NIKHEF-2TP VME-Transputer
  • Motorola MVME2400 450MHz
  • PC farm
  • 12 DELL PowerEdge 4400 Dual 1GHz
  • GTT/GSLT result interface
  • Motorola MVME2700 367MHz
  • GSLT/EVB trigger result interface
  • DELL PowerEdge 4400 Dual 1GHz
  • DELL PowerEdge 6450 Quad 700 MHz
  • Network switches
  • 3 Intel Express 480T Fast/Gigabit 16-port switches
  • Thanks to Intel Corp., which provided the high-performance switch and PowerEdge hardware via a Yale grant.

[Photos: CTD/STT interface, MVD readout, PC farm and switches, GTT/GSLT interface, EVB/GSLT result interface.]
14
GTT Algorithm Description
  • Modular algorithm design
  • Two concurrent algorithms (barrel/forward) foreseen
  • Process one event per host
  • Multithreaded event processing
  • data unpacking
  • concurrent algorithms
  • time-out
  • Test and simulation results
  • 10 computing hosts required
  • Control: credit distribution, not round-robin

At present the barrel algorithm is implemented; the forward algorithm is in the development phase. A minimal threading sketch follows.
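The sketch below illustrates, under assumptions, the multithreaded per-event processing described above: two algorithm threads run concurrently on one event and the dispatcher waits for both with a time-out. It uses plain POSIX threads and is not the actual GTT code; per-event state is kept global for brevity.

/* Sketch (assumption, not the actual GTT code): run two algorithm
 * threads concurrently on one unpacked event and wait for both,
 * giving up after a time-out so a default trigger result can be sent. */
#include <errno.h>
#include <pthread.h>
#include <time.h>

struct event { int dummy; };   /* unpacked event data would live here */

static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done_cond = PTHREAD_COND_INITIALIZER;
static int done_count = 0;     /* per-event state, global for brevity */

static void algorithm_done(void)
{
    pthread_mutex_lock(&lock);
    done_count++;
    pthread_cond_signal(&done_cond);
    pthread_mutex_unlock(&lock);
}

static void *barrel_algorithm(void *ev)  { /* ... CTD+MVD tracking ... */ algorithm_done(); return NULL; }
static void *forward_algorithm(void *ev) { /* ... wheel tracking ...   */ algorithm_done(); return NULL; }

/* returns 1 if both algorithms finished in time, 0 on time-out */
int process_event(struct event *ev, int timeout_ms)
{
    pthread_t t1, t2;
    struct timespec deadline;
    int ok = 1;

    clock_gettime(CLOCK_REALTIME, &deadline);
    deadline.tv_sec  += timeout_ms / 1000;
    deadline.tv_nsec += (long)(timeout_ms % 1000) * 1000000L;
    if (deadline.tv_nsec >= 1000000000L) {
        deadline.tv_sec++;
        deadline.tv_nsec -= 1000000000L;
    }

    done_count = 0;
    pthread_create(&t1, NULL, barrel_algorithm, ev);
    pthread_create(&t2, NULL, forward_algorithm, ev);

    pthread_mutex_lock(&lock);
    while (done_count < 2 && ok)
        if (pthread_cond_timedwait(&done_cond, &lock, &deadline) == ETIMEDOUT)
            ok = 0;                       /* time-out: send a default result */
    pthread_mutex_unlock(&lock);

    if (ok) {                             /* both finished: reclaim the threads */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
    } else {                              /* let them finish in the background */
        pthread_detach(t1);
        pthread_detach(t2);
    }
    return ok;
}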
15
Barrel algorithm description
  • Find tracks in the CTD and extrapolate them into the MVD to resolve pattern-recognition ambiguities
  • Find segments in the axial and stereo layers of the CTD
  • Match axial segments to get r-φ tracks
  • Match MVD r-φ hits
  • Refit the r-φ track including the MVD r-φ hits
  • After finding 2-D tracks in r-φ, look for 3-D tracks in z vs. axial track length, s
  • Match stereo segments to the track in r-φ to get positions for the z-s fit
  • Extrapolate to the inner CTD layers
  • If available, use the coarse MVD wafer position to guide the extrapolation
  • Match MVD z hits
  • Refit the z-s track including the z hits (a fit sketch follows this list)
  • Constrained or unconstrained fit
  • Pattern recognition is better with constrained tracks
  • Secondary vertices require unconstrained tracks
  • Unconstrained track refit after MVD hits have been matched
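The z-s fit step can be illustrated with a straight-line least-squares fit z = z0 + b·s to the matched hit positions, where s is the track length in the r-φ plane. This is a minimal sketch of the idea only, not the GTT fit (which can also be run constrained).

/* Minimal sketch of a straight-line z-s fit: fit z = z0 + b*s to the
 * matched hit positions by least squares.  Illustration only. */
#include <math.h>

/* returns 0 on success; z0 is z at s = 0, b = dz/ds (related to the dip angle) */
int fit_zs(const double s[], const double z[], int n, double *z0, double *b)
{
    double sum_s = 0, sum_z = 0, sum_ss = 0, sum_sz = 0, det;
    int i;

    if (n < 2)
        return -1;
    for (i = 0; i < n; i++) {
        sum_s  += s[i];
        sum_z  += z[i];
        sum_ss += s[i] * s[i];
        sum_sz += s[i] * z[i];
    }
    det = n * sum_ss - sum_s * sum_s;
    if (fabs(det) < 1e-12)
        return -1;                     /* degenerate: all hits at the same s */
    *b  = (n * sum_sz - sum_s * sum_z) / det;
    *z0 = (sum_z - *b * sum_s) / n;
    return 0;
}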

16
First tracking results
[Figures: GTT event display (Run 42314, Event 938); physics-data vertex distribution for Run 44569 (mm), with the nominal vertex and Collimator C5 proton beam-gas interactions indicated; dijet Monte Carlo vertex resolution, ≈400 µm including the MVD.]
17
First tracking results (continued)
[Figures: GTT event display of another background event; the same Run 44569 vertex distribution and dijet Monte Carlo vertex-resolution plots as on the previous slide (≈400 µm including the MVD).]
18
First GTT latency results
[Figure panels (latencies in ms): CTD VME readout latency with respect to the MVD; MVD VME SLT readout latency; GTT latency after complete trigger processing; MVD-GTT latency as measured by the GSLT.]
  • 2002 HERA running after the luminosity upgrade was compromised by high background rates
  • Mean data sizes from the CTD and MVD were much larger than the design values
  • September 2002 runs were used to tune the data-size cuts
  • This allowed the GTT to run with acceptable mean latency and tails at the GSLT
  • The design rate of 500 Hz appears possible

[Figure: mean GTT latency (ms) per run vs. GFLT rate (Hz), for HERA data and low-occupancy Monte Carlo rate tests.]
19
MVD Slow Control
  • CANbus is the principal fieldbus used
  • 2 ESD CAN-PCI/331 dual CANbus adapters in 2 Linux PCs
  • Each SC sub-system uses a dedicated CANbus
  • Silicon detector / radiation monitor bias voltage
  • 30 ISEG EHQ F0025p 16-channel supply boards in 4 ISEG ECH 238L UPS 6U EURO crates
  • Front-end hybrid low voltage
  • Custom implementation based on the ZEUS LPS detector supplies (INFN TO)
  • Cooling and temperature monitoring
  • Custom NIKHEF SPICan box
  • Interlock system
  • Frenzel-Berg EASY-30 CAN/SPS
  • MVD slow-control operation
  • Channel monitoring is performed typically every 30 s
  • CAN emergency messages are implemented
  • A wrong SC state disables the experiment's trigger
  • CERN ROOT-based tools are used where operator control and monitoring are required (an illustrative monitoring-loop sketch follows this list)
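A minimal sketch of the kind of periodic channel-monitoring loop described above, written here against the Linux SocketCAN interface purely for illustration; the actual system uses the ESD CAN-PCI/331 vendor driver, and the CAN IDs, command byte and limit checking below are invented.

/* Illustration only: a periodic (every ~30 s) slow-control monitoring
 * loop over CAN, using Linux SocketCAN.  The real MVD system uses the
 * ESD CAN-PCI/331 driver; the CAN IDs and payload are invented. */
#include <linux/can.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int monitor_loop(const char *ifname)
{
    struct sockaddr_can addr;
    struct ifreq ifr;
    struct can_frame frame;
    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);

    if (s < 0)
        return -1;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ioctl(s, SIOCGIFINDEX, &ifr);
    memset(&addr, 0, sizeof(addr));
    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        memset(&frame, 0, sizeof(frame));
        frame.can_id  = 0x123;           /* invented node/request ID */
        frame.can_dlc = 1;
        frame.data[0] = 0x01;            /* invented "read channel" command */
        write(s, &frame, sizeof(frame)); /* request a reading */

        read(s, &frame, sizeof(frame));  /* reply; comparing it against limits
                                            (not shown) would raise an alarm
                                            and veto the trigger on a bad state */
        sleep(30);                       /* channel monitoring every ~30 s */
    }
    /* not reached */
}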

http://www.esd.electronic  http://www.iseg-hv.com
http://www.nikhef.nl/user/n48/zeus_doc.html
http://www.frenzel-berg.de/produkte/easy.html
20
Summary and Outlook
  • The MVD and GTT systems have been successfully integrated into the ZEUS experiment
  • 267 runs with 3.1 million events were recorded between 31/10/02 and 18/02/03 with the MVD on and DQM (700 nb-1)
  • The MVD DAQ and GTT architecture, built as a synthesis of custom solutions and commercial off-the-shelf equipment (real-time OS, Linux PCs, Gigabit network), works reliably
  • The MVD DAQ and GTT performance (latency, throughput and stability) is satisfactory
  • Next steps
  • Enable use of the barrel algorithm result at the GSLT
  • Finalize development and integration of the forward algorithm
  • So far the results are very encouraging. We look forward to routine high-luminosity data taking; the shutdown ends in June 2003...