1 TELL1: A common data acquisition board for LHCb
Guido Haefeli, University of Lausanne
2 Outline
- LHCb readout scheme
- LHCb data acquisition
- Optical links
- Event building network
- Common readout requirements
- Trigger rates
- Buffers
- Bandwidth
- Data flow on the board
- Synchronization
- Level 1 trigger pre-processing and zero-suppression
- Higher level trigger processing
- Gigabit Ethernet interface
- Board implementation
- FPGAs
- Level 1 buffer
- Higher level trigger multi event packet buffer
- Summary
3 LHCb trigger system
- L0: fully synchronous and pipelined, fixed latency
  - Pile-Up
  - Calorimeter
  - Muon
- L1: software trigger with maximal latency
  - VELO
  - TT
  - (Outer Tracker)
- HLT: software trigger
  - Access to all sub-detectors
4 LHCb data acquisition
[Diagram: the detector front-end electronics sit in the cavern, 60-100 m from the counting room.]
5 Optical link implementation
6 Event building network
[Diagram: front-end electronics (FE) and the TRM feed a multiplexing layer and the readout network. Level-1 traffic: 1.11 MHz, packed /32 into MEPs, multiplexed ×2 onto the SFCs (/94); HLT traffic: 40 kHz, packed /16 into MEPs, multiplexed ×8. An L1-Decision Sorter also connects to the network. 94 links carry 7.1 GB/s in total to 94 SFCs, which serve a CPU farm of 1800 CPUs. The rates are checked in the sketch below.]
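As a quick cross-check of these numbers (my own arithmetic; only the rates, packing factors and link count come from the slide), the per-path MEP rates and the per-link load can be computed directly:

```python
# Cross-check of the event-building network figures.
l1_rate = 1.11e6    # L0-accept rate feeding the L1 path [Hz]
hlt_rate = 40e3     # L1-accept rate feeding the HLT path [Hz]

l1_mep_rate = l1_rate / 32     # MEP packing factor 32 on the L1 path
hlt_mep_rate = hlt_rate / 16   # MEP packing factor 16 on the HLT path

total_gb_per_s = 7.1           # aggregate bandwidth into the farm [GB/s]
n_links = 94
per_link_mb = total_gb_per_s * 1000 / n_links

print(f"L1 MEP rate:   {l1_mep_rate / 1e3:.1f} kHz")   # ~34.7 kHz
print(f"HLT MEP rate:  {hlt_mep_rate / 1e3:.1f} kHz")  # 2.5 kHz
print(f"Per-link load: {per_link_mb:.0f} MB/s")        # ~76 MB/s
```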
7 Trigger rates and buffering
- Max. L0-accept rate: 1.11 MHz
- Max. L1-accept rate: 40 kHz
- The L0 buffer is implemented on the front-end, fixed to 160 clock cycles!
- The L1 buffer holds 58254 events, which equals 52.4 ms (see the check below)!
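The buffer depth and the latency figure are consistent with each other; a minimal sanity check (my own, assuming the buffer fills at the maximal L0-accept rate):

```python
# L1 buffer depth vs. maximal L1 trigger latency (assumes the buffer
# is written at the maximal L0-accept rate of 1.11 MHz).
l0_accept_rate = 1.11e6   # [Hz]
buffer_depth = 58254      # events

max_latency_ms = buffer_depth / l0_accept_rate * 1e3
print(f"Maximal L1 latency: {max_latency_ms:.1f} ms")  # ~52.5 ms
```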
8 Bandwidth requirements
- Input data bandwidth for a 24-optical-link motherboard (arithmetic sketched below):
  - Optical receiver: 24 fibres × 1.28 Gbit/s → 30.7 Gbit/s
  - Analog receiver: 64 channels × 10 bit @ 40 MHz → 25.6 Gbit/s
- L1 buffer:
  - Write data bandwidth: 30.7 Gbit/s
  - Read data bandwidth: 4 Gbit/s
- DAQ links:
  - 4 Gigabit Ethernet links
- ECS:
  - 10/100 Ethernet for remote control
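Both input figures are straightforward products; a short sketch of the arithmetic, using only the link and channel counts quoted above:

```python
# Input bandwidth of a fully equipped 24-link motherboard.
optical_gbit = 24 * 1.28             # fibres x line rate [Gbit/s]
analog_gbit = 64 * 10 * 40e6 / 1e9   # channels x bits x sampling rate

print(f"Optical input: {optical_gbit:.1f} Gbit/s")  # 30.7 Gbit/s
print(f"Analog input:  {analog_gbit:.1f} Gbit/s")   # 25.6 Gbit/s
```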
9 A bit of history
- The L1 trigger scheme changed:
  - During the last year the maximal L1T latency has increased from 1.8 ms to 52 ms (×32). This forces the change from SRAM FIFO to SDRAM.
  - Detectors were added to the L1T (TT, OT), and potentially others will follow.
- Decreasing cost of the optical links → data processing is done in the counting room.
- More and more functionality moves onto the readout board, because there is no Readout Unit and no Network Processor!
- The event fragments are packed into so-called Multi Event Packets (MEPs) to optimize Ethernet packet size and packet rate (see the sketch below).
- The acquisition board adds the IP destination, performs Ethernet framing, and buffers transmit data.
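To illustrate why MEP packing pays off, the sketch below compares packet rate and payload efficiency with and without packing. The fragment and header sizes are illustrative assumptions; only the 1.11 MHz rate and the packing factor of 32 come from the talk:

```python
# Effect of packing fragments into Multi Event Packets (MEPs).
rate = 1.11e6   # fragment rate on the L1 path [Hz] (from the talk)
frag = 100      # fragment payload [bytes] (assumption)
hdr = 58        # per-packet Ethernet/IP overhead [bytes] (assumption)
pack = 32       # fragments per MEP (from the talk)

packet_rate_packed = rate / pack                 # ~34.7 k packets/s
eff_unpacked = frag / (frag + hdr)               # ~63% payload
eff_packed = pack * frag / (pack * frag + hdr)   # ~98% payload

print(f"Packet rate: {rate / 1e3:.0f} kHz -> "
      f"{packet_rate_packed / 1e3:.1f} kHz")
print(f"Payload efficiency: {eff_unpacked:.0%} -> {eff_packed:.0%}")
```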
10 How can we make a common useful readout?
[Block diagram: FE links enter via mezzanine receiver cards (two A-RxCards or one O-RxCard) into four PP-FPGAs, each with its own L1B; a SyncLink-FPGA merges the streams and drives the RO-Tx towards the HLT and L1T links, with FEM, TTCrx and ECS interfaces; further I/O: TTC, ECS, L0 and L1 throttle.]
- Adaptation to the two link systems is possible with receiver mezzanine cards.
- FPGAs allow the adaptation to different data processing.
- Sufficient bandwidth for the entire acquisition path.
- A mezzanine card for detector-specific needs.
11 Advantages of being common!
[Same block diagram as on the previous slide.]
- Finding solutions and consensus for new system requirements is much easier.
- Cost reduction due to larger serial production quantities (300 boards for LHCb).
- Reduced maintenance cost with a single system.
- Common software interfaces.
12 L1T dataflow
[Diagram: data from the O-RxCard mezzanine (2 × 12 fibres) enter the four PP-FPGAs and converge on the SyncLink-FPGA.]
13 HLT dataflow
[Diagram, per PP-FPGA (Altera Stratix 1S20, 18K LE): data from the O-RxCard mezzanine (12 fibres per card, link DDR @ 80 MHz) pass through sync FIFOs and an ID check (@ 80 MHz) into the L1B, a 64 KEvent DDR SDRAM (@ 120 MHz, 0.9 us/event); HLT zero-suppression and event encapsulation (20 us/event) feed 1-4 Kbyte FIFOs and the HLT MEP buffer, 1 Mbyte of external QDR SRAM @ 100 MHz. In the SyncLink-FPGA (Stratix 1S25, 25K LE), the HLT framer, with HLT IP RAM and HLT DEST FIFO, builds MEPs (320 us/MEP) and drives the RO-Interface (POS Level 3), a data path shared by the 2-channel RO-TxCard @ 100 MHz; TTC broadcast and ECS inputs are also shown. The timing figures are cross-checked below.]
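The per-stage times quoted in the diagram are mutually consistent; a small check (my own reading: 20 us of zero-suppression per event and 16 events per MEP, the HLT packing factor from the event-building slide):

```python
# Consistency of the HLT dataflow timing figures.
t_zero_supp = 20e-6     # zero-suppression time per event [s]
events_per_mep = 16     # HLT MEP packing factor (slide 6)

print(f"Time per MEP: {events_per_mep * t_zero_supp * 1e6:.0f} us")  # 320 us

# Throughput budget: at the maximal 40 kHz L1-accept rate one event
# arrives every 25 us, so 20 us/event of processing keeps up.
print(f"Per-event budget: {1 / 40e3 * 1e6:.0f} us")  # 25 us
```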
14 Prototyping
- Motherboard specification, schematics and layout are finished
- Daughtercard designs:
  - Pattern generator card (available)
  - 12-way optical receiver card (design finished)
  - RO-TxCard, implemented as a dual Gigabit Ethernet card (see the talk from Hans in this session)
  - CCPC and GlueCard for ECS
- Test system:
  - PCI-based data generator card
  - Gigabit Ethernet connection to a PC
15 Technology used
- FPGA:
  - Stratix 1S20, 780-pin FBGA
  - Stratix 1S25, 1020-pin FBGA
- Main features used:
  - Memory blocks of 512 Kbit, 4 Kbit and 512 bit
  - DDR SDRAM interface (dedicated circuitry)
  - DDR I/Os
  - Terminator technology for serial and parallel on-chip termination
  - DSP blocks for L1T pre-processing
- DDR SDRAM running at a 240 MHz data transfer rate (120 MHz clock) for the L1B (see the sketch below)
- QDR SRAM running at 100 MHz for the HLT MEP buffer
- 12-layer PCB (50 Ohm)
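For reference, the quoted transfer rates follow from the signalling schemes themselves; a sketch under the standard definitions of DDR and QDR (only the clock rates come from the talk):

```python
# Effective transfer rates implied by the memory clocks.
ddr_clock = 120e6   # L1B DDR SDRAM clock [Hz]
qdr_clock = 100e6   # HLT MEP buffer QDR SRAM clock [Hz]

# DDR: one transfer on each clock edge.
print(f"DDR SDRAM: {2 * ddr_clock / 1e6:.0f} MT/s")  # 240 MT/s, as quoted

# QDR: independent read and write ports, each double data rate, so
# the MEP buffer can be written and read concurrently.
print(f"QDR SRAM:  {2 * qdr_clock / 1e6:.0f} MT/s per port")
```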
17 DDR bank data signal layout
18 [Board photograph; 14 cm scale annotation.]
19 Summary
- After evaluating different concepts for data processing and acquisition, a common readout board for LHCb has been specified and designed.
- It serves 24 optical links with a data transfer rate of 1.28 Gbit/s each.
- The board implements data identification, L1 buffering and zero-suppression.
- It is made for use with standard Gigabit Ethernet equipment.