The LHCb Event Building Strategy
Presentation at IEEE-NPSS Real Time 2001, June 4-8, 2001, Valencia, Spain. N. Neufeld.

Transcript and Presenter's Notes

1
The LHCb Event Building Strategy
  • Niko Neufeld
  • CERN, EP Division
  • Geneva, Switzerland
  • Presentation at IEEE-NPSS Real Time 2001
  • June 4-8, 2001, Valencia, Spain

2
Overview
  • Architecture of the LHCb DAQ
  • Trigger rates, event size
  • Event building requirements
  • Gigabit Ethernet for the Readout Network
  • Network topology
  • Commercial switches
  • Small modules

3
Main Architecture
  • Data are pushed from the front-end links through to the CPU farm
  • No upward communication
  • A throttle signal disables the trigger in case of persistent contention
  • Backpressure (flow control) deals with local contention

[Diagram: LHCb trigger/DAQ architecture with data rates]
  • Detector (VELO, TRACK, ECAL, HCAL, MUON, RICH): 40 MHz crossing rate, 40 TB/s
  • Level-0 trigger (fixed latency 4.0 µs, Timing and Fast Control): 1 MHz accept rate; Front-End Electronics, 1 TB/s
  • Level-1 trigger (variable latency < 1 ms): 40 kHz accept rate; Front-End Multiplexers (FEM); Front-End Links, 6 GB/s
  • Readout Units (RU) with throttle signal; Readout Network (RN), 6 GB/s
  • Sub-Farm Controllers (SFC) feeding the CPU farm (Event Filter); Trigger Levels 2 (10 ms) and 3 (200 ms), variable latency
  • Storage, 50 MB/s; LAN for control and monitoring
4
Event Building
  • Event building consists of two main tasks
  • The fragments of an event, originating from many sources, must be transported to one destination (through a network/bus)
  • The fragments must be arranged in the correct order as one contiguous event
  • This is done using general-purpose or dedicated CPUs, such as high-end PCs, Network Processors, or smart NICs
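The two tasks can be sketched in a few lines of Python. This is a hypothetical illustration: the event/source identifiers and the completeness criterion (one fragment per source) are assumptions, not the actual LHCb protocol.

```python
from collections import defaultdict

class EventBuilder:
    """Collects fragments from all sources and assembles complete events.

    Hypothetical sketch: fragments carry (event_id, source_id, payload);
    an event is complete once every source has contributed one fragment.
    """

    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)  # event_id -> {source_id: payload}

    def add_fragment(self, event_id, source_id, payload):
        """Store one fragment; return the assembled event when complete."""
        self.pending[event_id][source_id] = payload
        if len(self.pending[event_id]) == self.n_sources:
            fragments = self.pending.pop(event_id)
            # Arrange fragments in source order into one contiguous event
            return b"".join(fragments[s] for s in sorted(fragments))
        return None
```

Fragments may arrive in any order; the builder buffers them per event and emits the concatenated event only when the last source has reported.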

5
Readout Network
  • Most likely choice for the network technology: Gigabit Ethernet
  • Myrinet has also been studied
  • The Readout Network will be a rather large (128 x 128) switching network
  • It must sustain at least 40 kHz of 1000-byte fragments
  • It should provide enough margin to increase the input rate to 100 kHz
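A back-of-the-envelope check of these requirements (assumed round numbers: 128 sources, one per input link, and Gigabit Ethernet at 125 MB/s per link):

```python
# Rough check of the readout-network requirements.
# Assumed figures: 128 sources, 1 Gbit/s = 125e6 bytes/s per link.
N_SOURCES = 128
FRAGMENT_SIZE = 1000        # bytes per fragment and source
LINK_CAPACITY = 125e6       # bytes/s for Gigabit Ethernet

def aggregate_rate(trigger_khz):
    """Total data volume entering the network, in bytes/s."""
    return trigger_khz * 1e3 * FRAGMENT_SIZE * N_SOURCES

def link_load(trigger_khz):
    """Fractional load on each input link (one source per link)."""
    return trigger_khz * 1e3 * FRAGMENT_SIZE / LINK_CAPACITY

print(aggregate_rate(40) / 1e9)   # 5.12 -> about 5 GB/s at 40 kHz
print(link_load(40))              # 0.32 -> 32% per input link
print(link_load(100))             # 0.8  -> 80%: little margin left
```

At 100 kHz a single Gigabit link per source runs at 80% load, which is why the topology studies below spread traffic over multiple links per connection.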

6
Implementation of Gigabit Ethernet Switching
Network for Event Building
  • Conventional
  • Large campus/MAN switches (e.g. Foundry BigIron with 120 Gigabit Ethernet ports)
  • Alternative
  • Re-use of NP-based DAQ modules (see the presentation by B. Jost)
  • The basic building block is a 4x4 programmable switch, giving full control and maximum flexibility (in particular for flow control)

7
Network Topology: The Two Crucial Questions
  • How to build a 128 x 128 network out of building blocks with n x n ports
  • when n is small, e.g. 4
  • when n is big, e.g. 60
  • How to optimise the usage of the installed bandwidth, taking into account the direction of the dataflow in the DAQ system
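For a classic Banyan (multistage) construction, the module count can be estimated as ceil(log_n N) stages of N/n modules each. The sketch below is an estimate under that assumption (exact counts depend on the variant); for n = 4 it reproduces the 128 and 256 modules quoted on the "Banyan Network for 4x4 Modules" slide.

```python
def banyan_modules(N, n):
    """Modules needed for an N x N Banyan network built from n x n
    switches: ceil(log_n N) stages, each with N/n modules.
    The stage count is found with integer arithmetic to avoid
    floating-point log rounding."""
    stages = 1
    while n ** stages < N:
        stages += 1
    return stages * (N // n)

print(banyan_modules(128, 4))   # 128 modules of 4x4 at 40 kHz
print(banyan_modules(256, 4))   # 256 modules for the doubled network
```

Small building blocks need many stages (4 stages of 4x4 switches for 128 ports), which is what drives the module counts on the following slides.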

8
Banyan Network From Large Switches
For a rate of 100 kHz:

                               15 links/connection   20 links/connection
  # of input or output links          240                   180
  Max load on a single link      60 MB/s (50%)         84 MB/s (70%)
  Maximum fragment size            625 Bytes             875 Bytes
9
Optimised Network From Large Switches
  • 8 links per connection: 240 x 240 ports, effective load 40% @ 100 kHz
  • 11 links per connection: 174 x 174 ports, effective load 72% @ 100 kHz
10
Banyan Network for 4x4 Modules
  • At 40 kHz: 40% load on input links, 39% load on internal links, 128 modules needed
  • At 100 kHz: 50% load on input links, 49% load on internal links, 256(!) modules needed
11
Alternative Topology for 4x4 Modules
  • 3 fully connected 4x4 modules make a 9 x 9 module
  • 6 fully connected 9 x 9 modules make a 39 x 39 module
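One way to read this construction: each pair of modules is joined by a single link, consuming one input and one output port overall, so k fully connected n x n modules expose k·n − k(k−1)/2 ports per direction. A sketch of that arithmetic (not the actual cabling plan):

```python
def external_ports(k, n):
    """External port count when k n x n modules are fully interconnected,
    assuming each of the k*(k-1)/2 module pairs is joined by one link,
    which consumes one input port and one output port in total."""
    return k * n - k * (k - 1) // 2

print(external_ports(3, 4))  # 9  -> three 4x4 modules make a 9x9
print(external_ports(6, 9))  # 39 -> six 9x9 modules make a 39x39
```

This two-level construction (3 x 4x4 -> 9x9, then 6 x 9x9 -> 39x39) uses 18 elementary 4x4 modules per 39x39 block, consistent with the 90 modules quoted for five blocks on the next slide.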
12
Fully connected 125x128 network
  • Consists of five 39x39 modules
  • Load on input ports 40% @ 40 kHz
  • Load on internal links 34% @ 40 kHz
  • 90 4x4 modules needed in total

13
Nitty Gritty Connectivity
Cabling and setting up the routing tables become an issue!
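As a toy illustration of the bookkeeping involved (the port layout below is hypothetical, not the LHCb scheme): every module needs a table mapping each final destination to one of its output ports, and table sizes grow with the whole network, not with the module.

```python
def routing_tables(n_modules, dests_per_module):
    """One static routing table per module: destination -> output port.

    Hypothetical port layout: ports 0..dests_per_module-1 are the
    module's local external outputs; port dests_per_module+i is the
    direct link to the i-th other module. Destinations are numbered
    globally, dests_per_module per module.
    """
    tables = []
    for m in range(n_modules):
        peers = [p for p in range(n_modules) if p != m]
        table = {}
        for dest in range(n_modules * dests_per_module):
            owner, local = divmod(dest, dests_per_module)
            if owner == m:
                table[dest] = local                      # deliver locally
            else:
                # forward on the direct link to the owning module
                table[dest] = dests_per_module + peers.index(owner)
        tables.append(table)
    return tables

tabs = routing_tables(5, 25)
print(len(tabs), len(tabs[0]))   # 5 modules, 125 entries per table
```

Even in this toy setup, five modules with 25 destinations each already mean 5 tables of 125 entries that must all agree with the physical cabling.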
14
Conclusions
  • The LHCb event building will be done using a Gigabit Ethernet switching network
  • Event fragments will flow freely from the front-end links to the entry points of a CPU farm, without synchronisation
  • The switching network has a considerable size and cost
  • Optimised network topologies can take advantage of the unidirectional data flow