The LHCb experiment is a hadronic precision experiment at the LHC accelerator, aimed mainly at studying b-physics by profiting from the large b-anti-b production rate at the LHC. The challenge of high trigger efficiency has driven the choice of a readout architecture in which the main event filtering is performed by a software trigger, with access to all detector information, on a processing farm of commercial multi-core PCs. The readout architecture therefore features only a relatively relaxed hardware trigger with a fixed, short latency, accepting events at 1 MHz out of a nominal proton collision rate of 30 MHz, and a high-bandwidth readout with event-fragment assembly over Gigabit Ethernet. A fast central system performs all synchronization, event labelling and readout control, as well as event management: destination control, dynamic load balancing of the readout network and the farm, and handling of special events for calibration and luminosity measurements. The event filter farm processes events in parallel and reduces the physics event rate to about 2 kHz; the accepted events are formatted and written to disk before transfer to offline processing. A spy mechanism allows a fraction of the events to be processed and reconstructed for online quality checking. In addition, a 5 Hz subset of the events is sent offline as an express stream for checking calibrations and software before the full offline processing is launched on the main event stream.
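The destination control and load balancing mentioned above can be illustrated with a minimal sketch, assuming a simple least-loaded policy; the node names and the heap-based bookkeeping are purely illustrative, not the experiment's actual event manager:

```python
import heapq

class DestinationAssigner:
    """Send each new event to the farm node with the smallest backlog."""

    def __init__(self, nodes):
        self._heap = [(0, name) for name in nodes]  # (pending events, node)
        heapq.heapify(self._heap)

    def assign(self):
        # Pop the least-loaded node, charge it one more event, and put it back.
        pending, node = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (pending + 1, node))
        return node

assigner = DestinationAssigner(["farm01", "farm02", "farm03"])
destinations = [assigner.assign() for _ in range(6)]
# With equal backlogs the assignment degenerates to a round-robin.
```

In the real system the central unit would additionally fold in feedback from the nodes (events completed, buffer occupancy) before choosing a destination.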
LHCb is one of the four experiments at the LHC accelerator at CERN. LHCb operates approximately 1500 PCs for running the High Level Trigger (HLT) during physics data acquisition. During periods when data acquisition is not required, or when the resources needed for it are reduced, most of these PCs are idle or only lightly used. In these periods the unused processing capacity can be exploited to reprocess earlier datasets with the newest applications (code and calibration constants), thus reducing the CPU capacity needed on the Grid. The offline computing environment is based on LHCbDIRAC (Distributed Infrastructure with Remote Agent Control), which processes physics data on the Grid. In DIRAC, agents started on worker nodes pull available jobs from the central DIRAC WMS (Workload Management System) and process them on the available resources. A control system was developed that can launch, control and monitor the agents for offline data processing on the HLT farm. It does so without overwhelming the offline resources (e.g. databases), and if the accelerator schedule changes it can quickly return the borrowed resources to online use. This control system is based on the existing Online System Control infrastructure, the PVSS SCADA system and the FSM toolkit.
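The pull model described above, together with the requirement that resources be returned promptly to online use, can be sketched as follows. This is an illustrative stand-in, not the actual LHCbDIRAC API: the in-process queue stands in for the central WMS, and the stop event stands in for a command from the control system:

```python
import queue
import threading

class PullAgent(threading.Thread):
    """Illustrative pilot agent: pulls jobs from a central queue, never pushed."""

    def __init__(self, wms_queue, results):
        super().__init__()
        self.wms = wms_queue                      # stands in for the DIRAC WMS
        self.results = results
        self.stop_requested = threading.Event()   # set by the control system

    def run(self):
        while not self.stop_requested.is_set():
            try:
                job = self.wms.get(timeout=0.05)  # pull: poll the central queue
            except queue.Empty:
                continue                          # nothing matched; try again
            self.results.append(job)              # "process" the job
            self.wms.task_done()

wms = queue.Queue()
for job_id in range(3):
    wms.put(job_id)

done = []
agent = PullAgent(wms, done)
agent.start()
wms.join()                     # block until every queued job is processed
agent.stop_requested.set()     # accelerator schedule changed: reclaim the node
agent.join()
```

The key property is that the agent checks the stop flag between jobs, so the node can be handed back to the online system within one job's duration.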
The High Level Trigger (HLT) and Data Acquisition (DAQ) system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are consolidated into files on onsite storage and then sent to permanent storage for subsequent analysis on the Grid. For local and full-chain tests, a method is needed to exercise the data flow through the High Level Trigger when no real data are available. To test the system under conditions as close as possible to data-taking, the solution is to inject data at the input of the HLT at a minimum rate of 2 kHz. This is done with a software implementation of the trigger system that sends data to the HLT; the application must make the data it sends appear to come from real LHCb readout boards. Both simulated data and previously recorded real data can be replayed through the system in this manner. As the data rate is high (100 MB/s), care has been taken to optimise the emulator for throughput from the Storage Area Network (SAN). The emulator can be run in stand-alone mode or as a pseudo-subdetector of LHCb, allowing use of all the standard run-control tools. The architecture, implementation and performance of the emulator will be presented.
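The pacing implied by the numbers in the abstract can be made concrete with a small sketch, using only the figures quoted there (2 kHz, 100 MB/s); the function is a hypothetical helper, not part of the emulator:

```python
def send_schedule(n_events, rate_hz, t0=0.0):
    """Timestamp at which each of n_events should leave the emulator
    to sustain a fixed injection rate."""
    period = 1.0 / rate_hz
    return [t0 + i * period for i in range(n_events)]

times = send_schedule(5, 2000.0)   # 2 kHz -> one event every 0.5 ms
mean_event_size = 100e6 / 2000.0   # 100 MB/s at 2 kHz -> 50 kB per event
```

A real replay loop would read ahead from the SAN and sleep until each scheduled timestamp, so that bursty disk reads do not translate into bursty injection.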
This paper focuses on the management of raw event data files containing the electronic response to particle collisions in the LHCb detector at the LHC at CERN. The typical file life cycle is presented, from the writing of raw event streams into files to their transfer to the CERN tape storage system and to the offline Grid distributed processing infrastructure. A distributed solution centered on a dedicated database has been implemented to address the raw event data management needs. The solution is integrated with the data acquisition software chain and with the experiment control system. The software architecture and hardware configurations are explained, together with the testing methods, current development status and future plans.
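The file life cycle described above is naturally modelled as a small state machine. The sketch below is a hedged illustration: the state and action names are invented for this example and are not the experiment's actual database schema:

```python
# Illustrative life-cycle states: written -> closed -> on tape -> on the Grid.
TRANSITIONS = {
    "OPEN":    {"close": "CLOSED"},       # stream closed, file complete
    "CLOSED":  {"migrate": "ON_TAPE"},    # copied to CERN tape storage
    "ON_TAPE": {"register": "ON_GRID"},   # registered for Grid processing
    "ON_GRID": {},                        # terminal state
}

def advance(state, action):
    """Apply one life-cycle action; reject anything the state forbids."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} from {state!r}")

state = "OPEN"
for action in ("close", "migrate", "register"):
    state = advance(state, action)
```

Keeping the allowed transitions in one table makes it cheap for the bookkeeping database to reject out-of-order updates (e.g. migrating a file that was never closed).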
A modular power supply system for use in the Ring Imaging Cherenkov detectors of the LHCb experiment is described. The main characteristics of the supply are very good time stability and voltage resolution, full programmability, floating outputs, self-protection and remote control. The choice of a commercial HV module with standard control inputs allows easy customisation of the design to different requirements. The realisation shown here supplies voltages up to 20 kV, with a maximum current of 0.5 mA and up to 32 channels on a single crate.
We report on measurements performed to test the reliability of high-rate data transmission over copper Gigabit Ethernet for the LHCb online system. High reliability of such transmissions is crucial for the functioning of the software trigger layers of the LHCb experiment at CERN's LHC accelerator. The technological challenge in the system implementation is handling the expected high throughput of event fragments using, to a large extent, commodity equipment. We report on performance evaluations (throughput, error rates and frame drops) of the main components involved in data transmission: the Ethernet cable, the PCI bus and the operating system (the latest kernel versions of Linux). Three different platforms have been used.
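The figures of merit in such measurements reduce to two simple ratios. The sketch below shows them with made-up counter values (chosen only for illustration, not taken from the measurements reported here):

```python
def throughput_mbps(bytes_moved, seconds):
    """Throughput in megabits per second from bytes moved over wall time."""
    return bytes_moved * 8 / seconds / 1e6

def drop_rate(frames_sent, frames_received):
    """Fraction of frames lost, from sender and receiver interface counters."""
    return (frames_sent - frames_received) / frames_sent

# Illustrative numbers: 12.5 MB in 0.1 s saturates a Gigabit link.
mbps = throughput_mbps(bytes_moved=12_500_000, seconds=0.1)
drops = drop_rate(frames_sent=1_000_000, frames_received=999_990)
```

In practice the interesting part is where the drops occur: NIC counters, kernel socket-buffer overruns and PCI-bus stalls have to be read out separately to attribute the loss.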
The LHCb experiment's event building is performed over a Gigabit Ethernet switched network. One specific step of event building is implemented in software running on a gateway PC, whose role is to gather data packets from the data sources, rebuild events and forward them to computing nodes that run the trigger algorithms. In this article we concentrate on the implementation of this component on a Linux system. While implementing the software, we made thorough studies of the kernel and profiled the applications, leading to significant performance improvements. More importantly, these studies also improved predictability, thanks to a good understanding of the whole system. We use this application to illustrate possible improvements to system software for data acquisition, describing in detail the implementation choices and the related operating-system kernel code. These techniques and observations are generic enough to be applied to other similar systems.
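The core of the event-assembly step, gathering fragments from many sources until an event is complete, can be sketched as follows. This is a minimal illustration, not the gateway implementation; the source names are hypothetical (loosely modelled on readout-board identifiers):

```python
class EventBuilder:
    """Collect per-event fragments from a fixed set of data sources."""

    def __init__(self, sources):
        self.sources = set(sources)
        self.partial = {}                # event_id -> {source: fragment}

    def add_fragment(self, event_id, source, payload):
        """Store one fragment; return the full event once every source
        has contributed, otherwise None."""
        frags = self.partial.setdefault(event_id, {})
        frags[source] = payload
        if set(frags) == self.sources:
            return self.partial.pop(event_id)   # complete: hand off downstream
        return None

eb = EventBuilder(["board_a", "board_b"])
assert eb.add_fragment(42, "board_a", b"\x01") is None   # still incomplete
event = eb.add_fragment(42, "board_b", b"\x02")          # event 42 complete
```

A production version additionally needs timeouts for events whose fragments never all arrive, which is exactly where the predictability of kernel networking behaviour matters.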
The LHCb experiment at CERN has a large number of custom electronics boards performing high-speed data processing. As in any large experiment, the control and monitoring of these crate-mounted boards must be integrated into the overall control system. Traditionally this has been done using buses on the backplane of the crates, such as VME. LHCb has instead chosen to equip every board with an embedded micro-controller and to connect them in a large Local Area Network. The intelligence of these devices allows the complex (soft) real-time control and monitoring required for modern field-programmable gate array (FPGA) driven electronics. Moreover, each board has its own isolated control access path, which increases the robustness of the entire system. The system is now in pre-production at several sites and will go into full production during the next year. The hardware and software are discussed, and experience from the R&D and pre-production is reviewed, with an emphasis on the advantages and difficulties of this approach to board control.
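The robustness argument, each board having its own isolated control path, can be sketched with a hypothetical polling loop; the board names and the probe function are stand-ins for a real network request to a board's micro-controller:

```python
def poll_boards(boards, probe):
    """Query every board independently: a dead board marks only itself
    unreachable and never blocks monitoring of the others."""
    status = {}
    for name in boards:
        try:
            status[name] = probe(name)       # e.g. read out an FPGA status word
        except OSError:
            status[name] = "UNREACHABLE"     # failure isolated to this board
    return status

def fake_probe(name):
    """Stand-in for a network request; board2 simulates a dead board."""
    if name == "board2":
        raise OSError("no route to host")
    return "OK"

status = poll_boards(["board1", "board2", "board3"], fake_probe)
```

Contrast this with a shared backplane bus, where a single misbehaving board can hang the bus and take the whole crate's control path down with it.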
The LHCb high level trigger infrastructure. Frank, M; Gaspar, C; Herwijnen, E v; et al. Journal of Physics: Conference Series, 07/2008, Vol. 119, No. 2. Journal article, peer-reviewed, open access.
The High Level Trigger and Data Acquisition system of the LHCb experiment at the CERN Large Hadron Collider must handle proton-proton collisions from beams crossing at 40 MHz. After a hardware-based first-level trigger, events have to be processed at a rate of 1 MHz and filtered by purely software-based trigger applications executing in a high-level-trigger (HLT) farm consisting of up to 2000 CPUs built from commodity hardware. The final rate of accepted events is around 2 kHz. This contribution describes the architecture used to host the selection algorithms of the high level trigger on each trigger node, which is based on shared-memory event buffers. It illustrates the interplay between event-building processes, event-filter processes and the processes sending accepted events to the storage system, and describes the software components involved, which are based on the Gaudi event-processing framework.
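The interplay between builder, filter and sender processes around a bounded event buffer can be sketched in miniature. This is an illustrative single-process model, not the shared-memory buffer manager used online; the event fields and the trigger condition are invented for the example:

```python
from collections import deque

class EventBuffer:
    """Bounded buffer: builders declare events, filters consume them."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.events = deque()

    def declare(self, event):
        """Builder side: refuse new events when the buffer is full,
        which back-pressures the event builder."""
        if len(self.events) >= self.capacity:
            return False
        self.events.append(event)
        return True

    def consume(self):
        """Filter side: take the oldest pending event, if any."""
        return self.events.popleft() if self.events else None

buf = EventBuffer(capacity=2)
accepted = []
for evt in ({"id": 1, "pt": 5.0}, {"id": 2, "pt": 0.1}, {"id": 3, "pt": 7.0}):
    buf.declare(evt)                 # the third declare is refused: buffer full
while (evt := buf.consume()) is not None:
    if evt["pt"] > 1.0:              # stand-in for the trigger decision
        accepted.append(evt["id"])   # would be handed to the sender process
```

In the real system the buffer lives in shared memory so that the builder, the many filter processes and the sender are separate OS processes coupled only through event declarations and consumer claims.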