SHiP is a newly proposed fixed-target experiment at the CERN SPS accelerator. The goal of the experiment is to search for hidden particles predicted by models of Hidden Sectors. The purpose of the SHiP Spectrometer Tracker is to reconstruct the tracks of charged particles from the decay of neutral New Physics objects with high efficiency. The goal of this work is to develop a pattern recognition method based on the SHiP Spectrometer Tracker design.
We present a software architecture and framework that can be used to facilitate the development of data processing applications for High Energy Physics experiments. The development strategy follows an architecture-centric approach as a way of creating a resilient software framework that can withstand changes in requirements and technology over the long lifetimes of experiments. The software architecture, called GAUDI, supports event data processing applications that run in different processing environments, from the high level triggers in the on-line system to the final physics analysis. We present our major architectural design choices and outline the arguments that led to these choices. Several iterations of a software framework based on this architecture have been released, and the framework is now being used by the physicists of the collaboration to facilitate the development of data processing algorithms. Object-oriented technologies have been used throughout.
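A central and well-documented feature of GAUDI is the separation between algorithms, which the framework drives through initialize/execute/finalize calls, and a transient event store through which data objects flow. The sketch below illustrates that pattern only; the class and path names are illustrative, not the real Gaudi C++ API.

```python
# Illustrative sketch of the Gaudi-style algorithm / transient-store
# separation. Names ("TransientEventStore", "/Event/Hits") are assumptions
# for this example, not actual Gaudi identifiers.
class TransientEventStore:
    """Holds per-event data objects, keyed by a path-like string."""
    def __init__(self):
        self._objects = {}

    def register(self, path, obj):
        self._objects[path] = obj

    def retrieve(self, path):
        return self._objects[path]


class Algorithm:
    """Base class: the framework calls initialize/execute/finalize."""
    def __init__(self, store):
        self.store = store

    def initialize(self):
        pass

    def execute(self):
        raise NotImplementedError

    def finalize(self):
        pass


class TrackCounter(Algorithm):
    """Toy algorithm: reads hits from the store, publishes a count."""
    def execute(self):
        hits = self.store.retrieve("/Event/Hits")
        self.store.register("/Event/NTracks", len(hits))


# A minimal event loop, as the framework would drive it for one event.
store = TransientEventStore()
algs = [TrackCounter(store)]
for alg in algs:
    alg.initialize()
store.register("/Event/Hits", [(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)])
for alg in algs:
    alg.execute()
for alg in algs:
    alg.finalize()
```

Because algorithms only exchange data through the store, the same algorithm code can run unchanged in different processing environments, which is the portability argument the abstract makes.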
The high data rates at the LHC necessitate the use of biasing selections already at the trigger level. Consequently, correcting the biases induced by these selections becomes one of the main challenges for analyses. This paper presents the LHCb implementation of a data-driven method for extracting such biases which entirely avoids uncertainties associated with detector simulation. Its novelty lies in the LHCb trigger, which is implemented entirely in software, allowing its decisions to be reproduced exactly offline. It is demonstrated that this method allows the control of selection biases to better than 0.1%, and that it greatly enhances the range of physics which can be performed by the LHCb experiment. The implications of trigger and software architectures for the long-term viability of this method, in particular with respect to the reproducibility of trigger decisions when running the same code on different underlying hardware or compilers, are discussed.
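The key property the abstract relies on is that a pure-software trigger is a deterministic function of the raw event, so re-running it offline must reproduce every recorded decision bit-for-bit. A minimal sketch of that check, with an invented one-cut "trigger" and invented event fields (not LHCb's selection):

```python
# Hedged sketch: a deterministic toy trigger re-run offline against the
# decisions recorded online. The cut and the event fields are assumptions
# for illustration only.
def trigger_decision(event):
    """Toy one-line trigger: accept if any track pT exceeds a threshold."""
    PT_THRESHOLD = 1.7  # GeV, illustrative value
    return any(pt > PT_THRESHOLD for pt in event["track_pt"])

events = [
    {"track_pt": [0.4, 2.1], "online_decision": True},
    {"track_pt": [0.9, 1.2], "online_decision": False},
    {"track_pt": [3.0],      "online_decision": True},
]

# Offline re-run: any mismatch would signal non-reproducibility,
# e.g. from floating-point differences across hardware or compilers.
mismatches = sum(
    trigger_decision(ev) != ev["online_decision"] for ev in events
)
```

The hardware/compiler concern in the abstract is exactly about keeping `mismatches` at zero when the same code runs on a different platform.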
Today's computing elements for software-based high level trigger (HLT) processing are nodes with multiple cores. Using process-based parallelization to filter particle collisions from the LHCb experiment on such nodes leads to expensive memory consumption and hence a significant cost increase. In the following, an approach is presented to both minimize the resource consumption of the filter applications and reduce their startup time. We describe the duplication of threads and the handling of files open in read-write mode when forking filter processes, as well as the possibility of bootstrapping the event filter applications directly from preconfigured checkpoint files. This reduced memory consumption in the nodes of the LHCb HLT farm by roughly 60% and improved the startup time by a factor of 10.
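The memory saving comes from forking workers after the expensive configuration step, so the configured pages are shared copy-on-write instead of being duplicated per process. A minimal POSIX sketch of that idea (assuming a Unix-like system; this is not the LHCb checkpointing code, which additionally handles threads and read-write file descriptors):

```python
import os

# Hedged sketch of fork-after-configure: the parent builds the "expensive"
# configuration once, then forks workers that read it via copy-on-write
# pages. The configuration content is a stand-in.
config = {"threshold": 1.7, "geometry": list(range(1000))}

def run_worker(worker_id):
    # Each worker reads the shared configuration without copying it;
    # returns a process exit code (0 = success).
    return 0 if config["threshold"] > 0 else 1

children = []
for wid in range(4):
    pid = os.fork()
    if pid == 0:                       # child: run the filter and exit
        os._exit(run_worker(wid))
    children.append(pid)               # parent: remember child PIDs

statuses = [os.waitpid(pid, 0)[1] for pid in children]
all_ok = all(os.waitstatus_to_exitcode(s) == 0 for s in statuses)
```

Checkpoint bootstrapping goes one step further: instead of re-running the configuration at startup, the preconfigured process image is restored from a file, which is where the factor-10 startup improvement comes from.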
The LHCb high level trigger infrastructure. Frank, M; Gaspar, C; Herwijnen, E v; et al. Journal of Physics: Conference Series, 07/2008, Volume 119, Issue 2. Journal article, peer-reviewed, open access.
The High Level Trigger and Data Acquisition system of the LHCb experiment at the CERN Large Hadron Collider must handle proton-proton collisions from beams crossing at 40 MHz. After a hardware-based first level trigger, events have to be processed at a rate of 1 MHz and filtered by purely software-based trigger applications executing in a high level trigger (HLT) farm consisting of up to 2000 CPUs built of commodity hardware. The final rate of accepted events is around 2 kHz. This contribution describes the architecture used to host the selection algorithms of the high level trigger on each trigger node, which is based on shared memory event buffers. It illustrates the interplay between event building processes, event filter processes and processes sending accepted events to the storage system. It describes these software components, all of which are based on the Gaudi event processing framework.
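The per-node chain the abstract describes can be pictured as three stages exchanging events through buffers: a builder fills a shared event buffer, filter workers consume it, and a sender drains the accepted events to storage. The sketch below simulates that chain in-process with plain queues; the shared-memory mechanics, process boundaries and selection are all simplified away.

```python
from collections import deque

# Toy single-process stand-in for the builder -> filter -> sender chain.
# The deques stand in for the shared-memory event buffers; the "energy"
# cut is an invented selection.
event_buffer = deque()     # filled by event building, read by filters
output_buffer = deque()    # accepted events awaiting the storage sender

def build_events(n):
    for i in range(n):
        event_buffer.append({"id": i, "energy": (i * 37) % 100})

def filter_events(threshold):
    while event_buffer:
        ev = event_buffer.popleft()
        if ev["energy"] > threshold:      # toy trigger selection
            output_buffer.append(ev)

def send_events():
    sent = []
    while output_buffer:
        sent.append(output_buffer.popleft()["id"])
    return sent

build_events(10)
filter_events(threshold=50)
sent_ids = send_events()
```

In the real system each stage is a separate process, so the buffers must live in shared memory and access is synchronised; decoupling the stages this way lets filtering keep up with bursty event building.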
The LHCb experiment is a hadronic precision experiment at the LHC accelerator aimed mainly at studying b-physics by profiting from the large b-anti-b production at the LHC. The challenge of high trigger efficiency has driven the choice of a readout architecture allowing the main event filtering to be performed by a software trigger with access to all detector information on a processing farm based on commercial multi-core PCs. The readout architecture therefore features only a relatively relaxed hardware trigger with a fixed and short latency, accepting events at 1 MHz out of a nominal proton collision rate of 30 MHz, and high bandwidth with event fragment assembly over Gigabit Ethernet. A fast central system performs the entire synchronization, event labelling and control of the readout, as well as event management including destination control, dynamic load balancing of the readout network and the farm, and handling of special events for calibrations and luminosity measurements. The event filter farm processes the events in parallel and reduces the physics event rate to about 2 kHz; these events are formatted and written to disk before transfer to the offline processing. A spy mechanism allows processing and reconstructing a fraction of the events for online quality checking. In addition, a 5 Hz subset of the events is sent as an express stream to offline for checking calibrations and software before launching the full offline processing on the main event stream.
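The rates quoted above imply the following reduction factors, checked here as simple arithmetic (the quoted rates come from the abstract; the derived factors are just their ratios):

```python
# Back-of-envelope check of the quoted rates and reduction factors.
collision_rate_hz = 30e6   # nominal proton collision rate
hw_accept_hz      = 1e6    # hardware trigger output rate
sw_accept_hz      = 2e3    # software trigger output rate to disk

hw_reduction    = collision_rate_hz / hw_accept_hz   # hardware stage
sw_reduction    = hw_accept_hz / sw_accept_hz        # software stage
total_reduction = collision_rate_hz / sw_accept_hz   # end to end
express_fraction = 5 / sw_accept_hz                  # 5 Hz express stream
```

So the "relaxed" hardware trigger only rejects a factor of 30, leaving the dominant factor of 500 to the software trigger, which is exactly the design intent of giving the filter farm access to all detector information.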
The LHCb experiment at CERN will have an on-line trigger farm composed of up to 2000 PCs. In order to monitor and control each PC and to supervise the overall status of the farm, a farm monitoring and control system (FMC) was developed. The FMC is based on the distributed information management (DIM) system as its network communication layer; it is accessible both through a command line interface and through the Prozessvisualisierungs- und Steuerungssystem (PVSS) graphical interface, and it is interfaced to the finite state machine (FSM) of the LHCb experiment control system (ECS) in order to manage anomalous farm conditions. The FMC is an integral part of the ECS, which is in charge of monitoring and controlling all on-line components; it uses the same tools (DIM, PVSS, FSM, etc.) to guarantee its complete integration and a coherent look and feel throughout the whole control system.
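DIM provides a publish/subscribe layer of named services, which is what lets per-PC sensors publish values that the control layer reacts to. The sketch below mimics that pattern with a toy in-process broker; it is not the DIM API, and the service name, threshold and alarm reaction are invented for illustration.

```python
# Minimal publish/subscribe sketch in the spirit of a DIM-like service
# layer (illustrative; not the DIM API).
class Broker:
    def __init__(self):
        self.subscribers = {}
        self.last_value = {}

    def subscribe(self, service, callback):
        self.subscribers.setdefault(service, []).append(callback)

    def publish(self, service, value):
        self.last_value[service] = value
        for cb in self.subscribers.get(service, []):
            cb(value)


broker = Broker()
alarms = []

# A control-layer subscriber reacting to an anomalous condition,
# e.g. an overheating farm node (threshold is an assumed value).
def on_temperature(t):
    if t > 80:
        alarms.append(t)

broker.subscribe("hlt01/cpu_temp", on_temperature)
broker.publish("hlt01/cpu_temp", 55)   # normal reading: no alarm
broker.publish("hlt01/cpu_temp", 91)   # anomalous reading: alarm raised
```

In the FMC the reaction side is handled by the FSM layer of the ECS, which can take recovery actions rather than just recording the alarm.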
The High Level Trigger and Data Acquisition system selects about 2 kHz of events out of the 40 MHz of beam crossings. The selected events are sent to permanent storage for subsequent analysis. In order to ensure the quality of the collected data, identify possible malfunctions of the detector and perform calibration and alignment checks, a small fraction of the accepted events is sent to a monitoring farm, which consists of a few tens of general purpose processors. This contribution introduces the architecture of the data stream splitting mechanism from the storage system to the monitoring farm, where the raw data are analyzed by dedicated tasks. It describes the collaborating software components, which are all based on the Gaudi event processing framework.
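The essential routing property is that every accepted event reaches permanent storage, while a small fraction is duplicated to the monitoring farm. A minimal sketch with a deterministic prescale (the 1-in-100 value is an assumption for illustration, not the LHCb fraction):

```python
# Hedged sketch of splitting the accepted-event stream: storage always,
# monitoring for a prescaled subset. The prescale value is assumed.
MONITORING_PRESCALE = 100

def route(event_id):
    destinations = ["storage"]                 # every event is stored
    if event_id % MONITORING_PRESCALE == 0:    # deterministic prescale
        destinations.append("monitoring")      # duplicate, not divert
    return destinations

to_storage = sum("storage" in route(i) for i in range(1000))
to_monitoring = sum("monitoring" in route(i) for i in range(1000))
```

Duplicating rather than diverting the events is what makes the monitoring non-invasive: the physics stream to storage is unaffected by the state of the monitoring farm.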
LHCb Online event processing and filtering. Alessio, F; Barandela, C; Brarda, L; et al. Journal of Physics: Conference Series, 07/2008, Volume 119, Issue 2. Journal article, peer-reviewed, open access.
The first level trigger of LHCb accepts one million events per second. After preprocessing in custom FPGA-based boards, these events are distributed to a large farm of PC servers using a high-speed Gigabit Ethernet network. Synchronisation and event management are achieved by the Timing and Trigger system of LHCb. Due to the complex nature of the selection of B-events, which are the main interest of LHCb, a full event readout is required. Event processing on the servers is parallelised on an event basis. The reduction factor is typically 1/500. The remaining events are forwarded to a formatting layer, where the raw data files are formed and temporarily stored. A small part of the events is also forwarded to a dedicated farm for calibration and monitoring. The files are subsequently shipped to the CERN Tier0 facility for permanent storage and from there to the various Tier1 sites for reconstruction. In parallel, the files are used by various monitoring and calibration processes running within the LHCb Online system. The entire data flow is controlled and configured by means of a SCADA system and several databases. After an overview of the LHCb data acquisition and its design principles, this paper emphasizes the LHCb event filter system, which is now implemented using the final hardware and will be ready for data-taking at the LHC startup. Control, configuration and security aspects are also discussed.
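Because events are processed independently, distributing them over the farm reduces to destination assignment by the central event management, skipping nodes that are out of service. The sketch below simplifies the credit-based load balancing to an "available" flag and round-robin assignment; node names and counts are invented.

```python
# Toy sketch of destination control for event distribution over farm
# nodes. Real load balancing is credit/throttle based; here it is
# reduced to an availability flag plus round-robin.
nodes = [{"name": f"hlt{i:02d}", "available": True, "assigned": 0}
         for i in range(5)]
nodes[3]["available"] = False   # pretend one node dropped out of the run

def assign(event_id):
    candidates = [n for n in nodes if n["available"]]
    node = candidates[event_id % len(candidates)]  # round-robin choice
    node["assigned"] += 1
    return node["name"]

for ev in range(100):
    assign(ev)
loads = [n["assigned"] for n in nodes]
```

The point of putting this decision in a fast central system is that a failed or slow node can be excluded from the destination list without stopping the 1 MHz event flow to the rest of the farm.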