During the second half of 2006, a first combined test of almost all sub-systems of the Compact Muon Solenoid (CMS) experiment at CERN's new Large Hadron Collider was performed. The test was carried out in a surface assembly hall prior to the currently ongoing installation in the underground experimental cavern. Partial configurations of the CMS sub-detectors were successfully interfaced with a scaled-down setup of the central Data Acquisition and Run Control systems, with the Level-1 Trigger, and with the Detector Control and Safety Systems. The superconducting solenoid was operated stably at its design field strength of 4 T. Several million events were reconstructed, stored, and transferred to the Tier-0 computing centre and to remote sites. This paper reports on this first operation of CMS as an integrated system.
The proposed method is designed for a data acquisition system that acquires data from n independent sources. The sources are assumed to produce fragments that together constitute a logical whole; these fragments are produced at the same frequency and in the same sequence. The algorithm aims to balance the data dynamically between m logically autonomous processing units (each consisting of computing nodes) when their processing power varies, for example because of faults such as failing computing nodes or broken network connections. As a case study we consider the Data Acquisition System of the Compact Muon Solenoid experiment at CERN's new Large Hadron Collider. The system acquires data from about 500 sources and combines them into full events. Each source is expected to deliver event fragments of an average size of 2 kB at a frequency of 100 kHz. We present the results of applying the proposed load metric and load-communication pattern, and discuss their impact on the algorithm's overall efficiency and scalability, as well as on the fault tolerance of the whole system. We also propose a general concept of an algorithm that lets all source nodes choose the destination processing unit asynchronously while guaranteeing that all fragments of the same logical data always go to the same unit.
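The core idea of the asynchronous destination choice can be sketched as follows (a minimal Python illustration with hypothetical names, not the paper's actual algorithm): if the mapping depends only on the event identifier and on state that is shared identically by all sources, every source arrives at the same choice without any inter-source communication. The dynamic load weighting described in the paper is omitted here.

```python
def destination(event_id: int, num_units: int) -> int:
    """Map a logical event ID to a processing unit.

    Every source computes this independently; because the mapping
    depends only on the event ID and the shared unit count, all
    fragments of one event converge on the same unit. A weighted
    variant would additionally consult a load table broadcast to
    all sources (the paper's load metric).
    """
    return event_id % num_units

# Simulate four independent sources assigning the same event stream.
num_sources, num_units = 4, 3
for event_id in range(10):
    choices = {destination(event_id, num_units) for _ in range(num_sources)}
    assert len(choices) == 1  # all sources agree without synchronising
```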
File-based data flow in the CMS Filter Farm. Andre, J-M; Andronidis, A; Bawej, T; et al.
Journal of Physics: Conference Series, 12/2015, Volume 664, Issue 8
Journal Article
Peer-reviewed
Open access
During the LHC Long Shutdown 1, the CMS Data Acquisition system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and prepare the ground for future upgrades of the detector front-ends. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and handle output data transport and storage has also been redesigned to be completely file-based. This approach provides additional decoupling between the HLT algorithms and the input and output data flow. All the metadata needed for bookkeeping of the data flow and the HLT process lifetimes are also generated in the form of small "documents" using the JSON encoding, either by services in the flow of the HLT execution (for rates etc.) or by watchdog processes. These "files" can remain memory-resident or be written to disk if they are to be used in another part of the system (e.g. for aggregation of output data). We discuss how this redesign improves the robustness and flexibility of the CMS DAQ and the performance of the system currently being commissioned for LHC Run 2.
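As an illustration of the file-based bookkeeping, a small Python sketch that writes one such JSON "document". The field names are assumptions for illustration, not the actual CMS schema; the write-then-rename pattern is one common way to ensure a watchdog or aggregator never reads a half-written document.

```python
import json
import os
import tempfile

# Hypothetical bookkeeping fields (illustrative, not the CMS schema).
doc = {
    "run": 123456,
    "lumisection": 42,
    "events_processed": 20000,
    "events_accepted": 1375,
    "output_file": "run123456_ls0042_streamA.dat",
}

# Write atomically: dump to a temporary name, then rename. Readers
# therefore see either no document or a complete one, never a partial write.
path = os.path.join(tempfile.gettempdir(), "run123456_ls0042.jsn")
tmp = path + ".tmp"
with open(tmp, "w") as f:
    json.dump(doc, f)
os.replace(tmp, path)  # atomic on POSIX filesystems

with open(path) as f:
    assert json.load(f)["events_accepted"] == 1375
```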
The central trigger control system of the CMS experiment at CERN. Jeitler, M.; Taurok, A.; Bergauer, H.; et al.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 05/2010, Volume 617, Issue 1
Journal Article
Peer-reviewed
Open access
The Level-1 (L1) Trigger of the CMS experiment uses custom-made, fast electronics, while the experiment's High-Level Trigger is implemented in computer farms. The Central Trigger Control System described in this poster receives physics triggers from the Global Trigger Logic unit, collects information from the various subdetector systems to check whether they are ready to accept triggers, reduces excessive trigger rates according to preset rules, and finally distributes the trigger ("Level-1 Accept") together with timing signals to the subdetectors over the experiment's "Trigger, Timing and Control" (TTC) network. The complete functionality of the Central Trigger Control System is implemented in one 9U VME module plus several ancillary boards for input and output functions. The system has been used successfully during CMS test runs with cosmic rays and beam.
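The rate-reduction step can be pictured as a set of sliding-window rules, each reading "at most n accepts within any window of w bunch crossings". The Python sketch below is illustrative only; the rule values and class names are assumptions, not the experiment's actual settings or implementation.

```python
from collections import deque

class TriggerRules:
    """Veto Level-1 Accepts that would violate preset rate rules.

    Each rule is a pair (n, w): at most n accepts within any
    w consecutive bunch crossings. Illustrative sketch only.
    """

    def __init__(self, rules):
        # Keep, per rule, a queue of the bunch crossings it recently accepted.
        self.rules = [(n, w, deque()) for n, w in rules]

    def accept(self, bx: int) -> bool:
        for n, w, recent in self.rules:
            # Forget accepts that have fallen out of this rule's window.
            while recent and bx - recent[0] >= w:
                recent.popleft()
            if len(recent) >= n:
                return False  # rule would be violated: veto this trigger
        for _, _, recent in self.rules:
            recent.append(bx)
        return True

# Example rule: at most 1 accept per 3 bunch crossings.
rules = TriggerRules([(1, 3)])
assert [rules.accept(bx) for bx in (0, 1, 2, 3)] == [True, False, False, True]
```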
The data acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the High-Level Trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregate throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given, and we discuss the performance and operational experience from the first months of LHC physics data taking.
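The assembly step can be sketched as follows, assuming hypothetical names and a toy source count: fragments arrive interleaved from the sources, and an event is complete once every source has contributed one fragment. The real event builder performs this over a switched network at the rates quoted above; this is only the bookkeeping logic.

```python
from collections import defaultdict

NUM_SOURCES = 4  # the real system reads out roughly 500 sources

def build_events(fragments):
    """Assemble (event_id, source_id, payload) tuples into full events.

    Fragments may be interleaved across events. An event is yielded
    as soon as one fragment from every source has arrived.
    """
    pending = defaultdict(dict)  # event_id -> {source_id: payload}
    for event_id, source_id, payload in fragments:
        pending[event_id][source_id] = payload
        if len(pending[event_id]) == NUM_SOURCES:
            yield event_id, pending.pop(event_id)

# Interleave fragments of two events across all sources.
frags = [(e, s, b"x") for s in range(NUM_SOURCES) for e in range(2)]
events = dict(build_events(frags))
assert sorted(events) == [0, 1]
assert len(events[0]) == NUM_SOURCES
```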
The CMS data acquisition system software. Bauer, G; Behrens, U; Biery, K; et al.
Journal of Physics: Conference Series, 04/2010, Volume 219, Issue 2
Journal Article
Peer-reviewed
Open access
The CMS data acquisition system consists of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment nevertheless imposes a number of different requirements, among the most important of which are high efficiency and configuration flexibility. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.