The second phase of the LHC, the High-Luminosity LHC, is scheduled to start in 2029, after a shutdown during which the beam intensity and focusing will be significantly upgraded. For this HL-LHC era, the CMS detector will also receive an extensive upgrade, primarily to maintain its physics performance at increasing pileup. The Phase-2 CMS Level-1 trigger rate will increase to 750 kHz, for an estimated data rate in excess of 50 Tbit/s. The Phase-2 CMS off-detector electronics will be based on the ATCA standard, with back-end boards receiving the detector data from the on-detector front-ends via custom, radiation-tolerant optical links. The CMS Phase-2 data acquisition design tightens the integration between trigger control and data flow, extending the synchronous regime of the DAQ system. At the core of the design is the DAQ and Timing Hub, a custom ATCA hub card forming the bridge between the different, detector-specific control and readout electronics and the common timing, trigger, and control systems. The overall synchronisation and data flow of the experiment is handled by the Trigger and Timing Control and Distribution System. For increased flexibility during commissioning and calibration runs, the design of the Phase-2 trigger and timing distribution system breaks with the traditional distribution tree in favour of a configurable network connecting multiple independent control units to all off-detector endpoints. To reduce the number of custom hardware designs required, the DAQ hardware is designed so that it can also be used to implement the Trigger and Timing Control and Distribution System.
The High Luminosity LHC (HL-LHC) will start operating in 2027 after the third Long Shutdown (LS3), and is designed to provide an ultimate instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, at the price of extreme pileup of up to 200 interactions per crossing. The number of overlapping interactions in HL-LHC collisions, their density, and the resulting intense radiation environment warrant an almost complete upgrade of the CMS detector. The upgraded CMS detector will be read out by approximately fifty thousand high-speed front-end optical links at an unprecedented data rate of up to 80 Tb/s, for an average expected total event size of approximately 8–10 MB. Following the present established design, the CMS trigger and data acquisition system will continue to feature two trigger levels: a single synchronous hardware-based Level-1 Trigger (L1), consisting of custom electronic boards and operating on dedicated data streams, and a second level, the High Level Trigger (HLT), using software algorithms running asynchronously on standard processors and making use of the full detector data to select events for offline storage and analysis. The upgraded CMS data acquisition system will collect data fragments for Level-1 accepted events from the detector back-end modules at a rate of up to 750 kHz, aggregate the fragments corresponding to individual Level-1 accepts into events, and distribute them to the HLT processors, where they will be filtered further. Events accepted by the HLT will be stored permanently at a rate of up to 7.5 kHz. This paper describes the baseline design of the DAQ and HLT systems for Phase-2 of CMS.
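As a back-of-the-envelope consistency check on the figures quoted above (an illustrative calculation, not part of the paper), the event-builder throughput implied by a 750 kHz Level-1 accept rate and an 8–10 MB average event size can be computed directly:

```python
# Illustrative check: event-builder throughput implied by the quoted
# Level-1 rate and event size (numbers from the abstract above).

L1_RATE_HZ = 750e3           # Level-1 accept rate: 750 kHz
EVENT_SIZES_MB = (8, 10)     # expected average event size range: 8-10 MB

for size_mb in EVENT_SIZES_MB:
    tbytes_per_s = L1_RATE_HZ * size_mb * 1e6 / 1e12  # TB/s
    tbits_per_s = tbytes_per_s * 8                    # Tb/s
    print(f"{size_mb} MB events -> {tbytes_per_s:.1f} TB/s = {tbits_per_s:.0f} Tb/s")
```

This gives 48–60 Tb/s through the event builder, consistent with the "in excess of 50 Tbit/s" estimate quoted in the first abstract.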
40 MHz Level-1 Trigger Scouting for CMS Badaro, Gilbert; Behrens, Ulf; Branson, James ...
EPJ Web of Conferences,
01/2020, Volume:
245
Journal Article, Conference Proceeding
Peer-reviewed
Open access
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track finder processor providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the Level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control. In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data. This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations. It has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the Level-1 accept budget, or with requirements which are orthogonal to “mainstream” physics, such as long-lived particles. We discuss the requirements and possible architecture of a 40 MHz scouting system, as well as some of the physics potential, and results from a demonstrator operated at the end of Run-2 using the Global Muon Trigger data from CMS. Plans for further demonstrators envisaged for Run-3 are also discussed.
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at the LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of the subsystems, an expert system, the DAQ Expert, has been developed. It aims at improving the data-taking efficiency, reducing human error in the operations and minimising the on-call expert demand. Introduced in the beginning of 2017, it assists the shift crew and the system experts in recovering from operational faults, streamlining the post mortem analysis and, at the end of Run 2, triggering fully automatic recovery without human intervention. DAQ Expert analyses the real-time monitoring data originating from the DAQ components and the high-level trigger, updated every few seconds. It pinpoints data flow problems and recovers them automatically or after operator approval. We analyse the CMS downtime in the 2018 run, focusing on what was improved with the introduction of automated recovery, and present the challenges and design of encoding the expert knowledge into automated recovery jobs. Furthermore, we demonstrate the web-based, ReactJS interfaces that ensure an effective cooperation between the human operators in the control room and the automated recovery system. We report on the operational experience with automated recovery.
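The abstract does not give implementation details, but the pattern it describes (small, independent logic modules analysing a monitoring snapshot and proposing a recovery) can be sketched roughly as follows. All class names, field names, and the example condition are hypothetical; the real DAQ Expert modules are written in Java and operate on far richer monitoring data.

```python
# Minimal sketch of the "logic module" expert-system pattern: each module
# checks one failure condition against a monitoring snapshot and, if it
# fires, proposes a recovery action. Everything here is hypothetical.

class LogicModule:
    name = "base"

    def matches(self, snapshot: dict) -> bool:
        raise NotImplementedError

    def recovery(self) -> str:
        raise NotImplementedError

class BackpressureStuck(LogicModule):
    """Hypothetical module: the event rate is zero while a front-end
    reports backpressure, suggesting a stuck front-end."""
    name = "backpressure-stuck"

    def matches(self, snapshot):
        return (snapshot.get("event_rate_hz", 0) == 0
                and snapshot.get("fed_backpressure", False))

    def recovery(self):
        return "Resync the stuck front-end, then resume the run."

def analyse(snapshot, modules):
    # Run all independent modules; collect (name, suggestion) for matches.
    return [(m.name, m.recovery()) for m in modules if m.matches(snapshot)]

modules = [BackpressureStuck()]
print(analyse({"event_rate_hz": 0, "fed_backpressure": True}, modules))
```

The point of the pattern is that each failure condition lives in its own small module, so expert knowledge can be added incrementally without touching a central decision procedure.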
A 40 MHz Level-1 trigger scouting system for the CMS Phase-2 upgrade Ardino, Rocco; Deldicque, Christian; Dobson, Marc ...
Nuclear instruments & methods in physics research. Section A, Accelerators, spectrometers, detectors and associated equipment,
02/2023, Volume:
1047
Journal Article
Peer-reviewed
Open access
The CMS Phase-2 upgrade for the HL-LHC aims at preserving and expanding the current physics capability of the experiment under extreme pileup conditions. A new tracking system incorporates a track finder processor, providing tracks to the Level-1 (L1) trigger. A new high-granularity calorimeter provides fine-grained energy deposition information in the endcap region. New front-end and back-end electronics feed the L1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded L1 will be based primarily on the Xilinx Ultrascale Plus series of FPGAs, capable of sophisticated feature searches with resolution often similar to the offline reconstruction. The L1 Data Scouting system (L1DS) will capture L1 intermediate data produced by the trigger processors at the beam-crossing rate of 40 MHz, and carry out online analyses based on these limited-resolution data. The L1DS will provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements, and, in some cases, calibrations. It also has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the L1 trigger accept budget or with requirements that are orthogonal to “mainstream” physics. The requirements and architecture of the L1DS system are presented, as well as some of the potential physics opportunities under study. The first results from the assembly and commissioning of a demonstrator currently being installed for LHC Run-3 are also presented. The demonstrator collects data from the Global Muon Trigger, the Layer-2 Calorimeter Trigger, the Barrel Muon Track Finder, and the Global Trigger systems of the current CMS L1.
This demonstrator, as a data acquisition (DAQ) system operating at the LHC bunch-crossing rate, faces many of the challenges of the Phase-2 system, albeit with scaled-down connectivity, reduced data throughput and physics capabilities, providing a testing ground for new techniques of online data reduction and processing.
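One kind of online data reduction such a scouting system must perform can be illustrated with a simple zero-suppression step: most 40 MHz bunch crossings carry no trigger objects, so dropping empty crossings and empty objects shrinks the stream dramatically. The record layout and field names below are entirely schematic, not the actual L1DS format.

```python
# Schematic illustration of online data reduction for a 40 MHz scouting
# stream: zero-suppress empty trigger objects and empty bunch crossings.
# Record layout and field names are hypothetical.

def zero_suppress(stream):
    """Keep only (bunch_crossing, objects) records with at least one
    non-empty object; drop empty crossings entirely."""
    reduced = []
    for bx, objects in stream:
        kept = [o for o in objects if o["pt"] > 0]
        if kept:
            reduced.append((bx, kept))
    return reduced

# A toy stream: most crossings carry no trigger objects at all.
stream = [
    (0, []),
    (1, [{"pt": 25.0, "eta": 0.4}]),
    (2, []),
    (3, [{"pt": 0, "eta": 1.1}, {"pt": 12.5, "eta": -0.7}]),
]
print(zero_suppress(stream))
# Crossings 0 and 2 are dropped; crossing 3 keeps one of its two objects.
```

In a real system this kind of suppression runs in firmware or on receiving nodes before any higher-level analysis, since keeping the raw 40 MHz stream would be prohibitively expensive.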
Better-than-high-definition-resolution video content (such as 4K) is already being used in some areas, such as scientific visualization and film post-production. Effective collaboration in these areas requires real-time transfers of such video content. Two of the main technical issues are high data volume and time synchronization when transferring over an asynchronous network such as the current Internet.
In this article, we discuss design options for a real-time long-distance uncompressed 4K video transfer system. We present our practical experience with such transfers and show how they can be used to increase productivity in film post-production, as an application example.
► Low-latency high-definition video streaming enables remote team collaboration. ► We present architecture for transfer of synchronous video data over packet networks. ► We present options for receiver rendering rate adaptation. ► We present practical experience in a use case in film post-production.
The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of failure of data-taking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators heavily relied on the support of on-call experts, also outside working hours. Wrong decisions made under time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, the DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application analyzing frequently updating monitoring data from all DAQ components and identifying problems based on expert knowledge expressed in small, independent logic modules written in Java. Its results are presented in real time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and steps to recover.
The New CMS DAQ System for Run-2 of the LHC Bawej, Tomasz; Behrens, Ulf; Branson, James ...
IEEE transactions on nuclear science,
06/2015, Volume:
62, Issue:
3
Journal Article
Peer-reviewed
Open access
The data acquisition (DAQ) system of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high level trigger (HLT) farm. The HLT farm selects interesting events for storage and offline analysis at a rate of around 1 kHz. The DAQ system has been redesigned during the accelerator shutdown in 2013/14. The motivation is twofold: firstly, the current compute nodes, networking, and storage infrastructure will have reached the end of their lifetime by the time the LHC restarts; secondly, in order to handle higher LHC luminosities and event pileup, a number of sub-detectors will be upgraded, increasing the number of readout channels and replacing the off-detector readout electronics with a μTCA implementation. The new DAQ architecture will take advantage of the latest developments in the computing industry. For data concentration, 10/40 Gb/s Ethernet technologies will be used, as well as an implementation of a reduced TCP/IP in FPGA for reliable transport between custom electronics and commercial computing hardware. A Clos network based on 56 Gb/s FDR Infiniband has been chosen for the event builder, with a throughput of ~4 Tb/s. The HLT processing is entirely file based. This allows the DAQ and HLT systems to be independent, and to use the HLT software in the same way as for the offline processing. The fully built events are sent to the HLT with 1/10/40 Gb/s Ethernet via network file systems. Hierarchical collection of HLT-accepted events and monitoring metadata is stored into a global file system. This paper presents the requirements, technical choices, and performance of the new system.
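The file-based decoupling of DAQ and HLT mentioned above can be sketched as a simple handoff protocol: the builder writes the event data, then atomically publishes a small descriptor file, and the filter side treats descriptors as units of work. The directory layout, file naming, and descriptor fields below are hypothetical simplifications of what a real file-based filter farm uses.

```python
# Sketch of a file-based builder/filter handoff in the spirit of the
# file-based HLT described above. Layout, naming, and descriptor fields
# are hypothetical; the real system uses network file systems and
# richer per-lumisection metadata.

import json
import os
import tempfile

def write_event_file(ramdisk, run, lumisection, payload: bytes):
    """Builder side: write the data file first, then atomically publish
    a .jsn descriptor, so a reader never sees a half-written file."""
    data_name = f"run{run}_ls{lumisection:04d}.raw"
    with open(os.path.join(ramdisk, data_name), "wb") as f:
        f.write(payload)
    desc = {"data": data_name, "events": 1, "size": len(payload)}
    fd, tmp = tempfile.mkstemp(dir=ramdisk, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(desc, f)
    # os.replace is atomic on a single filesystem: the descriptor appears
    # fully formed or not at all.
    os.replace(tmp, os.path.join(ramdisk, data_name.replace(".raw", ".jsn")))

def pending_files(ramdisk):
    """Filter side: descriptors, not raw files, are the unit of work."""
    return sorted(n for n in os.listdir(ramdisk) if n.endswith(".jsn"))

ramdisk = tempfile.mkdtemp()
write_event_file(ramdisk, 100, 1, b"\x00" * 64)
print(pending_files(ramdisk))
```

The design choice this illustrates is the one the abstract names: because the interface between DAQ and HLT is just files, the HLT software can run unchanged in online and offline contexts, and either side can be restarted independently.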