The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored so that any deviation from the specified behaviour can be corrected promptly. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open-source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to reduce development, maintenance and support effort. Elasticsearch provides failover and data-redundancy capabilities as well as a programming-language-independent JSON-over-HTTP interface. Its ability to scale horizontally matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, this monitoring solution currently runs in parallel with an existing in-house-developed DAQ monitoring system.
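As a minimal sketch of the JSON-over-HTTP interface mentioned above, the snippet below pushes one monitoring sample into an Elasticsearch index using only the Python standard library. The endpoint URL, index name and document fields are illustrative assumptions, not the actual CMS DAQ schema.

```python
# Sketch: publish one monitoring sample to Elasticsearch over its
# JSON-over-HTTP interface. Cluster URL, index name and document fields
# are illustrative, not the actual CMS DAQ schema.
import json
import urllib.request
from datetime import datetime, timezone

ES_URL = "http://localhost:9200"   # assumed Elasticsearch endpoint
INDEX = "daq-monitoring"           # hypothetical index name

sample = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "source": "readout-unit-042",  # hypothetical data source
    "metric": "event_rate_hz",
    "value": 100000.0,
}

req = urllib.request.Request(
    f"{ES_URL}/{INDEX}/_doc",      # index a single document
    data=json.dumps(sample).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

Because every source speaks plain HTTP and JSON, the same pattern works from any programming language used in the DAQ applications, which is the portability argument made above.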
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEP-SPEC06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites combined. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and can therefore access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts away the different types of machines and their underlying segmented networks. During LHC technical stop periods, the HLT cloud is set to a static mode of operation in which it acts like any other grid facility. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, occurring once or more per day. To this end, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
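The control logic of following beam states can be pictured as a simple polling loop. The sketch below shows the idea only; all function names are hypothetical placeholders, and the real system integrates with OpenStack and CMS run control rather than these stubs.

```python
# Sketch of the beam-state-driven control loop: hibernate HLT-cloud VMs
# when the LHC delivers stable beams (DAQ has priority) and resume them
# during inter-fill gaps. All names are hypothetical placeholders.
import time

def get_lhc_beam_state() -> str:
    """Placeholder: would query the LHC state, e.g. 'STABLE_BEAMS' or 'NO_BEAM'."""
    raise NotImplementedError

def hibernate_vms() -> None:
    """Placeholder: suspend cloud VMs so HLT nodes are free for data taking."""
    raise NotImplementedError

def resume_vms() -> None:
    """Placeholder: wake VMs to run grid jobs between fills."""
    raise NotImplementedError

def control_loop(poll_seconds: int = 60) -> None:
    cloud_active = False
    while True:
        state = get_lhc_beam_state()
        if state == "STABLE_BEAMS" and cloud_active:
            hibernate_vms()      # data taking must never be disturbed
            cloud_active = False
        elif state == "NO_BEAM" and not cloud_active:
            resume_vms()         # reuse idle HLT capacity for the WLCG
            cloud_active = True
        time.sleep(poll_seconds)
```

Hibernation (rather than destruction) of the VMs is what makes the inter-fill periods usable despite their unscheduled and unpredictable length: resumed jobs continue where they stopped.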
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on \(O(1000)\) computers. The aging event-building hardware will be replaced during Long Shutdown 2 of the LHC, taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may make it possible to combine the currently separate functionality of the readout and builder units into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and the corresponding changes to the software implementation will be discussed.
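The core event-building step can be illustrated with a toy data structure: fragments arriving from many sources are keyed by event number, and an event is complete once every source has contributed. This is a conceptual sketch only; the production system is C++ over fast interconnects, and the names below are illustrative.

```python
# Toy sketch of event building: collect one fragment per readout source,
# keyed by event number; an event is complete when all sources have
# contributed. Names and counts are illustrative.
from collections import defaultdict

N_SOURCES = 750  # approximate number of fragment sources quoted above

class EventBuilder:
    def __init__(self, n_sources: int = N_SOURCES):
        self.n_sources = n_sources
        self.pending: dict[int, dict[int, bytes]] = defaultdict(dict)

    def add_fragment(self, event_id: int, source_id: int, payload: bytes):
        """Store one fragment; return the full event once all have arrived."""
        self.pending[event_id][source_id] = payload
        if len(self.pending[event_id]) == self.n_sources:
            fragments = self.pending.pop(event_id)
            # Concatenate in source order to form the complete event record.
            return b"".join(fragments[s] for s in sorted(fragments))
        return None
```

At 100 kHz and 2 MB per event this bookkeeping must sustain 200 GB/s in aggregate, which is why the fragment routing, not the per-event logic, dominates the design of the data networks.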
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores, and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself, and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and advise the shift crew on the quickest way to recover. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room, and are archived for post-mortem analysis in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
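The logic-module pattern described above can be sketched as follows. The production DAQ-Expert modules are written in Java; this is a Python illustration of the idea only, and the monitoring field names and threshold are hypothetical.

```python
# Python sketch of the DAQ-Expert logic-module pattern (production modules
# are in Java). Each module inspects a monitoring snapshot and, if its
# condition is satisfied, returns advice for the shift crew.
# Field names and the 5% threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Illustrative subset of one monitoring snapshot."""
    l1_rate_hz: float
    dead_time_fraction: float

class HighDeadTimeModule:
    name = "HighDeadTime"

    def satisfied(self, s: Snapshot) -> bool:
        # Fire when dead time is high although the L1 rate looks normal.
        return s.dead_time_fraction > 0.05 and s.l1_rate_hz > 0

    def advice(self) -> str:
        return ("Dead time above 5% at nominal L1 rate; check sub-detector "
                "back-pressure before restarting the run.")

# Usage: evaluate all registered modules against the latest snapshot.
modules = [HighDeadTimeModule()]
snap = Snapshot(l1_rate_hz=100e3, dead_time_fraction=0.08)
for m in modules:
    if m.satisfied(snap):
        print(f"[{m.name}] {m.advice()}")
```

Encapsulating each diagnosis as an independent module is what lets the expert knowledge grow incrementally: a new failure mode becomes a new module without touching the reasoning core.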
We present a search for large extra dimensions and dark matter pair production using events with a photon and missing transverse energy in pp collisions at \(\sqrt{s} = 8\) TeV. This search is done with the data taken by the CMS experiment at the LHC, corresponding to an integrated luminosity of 19.6 fb\(^{-1}\). We find no deviations with respect to the standard model expectation and improve the current limits on several models.
The search for dark matter is one of the main science drivers of the particle and astroparticle physics communities. Determining the nature of dark matter will require a broad approach, with a range of experiments pursuing different experimental hypotheses. Within this search program, collider experiments provide insights on dark matter which are complementary to direct/indirect detection experiments and to astrophysical evidence. To compare results from a wide variety of experiments, a common theoretical framework is required. The ATLAS and CMS experiments have adopted a set of simplified models which introduce two new particles, a dark matter particle and a mediator, and whose interaction strengths are set by the couplings of the mediator. So far, the presentation of LHC and future hadron collider results has focused on four benchmark scenarios with specific coupling values within these simplified models. In this work, we describe ways to extend those four benchmark scenarios to arbitrary couplings, and release the corresponding code for use in further studies. This will allow for more straightforward comparison of collider searches to accelerator experiments that are sensitive to smaller couplings, such as those for the US Community Study on the Future of Particle Physics (Snowmass 2021), and will give a more complete picture of the coupling dependence of dark matter collider searches when compared to direct and indirect detection searches. By using semi-analytical methods to rescale collider limits, we drastically reduce the computing resources needed relative to traditional approaches based on the generation of additional simulated signal samples.
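The rescaling idea can be sketched with a schematic narrow-width argument: the resonant signal cross section scales as \(\sigma \propto g_q^2\, g_{\rm DM}^2 / \Gamma_{\rm tot}\), so a limit obtained at benchmark couplings can be mapped to other coupling values without regenerating simulated samples. The width model below is a schematic assumption for illustration, not the full expressions used in this work.

```python
# Hedged sketch of semi-analytical coupling rescaling in the narrow-width
# approximation: sigma ∝ gq^2 * gdm^2 / Gamma_total. The width model is
# schematic; partial widths simply scale with the couplings squared.
def width(g_q: float, g_dm: float, k_q: float = 1.0, k_dm: float = 1.0) -> float:
    """Schematic total mediator width from quark and dark-matter channels."""
    return k_q * g_q**2 + k_dm * g_dm**2

def rescale_xsec(sigma_benchmark: float,
                 gq0: float, gdm0: float,
                 gq1: float, gdm1: float) -> float:
    """Map a benchmark signal cross section to new coupling values."""
    num = gq1**2 * gdm1**2 / width(gq1, gdm1)
    den = gq0**2 * gdm0**2 / width(gq0, gdm0)
    return sigma_benchmark * num / den

# Example: rescale from a common benchmark (gq=0.25, gdm=1.0) to a weaker
# quark coupling, in arbitrary cross-section units.
print(rescale_xsec(1.0, 0.25, 1.0, 0.1, 1.0))
```

Because the rescaling is a closed-form ratio, scanning thousands of coupling points costs essentially nothing compared with generating a simulated sample per point, which is the computing-resource argument made above.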
The CMS experiment has been designed with a two-level trigger system: the Level 1 (L1) Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS reconstruction and analysis software running on a computer farm. In order to achieve a good rate reduction with as little impact on the physics efficiency as possible, the algorithms used at the HLT are designed to follow as closely as possible those used in the offline reconstruction. Here, we present the algorithms used for the online reconstruction of electrons and photons (e/\(\gamma\)), both at L1 and at the HLT, their performance, and the planned improvements of these HLT objects.
Expanding the mass range and techniques by which we search for dark matter is an important part of the worldwide particle physics program. Accelerator-based searches for dark matter and dark sector particles are a uniquely compelling part of this program as a way to both create and detect dark matter in the laboratory and explore the dark sector by searching for mediators and excited dark matter particles. This paper focuses on developing the DarkQuest experimental concept and gives an outlook on related enhancements collectively referred to as LongQuest. DarkQuest is a proton fixed-target experiment with leading sensitivity to an array of visible dark sector signatures in the MeV-GeV mass range. Because it builds off of existing accelerator and detector infrastructure, it offers a powerful but modest-cost experimental initiative that can be realized on a short timescale.
In this work, we consider the case of a strongly coupled dark/hidden sector, which extends the Standard Model (SM) by adding an additional non-Abelian gauge group. These extensions generally contain matter fields, much like the SM quarks, and gauge fields similar to the SM gluons. We focus on the exploration of such sectors where the dark particles are produced at the LHC through a portal and undergo rapid hadronization within the dark sector before decaying back, at least in part and potentially with sizeable lifetimes, to SM particles, giving rise to a range of possibly spectacular signatures such as emerging or semi-visible jets. Other, non-QCD-like scenarios leading to soft unclustered energy patterns or glueballs are also discussed. After a review of the theory, existing benchmarks and constraints, this work addresses how to build consistent benchmarks from the underlying physical parameters and presents new developments for the PYTHIA Hidden Valley module, along with jet substructure studies. Finally, a series of improved search strategies is presented, in order to pave the way for a better exploration of dark showers at the LHC.
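For concreteness, a dark-shower setup with the PYTHIA Hidden Valley module might look like the snippet below, assuming the Pythia 8 Python bindings are available. The parameter values are placeholders for demonstration, not the benchmark points defined in this work.

```python
# Illustrative Pythia 8 Hidden Valley configuration (assumes the Pythia 8
# Python bindings, `import pythia8`). Values are demonstration placeholders,
# not the benchmarks of this work.
import pythia8

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")           # 13 TeV pp collisions
pythia.readString("HiddenValley:ffbar2Zv = on")   # s-channel portal production
pythia.readString("HiddenValley:Ngauge = 3")      # dark SU(3) gauge group
pythia.readString("HiddenValley:nFlav = 2")       # two dark-quark flavours
pythia.readString("HiddenValley:fragment = on")   # hadronization in the dark sector
pythia.readString("HiddenValley:FSR = on")        # dark-sector shower
pythia.readString("HiddenValley:alphaOrder = 1")  # running dark coupling
pythia.readString("HiddenValley:Lambda = 10.")    # dark confinement scale (GeV)
pythia.readString("HiddenValley:pTminFSR = 11.")  # shower cutoff ~1.1 * Lambda
pythia.readString("4900023:m0 = 1000.")           # Z' mediator mass (GeV)
pythia.readString("4900101:m0 = 10.")             # dark quark mass (GeV)
pythia.init()
for _ in range(10):
    pythia.next()                                 # generate a few events
```

Building benchmarks from the underlying physical parameters, as advocated above, amounts to choosing these generator settings consistently, e.g. tying the shower cutoff and dark-hadron masses to the confinement scale rather than varying them independently.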