The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during Long Shutdown 2 of the LHC, taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may make it possible to combine the currently separate functionality of the readout and builder units into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
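The gather step described above, collecting fragments from all sources and emitting a complete event once every fragment has arrived, can be sketched in a few lines of Python. This is purely illustrative: the class and field names are invented, and the real system assembles 2 MB events from about 750 sources at 100 kHz, not four sources in a dictionary.

```python
from collections import defaultdict

NUM_SOURCES = 4  # illustrative; the real event builder has ~750 sources


class EventBuilder:
    """Toy event builder: collects per-event fragments from all sources
    and emits a complete event once every source has contributed."""

    def __init__(self, num_sources):
        self.num_sources = num_sources
        self.pending = defaultdict(dict)  # event_id -> {source_id: payload}

    def add_fragment(self, event_id, source_id, payload):
        frags = self.pending[event_id]
        frags[source_id] = payload
        if len(frags) == self.num_sources:
            # All fragments present: assemble in source order and hand off.
            event = b"".join(frags[s] for s in sorted(frags))
            del self.pending[event_id]
            return event
        return None


eb = EventBuilder(NUM_SOURCES)
complete = None
for src in range(NUM_SOURCES):
    complete = eb.add_fragment(event_id=42, source_id=src, payload=bytes([src]))
assert complete == bytes(range(NUM_SOURCES))
```

In the actual system the completed event would be handed to an HLT process rather than returned to the caller, and fragment transport happens over the 100 Gb/s network fabric.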
The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate the timeliness of necessary corrections in case of any deviation from the specified behaviour. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open-source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to facilitate the development, maintenance and support efforts. Elasticsearch provides failover and data redundancy capabilities as well as a programming-language-independent JSON-over-HTTP interface. The possibility of horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, the above monitoring solution currently runs in parallel with an existing in-house developed DAQ monitoring system.
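The JSON-over-HTTP interface mentioned above is what makes the approach language independent: a monitoring sample is wrapped in a JSON document and POSTed to the cluster. The sketch below, using only the Python standard library, shows that shape; the host, index name and metric fields are invented for illustration, not taken from the CMS system.

```python
import json
from datetime import datetime, timezone
from urllib import request


def to_es_document(source, metrics, timestamp=None):
    """Wrap one monitoring sample in a JSON document for Elasticsearch."""
    ts = timestamp or datetime.now(timezone.utc)
    return {
        "@timestamp": ts.isoformat(),  # when the sample was taken
        "source": source,              # which application produced it
        **metrics,                     # the sampled values themselves
    }


def index_document(es_url, index, doc):
    """POST the document to Elasticsearch's JSON-over-HTTP document API."""
    req = request.Request(
        f"{es_url}/{index}/_doc",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)


doc = to_es_document("evb-ru-01", {"rate_hz": 100e3, "backpressure": 0.02})
# index_document("http://es-cluster:9200", "daq-monitoring", doc)  # needs a live cluster
```

Because every producer speaks plain HTTP, any application in the network can push its samples without linking against a client library, and the cluster redistributes the load across its nodes.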
The primary goal of the online cluster of the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) is to build event data from the detector and to select interesting collisions in the High Level Trigger (HLT) farm for offline storage. With more than 1500 nodes and a capacity of about 850 kHEPSpecInt06, the HLT machines represent a computing capacity similar to that of all the CMS Tier-1 Grid sites together. Moreover, the cluster is currently connected to the CERN IT datacenter via a dedicated 160 Gbps network connection and can hence access the remote EOS-based storage with high bandwidth. In the last few years, a cloud overlay based on OpenStack has been commissioned to use these resources for the WLCG when they are not needed for data taking. This online cloud facility was designed for parasitic use of the HLT, which must never interfere with its primary function as part of the DAQ system. It also abstracts from the different types of machines and their underlying segmented networks. During the LHC technical stop periods, the HLT cloud is set to a static mode of operation in which it acts like other grid facilities. The online cloud was also extended to make dynamic use of resources during the periods between LHC fills. These periods are a priori unscheduled and of undetermined length, typically several hours, once or more a day. For that, the cloud dynamically follows the LHC beam states and hibernates Virtual Machines (VMs) accordingly. Finally, this work presents the design and implementation of a mechanism to dynamically ramp up VMs when the DAQ load on the HLT decreases towards the end of a fill.
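The core of the "follow the LHC beam states" behaviour is a simple policy: when the machine heads towards collisions, hibernate the opportunistic VMs; between fills, resume them. A minimal sketch of such a policy follows. The mode names echo LHC nomenclature, but the grouping and the decision function are illustrative assumptions, not the actual CMS implementation.

```python
# Hypothetical grouping of LHC machine modes (illustrative, not the real policy).
RESUME_MODES = {"RAMP DOWN", "NO BEAM", "INJECTION"}       # between fills
SUSPEND_MODES = {"RAMP", "FLAT TOP", "SQUEEZE", "ADJUST", "STABLE BEAMS"}


def cloud_action(beam_mode, vms_running):
    """Decide what to do with the opportunistic cloud VMs on the HLT farm.

    Returns "hibernate" when data taking is imminent or ongoing,
    "resume" when the machine is between fills, and "no-op" otherwise.
    """
    if beam_mode in SUSPEND_MODES and vms_running:
        return "hibernate"
    if beam_mode in RESUME_MODES and not vms_running:
        return "resume"
    return "no-op"
```

Hibernating rather than destroying the VMs is what makes the several-hour, unscheduled inter-fill windows usable: the grid payloads survive suspension and continue where they left off at the next opportunity.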
The data acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at CERN reads out the detector at the level-1 trigger accept rate of 100 kHz, assembles events with a bandwidth of 200 GB/s, provides these events to the high-level trigger running on a farm of about 30k cores and records the accepted events. Comprising custom-built and cutting-edge commercial hardware and several thousand instances of software applications, the DAQ system is complex in itself and failures cannot be completely excluded. Moreover, problems in the readout of the detectors, in the first-level trigger system or in the high-level trigger may provoke anomalous behaviour of the DAQ system which sometimes cannot easily be differentiated from a problem in the DAQ system itself. In order to achieve high data-taking efficiency with operators from the entire collaboration and without relying too heavily on the on-call experts, an expert system, the DAQ-Expert, has been developed that can pinpoint the source of most failures and give advice to the shift crew on how to recover in the quickest way. The DAQ-Expert constantly analyzes monitoring data from the DAQ system and the high-level trigger by making use of logic modules written in Java that encapsulate the expert knowledge about potential operational problems. The results of the reasoning are presented to the operator in a web-based dashboard, may trigger sound alerts in the control room, and are archived for post-mortem analysis, presented in a web-based timeline browser. We present the design of the DAQ-Expert and report on the operational experience since 2017, when it was first put into production.
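The logic-module pattern described above can be sketched compactly: each module encodes one piece of expert knowledge as a condition over a monitoring snapshot plus a piece of advice. The real DAQ-Expert modules are written in Java; the Python below, with invented module and field names, only illustrates the structure of the idea.

```python
class LogicModule:
    """One unit of encapsulated expert knowledge (illustrative sketch)."""

    name = "base"

    def satisfied(self, snapshot):
        """Does this module's failure condition hold for the snapshot?"""
        raise NotImplementedError

    def advice(self):
        raise NotImplementedError


class BackpressureFromHLT(LogicModule):
    """Hypothetical module: event rate collapses while the HLT CPUs saturate."""

    name = "hlt-backpressure"

    def satisfied(self, snapshot):
        return (snapshot["daq_rate_hz"] < 0.5 * snapshot["l1_rate_hz"]
                and snapshot["hlt_cpu_usage"] > 0.95)

    def advice(self):
        return "HLT farm saturated: check the HLT menu or reduce the L1 rate."


def analyze(snapshot, modules):
    """Run every logic module on one monitoring snapshot, collect advice."""
    return [(m.name, m.advice()) for m in modules if m.satisfied(snapshot)]
```

Keeping each diagnosis in its own module is what lets the expert knowledge grow incrementally: new operational problems become new modules, without touching the reasoning loop or the dashboard that displays the results.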
The Compact Muon Solenoid (CMS) is one of the experiments at the CERN Large Hadron Collider (LHC). The CMS Online Monitoring system (OMS) is an upgrade and successor to the CMS Web-Based Monitoring (WBM) system, which is an essential tool for shift crew members, detector subsystem experts, operations coordinators, and those performing physics analyses. The CMS OMS is divided into aggregation and presentation layers. Communication between layers uses RESTful JSON:API compliant requests. The aggregation layer is responsible for collecting data from heterogeneous sources, storage of transformed and pre-calculated (aggregated) values and exposure of data via the RESTful API. The presentation layer displays detector information via a modern, user-friendly and customizable web interface. The CMS OMS user interface is composed of a set of cutting-edge software frameworks and tools to display non-event data to any authenticated CMS user worldwide. The web interface tree-like component structure comprises (top-down): workspaces, folders, pages, controllers and portlets. A clear hierarchy gives the required flexibility and control for content organization. Each bottom element instantiates a portlet and is a reusable component that displays a single aspect of data, such as a table, a plot or an article. Pages consist of multiple different portlets and can be customized at runtime by using a drag-and-drop technique. In this way a single page can easily include information from multiple online sources. Different pages give access to a summary of the current status of the experiment, as well as convenient access to historical data. This paper describes the CMS OMS architecture, core concepts and technologies of the presentation layer.
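The JSON:API standard used between the two layers prescribes a fixed envelope: each resource carries a `type`, an `id`, and its fields under `attributes`. A minimal sketch of how an aggregation-layer endpoint might wrap its rows follows; the resource type and fields are invented examples, not actual OMS endpoints.

```python
def jsonapi_document(resource_type, items):
    """Wrap rows from the aggregation layer in a JSON:API compliant body.

    Each item is a dict with an "id" key; remaining keys become attributes.
    """
    return {
        "data": [
            {
                "type": resource_type,
                "id": str(item["id"]),  # JSON:API requires string ids
                "attributes": {k: v for k, v in item.items() if k != "id"},
            }
            for item in items
        ]
    }


body = jsonapi_document("runs", [{"id": 355100, "fill": 7920, "duration_s": 3600}])
```

Because the envelope is standardized, any portlet in the presentation layer can consume any endpoint with the same generic client code, which is what makes the portlets reusable across pages.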
This paper outlines a software architecture where zero-copy operations are used comprehensively at every processing point from the application layer to the physical layer. The proposed architecture is being used during feasibility studies on advanced networking technologies for the CMS experiment at CERN. The design relies on a homogeneous peer-to-peer message-passing system, built around memory pool caches, which allows efficient handling with deterministic latency of messages of any size through the different software layers. In this scheme, portable distributed applications can be programmed to process input-to-output operations by mere pointer arithmetic and DMA operations only. Combined with the OpenFabrics protocol stack (OFED), the approach allows one to attain near wire-speed message transfer at the application level. The architecture supports full portability of user applications by encapsulating the protocol and network details into modular peer transport services, while a transparent replacement of the underlying protocol facilitates deployment of several network technologies such as Gigabit Ethernet, Myrinet, InfiniBand, etc. Therefore, this solution provides a protocol-independent communication framework and avoids potentially difficult couplings when the underlying communication infrastructure is changed. We demonstrate the feasibility of this approach by giving efficiency and performance measurements of the software in the context of the CMS distributed event-building studies.
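The memory-pool idea at the heart of the zero-copy design can be illustrated even in Python: a pool owns one large buffer, and "allocation" merely hands out views into it, so producers and consumers exchange references instead of copying payload bytes. This sketch (invented class, not the actual framework, which operates with DMA-capable buffers in C++) uses `memoryview` for the same effect.

```python
class MemoryPool:
    """Fixed-size buffer pool: alloc() hands out memoryview slices of one
    shared buffer, so messages cross software layers without byte copies."""

    def __init__(self, block_size, num_blocks):
        self.block_size = block_size
        self.buf = bytearray(block_size * num_blocks)
        self.free = list(range(num_blocks))  # indices of unused blocks

    def alloc(self):
        i = self.free.pop()
        view = memoryview(self.buf)[i * self.block_size:(i + 1) * self.block_size]
        return i, view

    def release(self, i):
        self.free.append(i)


pool = MemoryPool(block_size=4096, num_blocks=8)
idx, block = pool.alloc()
block[:5] = b"hello"       # producer writes into the pool in place
payload = block[:5]        # consumer sees the same bytes; no copy is made
assert payload.tobytes() == b"hello"
pool.release(idx)
```

In the real architecture the equivalent of `alloc`/`release` is what keeps latency deterministic: no allocation happens on the data path, and a network interface can DMA directly into or out of the pooled blocks.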
The second phase of the LHC, the High-Luminosity LHC, is scheduled to start in 2029, after a shutdown during which the beam intensity and focusing will be significantly upgraded. For this HL-LHC era, the CMS detector will also receive an extensive upgrade, primarily to maintain its physics performance at increasing pileup. The Phase-2 CMS Level-1 trigger rate will increase to 750 kHz, for an estimated data rate in excess of 50 Tbit/s. The Phase-2 CMS off-detector electronics will be based on the ATCA standard, with back-end boards receiving the detector data from the on-detector front-ends via custom, radiation-tolerant optical links. The CMS Phase-2 data acquisition design tightens the integration between trigger control and data flow, extending the synchronous regime of the DAQ system. At the core of the design is the DAQ and Timing Hub, a custom ATCA hub card forming the bridge between the different, detector-specific control and readout electronics and the common timing, trigger, and control systems. The overall synchronisation and data flow of the experiment is handled by the Trigger and Timing Control and Distribution System. For increased flexibility during commissioning and calibration runs, the design of the Phase-2 trigger and timing distribution system breaks with the traditional distribution tree, in favour of a configurable network connecting multiple independent control units to all off-detector endpoints. In order to reduce the number of custom hardware designs required, the DAQ hardware is designed such that it can also be used to implement the Trigger and Timing Control and Distribution System.
Interleukin-8 is a cytokine produced by mononuclear cells that is involved in polymorphonuclear neutrophil leukocyte (PMN) recruitment and activation. Several studies have previously demonstrated leukocyte activation during hypercholesterolemia, and 3-hydroxy-3-methylglutaryl coenzyme A (HMG-CoA) reductase inhibitors have been found to play a role in the prevention of atherothrombotic disease. The purpose of this study was to determine interleukin-8 (IL-8) mRNA expression and ex vivo production from peripheral blood mononuclear cells (PBMCs) and IL-8-dependent PMN activation of hypercholesterolemic (HC) patients with respect to normocholesterolemic (NC) subjects. Using Northern blot analysis, we found a four- and threefold increase in the amount of IL-8 transcript in PBMC from HC patients, in unstimulated and LPS-stimulated cultures, respectively. A specific immunoassay showed a correspondingly significant increase of IL-8 immunoreactivity in the conditioned medium of PBMC from HC subjects as compared with controls (unstimulated PBMC: 15±4 vs. 4.2±3 ng/ml, P<0.0001; LPS-stimulated PBMC: 65.3±8 vs. 36.6±9 ng/ml, P<0.0001). PMN of HC patients stimulated with IL-8 showed a reduced elastase release with respect to NC subjects before physiological granule release after f-Met-Leu-Phe (fMLP) treatment. These results indicate an upregulation of the IL-8 system in dyslipidemic patients and provide evidence for ongoing in vivo IL-8-dependent PMN activation during hypercholesterolemia.
The Online Monitoring System (OMS) at the Compact Muon Solenoid experiment (CMS) at CERN aggregates and integrates different sources of information into a central place and allows users to view, compare and correlate information. It displays real-time and historical information. The tool is heavily used by run coordinators, trigger experts and shift crews to ensure the quality and efficiency of data taking. It provides aggregated information for many use cases, including data certification. OMS is the successor of Web Based Monitoring (WBM), which was in use during Run 1 and Run 2 of the LHC. WBM started as a small tool and grew substantially over the years, so that maintenance became challenging. OMS was developed from scratch following several design ideas: to strictly separate the presentation layer from the data aggregation layer, to use a well-defined standard for the communication between presentation layer and aggregation layer, and to employ widely used frameworks from outside the HEP community. A report on the experience from the operation of OMS for the first year of data taking of Run 3 in 2022 is presented.