For the 2016 physics data runs, the L1 trigger system of the Compact Muon Solenoid (CMS) experiment underwent a major upgrade to cope with the increasing instantaneous luminosity of the CERN LHC whilst maintaining a high event selection efficiency for the CMS physics program. Most subsystem-specific trigger processor boards were replaced with powerful general-purpose processor boards, conforming to the MicroTCA standard, whose tasks are performed by firmware on a field-programmable gate array of the Xilinx Virtex-7 family. Furthermore, the muon trigger system moved from a subsystem-centered approach, in which each of the three muon detector systems provided muon candidates to the global muon trigger (GMT), to a region-based system, in which muon track finders (TFs) combine information from the subsystems to generate muon candidates in three detector regions that are then sent to the upgraded GMT. The upgraded GMT receives up to 108 muons from the processors of the muon TFs in the barrel, overlap, and endcap detector regions. The muons are sorted in two steps and duplicates are identified for removal. The first step treats muons from different processors of a TF within one detector region; muons from TFs in different detector regions are compared in the second step. An isolation variable, calculated using energy sums from the calorimeter trigger, is added to each of the best eight muons before they are sent to the upgraded global trigger (GT), where the final trigger decision is made. The upgraded GMT algorithm is implemented on a general-purpose processor board that uses optical links running at 10 Gb/s to receive the input data from the muon TFs and the calorimeter energy sums, and to send the selected muon candidates to the upgraded GT.
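As an illustration of the two-step sort-and-cancel flow described in this abstract, here is a minimal Python sketch. The candidate fields, the (η, φ)-proximity duplicate rule, and the per-stage candidate counts are assumptions made for illustration (only the final "best eight" is from the abstract); the actual algorithm runs as firmware on the Virtex-7 FPGA.

```python
from dataclasses import dataclass

@dataclass
class Muon:
    pt: float      # transverse momentum in GeV, the sort rank
    eta: float     # pseudorapidity
    phi: float     # azimuthal angle in rad
    quality: int   # track-finder quality code, used to break ties

def is_duplicate(a: Muon, b: Muon, deta: float = 0.05, dphi: float = 0.05) -> bool:
    # Illustrative proximity rule: two candidates this close in (eta, phi)
    # are assumed to be the same physical muon seen twice.
    return abs(a.eta - b.eta) < deta and abs(a.phi - b.phi) < dphi

def sort_and_cancel(muons: list[Muon], keep: int) -> list[Muon]:
    # Rank by pT (quality breaks ties), then drop lower-ranked duplicates.
    selected: list[Muon] = []
    for m in sorted(muons, key=lambda m: (m.pt, m.quality), reverse=True):
        if not any(is_duplicate(m, s) for s in selected):
            selected.append(m)
        if len(selected) == keep:
            break
    return selected

def ugmt_select(barrel: list[Muon], overlap: list[Muon], endcap: list[Muon]) -> list[Muon]:
    # Step 1: sort and cancel within each detector region's track finder.
    regional = [sort_and_cancel(r, keep=8) for r in (barrel, overlap, endcap)]
    # Step 2: compare muons across regions and keep the best eight overall.
    return sort_and_cancel([m for r in regional for m in r], keep=8)
```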
Upgrade of the CMS muon trigger system in the barrel region Rabady, Dinyar; Ero, Janos; Flouris, Giannis ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment,
02/2017, Volume: 845
Journal Article
Peer-reviewed
Open access
To maintain the excellent performance shown during the LHC's Run-1, the Level-1 Trigger of the Compact Muon Solenoid experiment underwent a significant upgrade. One part of this upgrade is the re-organization of the muon trigger path from a subsystem-centric view, in which hits in the drift tubes (DT), the cathode strip chambers (CSC), and the resistive plate chambers (RPC) were treated separately in dedicated track-finding systems, to one in which the complementary detector systems for a given region (barrel, overlap, and endcap) are merged at the track-finding level. This fundamental restructuring of the muon trigger system required the development of a system to receive track candidates from the track-finding layer, remove potential duplicate tracks, and forward the best candidates to the global decision layer.
An overview will be given of the new track-finder system for the barrel region, the Barrel Muon Track Finder (BMTF), as well as the cancel-out and sorting layer: the upgraded Global Muon Trigger (μGMT). Both the BMTF and μGMT have been implemented in a Xilinx Virtex-7 card utilizing the MicroTCA architecture. While the BMTF improves on the proven and well-tested algorithms used in the Drift Tube Track Finder during Run-1, the μGMT is an almost complete re-development due to the re-organization of the underlying systems from track finders for a specific detector to regional track finders covering a given area of the whole detector. Additionally, the μGMT calculates a muon's isolation using energy information received from the calorimeter trigger. This information is added to the muon objects forwarded to the global decision layer, the so-called Global Trigger.
•Presented upgraded Global Muon Trigger and Barrel Muon Track Finder systems.
•Upgraded system moves from a sub-detector-centric view to a geometric view to improve trigger performance.
•Common hardware improves maintainability and increases development speed.
•Use of isolation in the Level-1 system provides an opportunity for improved background rejection.
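A rough sketch of how an isolation flag based on calorimeter energy sums might be formed, per the abstract above: sum the deposits in a small window around the muon's position and compare with a threshold. The 3×3 window, the flat threshold, and the 72-bin φ granularity are assumptions for illustration, not the μGMT's actual firmware logic.

```python
def isolation_bit(muon_ieta: int, muon_iphi: int,
                  energy_sums: dict[tuple[int, int], float],
                  threshold: float, n_phi: int = 72) -> int:
    """Sum calorimeter energy deposits in a 3x3 window of trigger cells
    around the muon and flag the muon as isolated if the sum is small."""
    total = 0.0
    for deta in (-1, 0, 1):
        for dphi in (-1, 0, 1):
            cell = (muon_ieta + deta, (muon_iphi + dphi) % n_phi)  # phi wraps around
            total += energy_sums.get(cell, 0.0)                    # missing cells count as 0
    return int(total < threshold)  # 1 = isolated

# Usage: a single 5 GeV deposit right on top of the muon fails a 3 GeV cut.
print(isolation_bit(10, 20, {(10, 20): 5.0}, threshold=3.0))  # 0
```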
During Run-1 of the LHC, many operational procedures were automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors, such as errors caused by single-event upsets, may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers, or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to recover from these automatically. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail, including first operational experience.
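The indicator-driven, streamlined sequence described here could look roughly like the following Python sketch. The Subsystem class, the action names, and the recover-and-retry policy are all invented for illustration; the real assistant is integrated in the CMS Run Control system.

```python
class Subsystem:
    """Stand-in for a sub-system node; the real indicators live in the
    Run Control GUI next to the corresponding command buttons."""
    def __init__(self, name: str, pending: set[str]):
        self.name, self.pending = name, set(pending)

    def needs(self, action: str) -> bool:
        return action in self.pending           # the "indicator" is lit

    def perform(self, action: str) -> None:
        print(f"{self.name}: {action}")
        self.pending.discard(action)

    def recover(self) -> None:
        print(f"{self.name}: automatic recovery")

def one_click_start(subsystems: list[Subsystem],
                    actions=("reconfigure", "resync", "start")) -> None:
    # Walk the ordered action list; apply each action only where its
    # indicator is lit, attempting one recovery-and-retry on failure.
    for action in actions:
        for sub in subsystems:
            if sub.needs(action):
                try:
                    sub.perform(action)
                except RuntimeError:
                    sub.recover()
                    sub.perform(action)

one_click_start([Subsystem("tracker", {"reconfigure", "start"}),
                 Subsystem("muon DT", {"start"})])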
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10³⁴ cm⁻²s⁻¹ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for the phase-2, taking into account the projected evolution of high-speed network fabrics for event building and distribution, and the anticipated performance of general-purpose CPUs. Implications on hardware and infrastructure requirements for the DAQ "data center" are analysed. Emerging technologies for data reduction are considered. Possible novel approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.
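A quick back-of-the-envelope check of the quoted figures; only the 50 Tb/s, 750 kHz, and 10 kHz numbers come from the abstract, the derived quantities follow by arithmetic:

```python
readout_bps = 50e12    # 50 Tb/s detector readout
l1_rate_hz  = 750e3    # 750 kHz Level-1 accept rate
storage_hz  = 10e3     # 10 kHz stored permanently after the HLT

# Average event size implied by readout bandwidth and event rate.
event_size_bytes = readout_bps / 8 / l1_rate_hz
# Bandwidth needed to store the selected events.
storage_bps = event_size_bytes * storage_hz * 8

print(f"average event size : {event_size_bytes / 1e6:.1f} MB")   # ~8.3 MB
print(f"storage bandwidth  : {storage_bps / 1e9:.0f} Gb/s")      # ~667 Gb/s
```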
40 MHz Level-1 Trigger Scouting for CMS Badaro, Gilbert; Behrens, Ulf; Branson, James ...
EPJ Web of Conferences,
01/2020, Volume: 245
Journal Article, Conference Proceeding
Peer-reviewed
Open access
The CMS experiment will be upgraded for operation at the High-Luminosity LHC to maintain and extend its physics performance under extreme pileup conditions. Upgrades will include an entirely new tracking system, supplemented by a track finder processor providing tracks at Level-1, as well as a high-granularity calorimeter in the endcap region. New front-end and back-end electronics will also provide the Level-1 trigger with high-resolution information from the barrel calorimeter and the muon systems. The upgraded Level-1 processors, based on powerful FPGAs, will be able to carry out sophisticated feature searches with resolutions often similar to the offline ones, while keeping pileup effects under control. In this paper, we discuss the feasibility of a system capturing Level-1 intermediate data at the beam-crossing rate of 40 MHz and carrying out online analyses based on these limited-resolution data. This 40 MHz scouting system would provide fast and virtually unlimited statistics for detector diagnostics, alternative luminosity measurements and, in some cases, calibrations. It has the potential to enable the study of otherwise inaccessible signatures, either too common to fit in the Level-1 accept budget, or with requirements which are orthogonal to "mainstream" physics, such as long-lived particles. We discuss the requirements and possible architecture of a 40 MHz scouting system, as well as some of the physics potential, and results from a demonstrator operated at the end of Run-2 using the Global Muon Trigger data from CMS. Plans for further demonstrators envisaged for Run-3 are also discussed.
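As a toy illustration of what "scouting" on limited-resolution Level-1 data could mean, the sketch below aggregates a muon-pT histogram from per-bunch-crossing records without ever storing the raw 40 MHz stream. The record layout, the binning, and the function names are assumptions, not the demonstrator's actual implementation.

```python
from collections import Counter

def scout(stream, pt_bin: float = 2.0):
    """Consume per-bunch-crossing Level-1 muon records (here a Python
    iterable standing in for the optical-link capture) and aggregate a
    pT histogram online; only the histogram is kept, never the stream."""
    hist, n_bx = Counter(), 0
    for bx_muons in stream:          # one entry per bunch crossing
        n_bx += 1
        for pt in bx_muons:          # limited-resolution muon pT values
            hist[int(pt // pt_bin)] += 1
    return hist, n_bx

# Example: three bunch crossings, two of them with muon candidates.
hist, n = scout([[3.5, 22.0], [], [7.0]])
print(n, dict(hist))                 # 3 {1: 1, 11: 1, 3: 1}
```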
The Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment at the LHC is a complex system responsible for the data readout, event building and recording of accepted events. Its proper functioning plays a critical role in the data-taking efficiency of the CMS experiment. In order to ensure high availability and recover promptly in the event of hardware or software failure of the subsystems, an expert system, the DAQ Expert, has been developed. It aims at improving the data-taking efficiency, reducing human error in the operations and minimising the on-call expert demand. Introduced at the beginning of 2017, it assists the shift crew and the system experts in recovering from operational faults, streamlining the post-mortem analysis and, at the end of Run 2, triggering fully automatic recovery without human intervention. DAQ Expert analyses the real-time monitoring data originating from the DAQ components and the high-level trigger, updated every few seconds. It pinpoints data flow problems and recovers from them automatically or after given operator approval. We analyse the CMS downtime in the 2018 run, focusing on what was improved with the introduction of automated recovery, and present the challenges and design of encoding the expert knowledge into automated recovery jobs. Furthermore, we demonstrate the web-based ReactJS interfaces that ensure effective cooperation between the human operators in the control room and the automated recovery system. We report on the operational experience with automated recovery.
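A rule-based core of the kind described could be sketched as follows; the rule conditions, snapshot field names, thresholds, and recovery labels are invented for illustration and are not the DAQ Expert's actual knowledge base.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    matches: Callable[[dict], bool]   # condition evaluated on a monitoring snapshot
    recovery: str                     # recovery job proposed when the rule fires

# Invented rules for illustration; the real knowledge base is far richer.
RULES = [
    Rule("dataflow stuck",
         lambda s: s["event_rate_hz"] == 0 and s["backpressure"],
         "resync the front-ends"),
    Rule("HLT overload",
         lambda s: s["hlt_cpu_usage"] > 0.95,
         "reduce the trigger rate"),
]

def diagnose(snapshot: dict, auto: bool = False):
    # First matching rule wins; run it automatically or propose it for approval.
    for rule in RULES:
        if rule.matches(snapshot):
            mode = "running" if auto else "proposing"
            print(f"{rule.name}: {mode} recovery '{rule.recovery}'")
            return rule
    return None

diagnose({"event_rate_hz": 0, "backpressure": True, "hlt_cpu_usage": 0.40})
```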
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events of 2 MB at a rate of 100 kHz. The event builder collects event fragments from about 750 sources and assembles them into complete events, which are then handed to the High-Level Trigger (HLT) processes running on O(1000) computers. The aging event-building hardware will be replaced during the Long Shutdown 2 of the LHC taking place in 2019/20. The future data networks will be based on 100 Gb/s interconnects using Ethernet and InfiniBand technologies. More powerful computers may allow combining the currently separate functionality of the readout and builder units into a single I/O processor handling 100 Gb/s of input and output traffic simultaneously. It might be beneficial to preprocess data originating from specific detector parts or regions before handing it to generic HLT processors. Therefore, we will investigate how specialized coprocessors, e.g. GPUs, could be integrated into the event builder. We will present the envisioned changes to the event builder compared to today's system. Initial measurements of the performance of the data networks under the event-building traffic pattern will be shown. Implications of a folded network architecture for the event building and corresponding changes to the software implementation will be discussed.
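The fragment-assembly step itself can be pictured with a small sketch: buffer fragments per event number until every source has delivered, then hand the complete event on. The class and field names are illustrative; the real event builder works at 100 Gb/s link speeds, not in Python.

```python
from collections import defaultdict

class EventBuilder:
    """Toy event builder: buffer fragments per event number and hand a
    complete event to a callback once every source has delivered."""
    def __init__(self, n_sources, on_complete):
        self.n_sources = n_sources
        self.on_complete = on_complete          # callback, e.g. hand-off to the HLT
        self.pending = defaultdict(dict)        # event number -> {source id: fragment}

    def add_fragment(self, event_no, source_id, payload):
        fragments = self.pending[event_no]
        fragments[source_id] = payload
        if len(fragments) == self.n_sources:    # all sources delivered: event complete
            self.on_complete(event_no, self.pending.pop(event_no))

# Usage: three sources standing in for the ~750 real ones.
eb = EventBuilder(n_sources=3,
                  on_complete=lambda n, frags: print(f"built event {n} from {len(frags)} fragments"))
for src in range(3):
    eb.add_fragment(1, src, b"payload")         # prints once the third fragment arrives
```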
It is curious that the unprecedented agitations in support of the rights of Caroline of Brunswick in 1820–21 have been represented as an "affair." The word seems first to have been used by G. M. Trevelyan and was promptly seized on by Élie Halévy in his 1923 Histoire du peuple anglais au XIXe siècle. The labeling of this popular ebullience as an "affair" has consequently framed the development of its now not inconsiderable historiography. The episode was initially explained as a diversion from some main line of historical development, be it whiggish or Marxisant. More recently, historians have rescued the agitations from this condescension by showing how the radicals identified the king and the government's treatment of the queen as oppression and corruption at work. Since the common thread running through both whig and Marxisant accounts had been a concentration on the effects of the agitations on reform and radical politics, those attempting to put the episode back fully into their narratives emphasized the same factors. This time, however, it was to show that the agitations were not a diversion from the main line of reform politics. What follows is a further contribution to the process of giving greater attention to the queen's cause when telling the story of mass politics in this period, but one which concentrates on other neglected contexts and phenomena important for the explanation of this popular explosion. In the light of this, it may be necessary to change the way we refer to this episode.
The part of the CMS Data Acquisition (DAQ) system responsible for data readout and event building is a complex network of interdependent distributed applications. To ensure successful data taking, these programs have to be constantly monitored in order to facilitate timely corrections in case of any deviation from specified behaviour. A large number of diverse monitoring data samples are periodically collected from multiple sources across the network. Monitoring data are kept in memory for online operations and optionally stored on disk for post-mortem analysis. We present a generic, reusable solution based on an open-source NoSQL database, Elasticsearch, which is fully compatible and non-intrusive with respect to the existing system. The motivation is to benefit from off-the-shelf software to facilitate the development, maintenance and support efforts. Elasticsearch provides failover and data redundancy capabilities as well as a programming-language-independent JSON-over-HTTP interface. The possibility of horizontal scaling matches the requirements of a DAQ monitoring system. The data load from all sources is balanced by redistribution over an Elasticsearch cluster that can be hosted on a computer cloud. In order to achieve the necessary robustness and to validate the scalability of the approach, the above monitoring solution currently runs in parallel with an existing in-house developed DAQ monitoring system.
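To illustrate the JSON-over-HTTP interface mentioned above, here is a minimal example of indexing one monitoring document into Elasticsearch via its standard document API (POST /<index>/_doc); the index name and the document fields are made up for the sketch.

```python
import json
import time
import urllib.request

def index_monitoring_sample(es_url: str, index: str, sample: dict) -> dict:
    """Send one monitoring document to Elasticsearch over plain HTTP.
    The endpoint form POST /<index>/_doc is the standard document API."""
    req = urllib.request.Request(
        f"{es_url}/{index}/_doc",
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)   # Elasticsearch returns the created doc's metadata

# Hypothetical monitoring sample from one readout application.
sample = {"timestamp": time.time(), "app": "readout-unit-12", "event_rate_hz": 99500}
# index_monitoring_sample("http://localhost:9200", "daq-monitoring", sample)
```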