The data acquisition system of the CMS experiment at the CERN LHC collider is designed to build 1 MB events at a sustained rate of 100 kHz and to provide sufficient computing power to filter the events by a factor of 1000. The Data to Surface (D2S) system is the first layer of the data acquisition, interfacing the underground subdetector readout electronics to the surface event builder. It collects the 100 GB/s input data from a large number of front-end cards (650), implements a first stage of event building by combining multiple sources into larger-size data fragments, and transports them to the surface for the full event building. The D2S system can operate at a maximum rate of 2 Tbps. This paper describes the layout, reconfigurability, and production validation of the D2S system, which is to be installed by December 2005.
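As a quick consistency check on the figures quoted above (arithmetic added here for illustration, not taken from the paper), the sustained input bandwidth and the headroom left by the 2 Tbps maximum work out as:

```latex
% 1 MB events at 100 kHz give the quoted 100 GB/s input rate;
% in bits this is 0.8 Tb/s, so the 2 Tb/s capacity leaves a 2.5x margin.
\[
  1\,\mathrm{MB} \times 100\,\mathrm{kHz} = 100\,\mathrm{GB/s} = 0.8\,\mathrm{Tb/s},
  \qquad
  \frac{2\,\mathrm{Tb/s}}{0.8\,\mathrm{Tb/s}} = 2.5\,.
\]
```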
The data acquisition system of the CMS experiment at the Large Hadron Collider will employ an event builder which will combine data from about 500 data sources into full events at an aggregate throughput of 100 GByte/s. Several architectures and switch technologies have been evaluated for the DAQ Technical Design Report by measurements with test benches and by simulation. This paper describes studies of an EVB test-bench based on 64 PCs acting as data sources and data consumers and employing both Gigabit Ethernet and Myrinet technologies as the interconnect. In the case of Ethernet, protocols based on Layer-2 frames and on TCP/IP are evaluated. Results from ongoing studies, including measurements of throughput and scaling, are presented.
The architecture of the baseline CMS event builder is outlined. The event builder is organised into two stages with intelligent buffers in between. The first stage contains 64 switches performing a first level of data concentration by building super-fragments from the fragments of 8 data sources. The second stage combines the 64 super-fragments into full events. This architecture allows installation of the second stage of the event builder in steps, with the overall throughput scaling linearly with the number of switches in the second stage. Possible implementations of the components of the event builder are discussed, and the expected performance of the full event builder is outlined.
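To make the two-stage scheme concrete, here is a minimal sketch that follows only the numbers quoted above (8 fragments per super-fragment, 64 super-fragments per full event); the function names and the 2 kB fragment size are illustrative assumptions, not the CMS software:

```python
# Illustrative two-stage event building: stage 1 combines the fragments of
# 8 data sources into a super-fragment; stage 2 combines 64 super-fragments
# into a full event. Sizes and names are assumptions for illustration only.

N_SOURCES = 512                      # ~500 data sources, 8 per stage-1 switch
GROUP = 8                            # fragments per super-fragment
N_SUPERFRAGS = N_SOURCES // GROUP    # 64 super-fragments per event

def build_superfragment(fragments):
    """Stage 1: concatenate the fragments from 8 data sources."""
    assert len(fragments) == GROUP
    return b"".join(fragments)

def build_event(superfragments):
    """Stage 2: combine the 64 super-fragments into a full event."""
    assert len(superfragments) == N_SUPERFRAGS
    return b"".join(superfragments)

# One toy event built from 2 kB fragments:
fragments = [bytes(2048) for _ in range(N_SOURCES)]
superfrags = [build_superfragment(fragments[i:i + GROUP])
              for i in range(0, N_SOURCES, GROUP)]
event = build_event(superfrags)
print(len(event))  # 512 * 2048 bytes = 1 MiB, the nominal CMS event size
```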
XDAQ is a generic data acquisition software environment that emerged from a rich set of use-cases encountered in the CMS experiment. They cover not only the deployment for multiple sub-detectors and the operation of different processing and networking equipment, but also a distributed collaboration of users with different needs. The use of the software in various application scenarios demonstrated the viability of the approach. We discuss two applications, the tracker local DAQ system for front-end commissioning and the muon chamber validation system. The description is completed by a brief overview of XDAQ.
The Run Control and Monitor System (RCMS) of the CMS experiment is the set of
hardware and software components responsible for controlling and monitoring the
experiment during data-taking. It provides users with a "virtual counting
room", enabling them to operate the experiment and to monitor detector status
and data quality from any point in the world. This paper describes the
architecture of the RCMS with particular emphasis on its scalability through a
distributed collection of nodes arranged in a tree-based hierarchy. The current
implementation of the architecture in a prototype RCMS used in test beam
setups, detector validations and DAQ demonstrators is documented. A discussion
of the key technologies used, including Web Services, and the results of tests
performed with a 128-node system are presented.
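As an illustration of the tree-based hierarchy, here is a minimal sketch in which each node applies a command locally, forwards it to its children, and aggregates the states reported back; the class, command, and state names are hypothetical, not the actual RCMS interfaces:

```python
# Hypothetical sketch of a tree-structured run-control hierarchy
# (not the RCMS API): commands flow down, aggregated states flow up.

class ControlNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "Halted"

    def command(self, cmd):
        """Apply a command locally, then propagate it down the tree."""
        transitions = {"Configure": "Configured", "Start": "Running",
                       "Stop": "Configured", "Halt": "Halted"}
        self.state = transitions[cmd]
        for child in self.children:
            child.command(cmd)

    def report(self):
        """Aggregate the states of this node and all of its descendants."""
        states = {self.state}
        for child in self.children:
            states |= child.report()
        return states

# A two-level hierarchy: one top node controlling eight subsystem nodes.
top = ControlNode("Top", [ControlNode(f"Sub{i}") for i in range(8)])
top.command("Configure")
print(top.report())  # {'Configured'} once every node has complied
```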
The BaBar Collaboration has operated a system of resistive plate chambers covering over 2000 m² for nearly three years. The chambers are constructed of bakelite sheets separated by 2 mm. The inner surfaces are coated with linseed oil. This system provides muon and neutral hadron detection for BaBar. Installation and commissioning were completed in 1998, and operation began mid-year 1999. While the initial performance of the system met the design goals, over time a significant fraction of the RPCs showed substantial degradation, marked by increased currents and reduced efficiency. A coordinated effort of investigations has identified many of the elements responsible for the degradation.
New sets of CMS underlying-event parameters (“tunes”) are presented for the pythia 8 event generator. These tunes use the NNPDF3.1 parton distribution functions (PDFs) at leading (LO), next-to-leading (NLO), or next-to-next-to-leading (NNLO) orders in perturbative quantum chromodynamics, and the strong coupling evolution at LO or NLO. Measurements of charged-particle multiplicity and transverse momentum densities at various hadron collision energies are fit simultaneously to determine the parameters of the tunes. Comparisons of the predictions of the new tunes are provided for observables sensitive to the event shapes at LEP, global underlying event, soft multiparton interactions, and double-parton scattering contributions. In addition, comparisons are made for observables measured in various specific processes, such as multijet, Drell–Yan, and top quark-antiquark pair production, including jet substructure observables. The simulation of the underlying event provided by the new tunes is interfaced to a higher-order matrix-element calculation. For the first time, predictions from pythia 8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.
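For illustration only, the sketch below shows how underlying-event parameters of the kind tuned in this work are set in pythia 8 through its Python bindings. The setting names (PDF:pSet, MultipartonInteractions:pT0Ref, MultipartonInteractions:ecmPow, ColourReconnection:range) are genuine pythia 8 parameters, but the numerical values are placeholders, not the CMS tunes:

```python
import pythia8  # assumes the PYTHIA 8 Python bindings are installed

pythia = pythia8.Pythia()
pythia.readString("Beams:eCM = 13000.")      # pp collisions at 13 TeV
pythia.readString("SoftQCD:inelastic = on")  # minimum-bias event sample
# Underlying-event parameters of the kind adjusted in a tune
# (placeholder values, not the CMS tunes):
pythia.readString("PDF:pSet = LHAPDF6:NNPDF31_nnlo_as_0118")
pythia.readString("MultipartonInteractions:pT0Ref = 2.0")
pythia.readString("MultipartonInteractions:ecmPow = 0.2")
pythia.readString("ColourReconnection:range = 2.0")
pythia.init()
for _ in range(100):
    pythia.next()  # generate events for comparison with UE observables
```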
The observation of the standard model (SM) Higgs boson decay to a pair of bottom quarks is presented. The main contribution to this result is from processes in which Higgs bosons are produced in association with a W or Z boson (VH), and are searched for in final states including 0, 1, or 2 charged leptons and two identified bottom quark jets. The results from the measurement of these processes in a data sample recorded by the CMS experiment in 2017, comprising 41.3 fb⁻¹ of proton-proton collisions at √s = 13 TeV, are described. When combined with previous VH measurements using data collected at √s = 7, 8, and 13 TeV, an excess of events is observed at m_H = 125 GeV with a significance of 4.8 standard deviations, where the expectation for the SM Higgs boson is 4.9. The corresponding measured signal strength is 1.01 ± 0.22. The combination of this result with searches by the CMS experiment for H → bb̄ in other production processes yields an observed (expected) significance of 5.6 (5.5) standard deviations and a signal strength of 1.04 ± 0.20.
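As a reminder of the statistical convention behind the quoted significances (added here for illustration; the quoted values themselves come from the full statistical analysis), a significance of N standard deviations corresponds to a one-sided Gaussian tail probability:

```python
# One-sided Gaussian p-values for the significances quoted above.
from scipy.stats import norm

for z in (4.8, 5.6):
    print(f"{z} sigma -> p = {norm.sf(z):.1e}")
# 4.8 sigma -> p = 7.9e-07
# 5.6 sigma -> p = 1.1e-08
```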