The message logging system provides the infrastructure for all of the distributed processes in the data acquisition (DAQ) to report status messages of various severities in a consistent manner to a central location, as well as providing the tools for displaying and archiving the messages. The message logging system has been developed over a decade, and has run successfully on the CDF and CMS experiments. The most recent work on the message logging system has been to build it as a stand-alone package, named MessageFacility, that works with any generic framework or application, with NOνA as the first driving user. System designs and architectures, as well as the effort of making it a generic library, will be discussed. We also present new features that have been added.
We report the first results of DarkSide-50, a direct search for dark matter operating in the underground Laboratori Nazionali del Gran Sasso (LNGS) and searching for the rare nuclear recoils possibly induced by weakly interacting massive particles (WIMPs). The dark matter detector is a liquid argon time projection chamber with a (46.4 ± 0.7) kg active mass, operated inside a 30 t organic liquid scintillator neutron veto, which is in turn installed at the center of a 1 kt water Cherenkov veto for the residual flux of cosmic rays. We report here the null results of a dark matter search for a (1422 ± 67) kg d exposure with an atmospheric argon fill. This is the most sensitive dark matter search performed with an argon target, corresponding to a 90% CL upper limit on the WIMP-nucleon spin-independent cross section of 6.1 × 10⁻⁴⁴ cm² for a WIMP mass of 100 GeV/c².
The CDF plug upgrade electromagnetic calorimeter: test beam results
Albrow, M.; Aota, S.; Apollinari, G. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 03/2002, Volume 480, Issue 2
Journal Article · Peer-reviewed · Open access
The CDF Plug Upgrade calorimeter, which fully exploits the tile–fiber technique, was tested at the Fermilab meson beamline. The calorimeter was exposed to positron, positively charged pion and positive muon beams with energies in the range of 5–230 GeV. The energy resolution of the electromagnetic calorimeter to the positron beam is consistent with the design value of 16%/√E ⊕ 1%, where E is the energy in units of GeV and ⊕ represents a sum in quadrature. The non-linearity for positrons is studied in an energy range of 11–181 GeV. It is important to incorporate the response of the preshower detector, the first layer of the electromagnetic calorimeter which is read out separately, into that of the calorimeter to reduce the non-linearity to 1% or less. The energy scale is about 1.46 pC/GeV with HAMAMATSU R4125 phototubes operated typically at a gain of 2.5 × 10⁴. The response non-uniformity over the surface of a tower of the electromagnetic calorimeter is found to be about 2% with 57 GeV positrons. Studies of several detailed detector characteristics are also presented.
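The quoted design resolution can be evaluated directly; the sketch below computes the quadrature sum 16%/√E ⊕ 1% at a few beam energies from the abstract (illustration only, not code from the paper):

```python
import math

def em_resolution(energy_gev, stochastic=0.16, constant=0.01):
    """Fractional EM resolution sigma/E = stochastic/sqrt(E) (+) constant,
    where (+) is a sum in quadrature, as quoted in the abstract."""
    return math.hypot(stochastic / math.sqrt(energy_gev), constant)

# Beam energies sampled from the 5-230 GeV test-beam range
for e in (5, 57, 181):
    print(f"sigma/E at {e:3d} GeV: {100 * em_resolution(e):.1f}%")
```

At high energy the constant term dominates, which is why controlling the non-linearity and response non-uniformity at the percent level matters.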
The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first-level hardware trigger. Assembled events are made available to the high-level trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
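The quoted figures imply an average event size of about 1 MB; a quick back-of-envelope check using only the numbers given in the abstract:

```python
# Sanity check of the CMS DAQ design figures quoted above (illustration only).
RATE_HZ = 100_000        # first-level trigger accept rate, 100 kHz
THROUGHPUT_BPS = 100e9   # aggregated event-builder throughput, 100 GB/s
N_SOURCES = 500          # approximate number of read-out sources

event_size = THROUGHPUT_BPS / RATE_HZ   # average assembled event size, bytes
fragment_size = event_size / N_SOURCES  # average fragment per source, bytes

print(f"average event size:    {event_size / 1e6:.1f} MB")   # 1.0 MB
print(f"average fragment size: {fragment_size / 1e3:.1f} kB")  # 2.0 kB
```

So each of the roughly 500 sources contributes on the order of 2 kB per event, which the event builder must merge at 100 kHz.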
The CMS data acquisition system software
Bauer, G; Behrens, U; Biery, K ...
Journal of Physics: Conference Series, 04/2010, Volume 219, Issue 2
Journal Article · Peer-reviewed · Open access
The CMS data acquisition system is made of two major subsystems: event building and event filter. This paper describes the architecture and design of the software that processes the data flow in the currently operating experiment. The central DAQ system relies on industry-standard networks and processing equipment. Adopting a single software infrastructure in all subsystems of the experiment imposes, however, a number of different requirements; high efficiency and configuration flexibility are among the most important ones. The XDAQ software infrastructure has matured over an eight-year development and testing period and has been shown to cope well with the requirements of the CMS experiment.
Several current and proposed experiments at the Fermi National Accelerator Laboratory, Batavia, IL, USA, have novel data acquisition needs. These include 1) continuous digitization, using commercial high-speed digitizers, of signals from the detectors, 2) the transfer of all of the digitized waveform data to commercial off-the-shelf (COTS) processors, 3) the filtering or compression of the waveform data, or both, and 4) the writing of the resultant data to disk for later, more complete, analysis. To address these needs, members of the Accelerator and Detector Simulation and Support Department within the Scientific Computing Division at Fermilab are using parallel processing technologies in the development of artdaq, a generic data acquisition toolkit. The artdaq toolkit uses the Message Passing Interface (MPI) and art, an established event-processing framework shared by new experiments at Fermilab. In an artdaq program, the digitized data are transferred into computing nodes using commodity Peripheral Component Interconnect Express (PCIe) cards, and event fragments are transferred between distributed processes using MPI and assembled into complete events. These events are then processed by a configurable selection of user-specified algorithms, commonly including filtering and compression algorithms, using the art event-processing framework. This paper describes the architecture and implementation of the first phase of the artdaq toolkit and shows early performance results with configurations that match upcoming experiments both at Fermilab and elsewhere.
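The fragment-assembly step described above can be sketched in a few lines; the toy below stands in for the MPI transport and uses entirely hypothetical names (it illustrates the event-building pattern, not the artdaq API):

```python
from collections import defaultdict

class EventBuilder:
    """Toy event builder: collects one fragment per source for each event ID
    and releases the event once all fragments have arrived. Illustrative only;
    artdaq moves fragments between processes with MPI and hands complete
    events to the art framework."""

    def __init__(self, n_sources):
        self.n_sources = n_sources
        self.pending = defaultdict(dict)  # event_id -> {source_id: payload}

    def add_fragment(self, event_id, source_id, payload):
        """Store a fragment; return the complete event when the last one lands."""
        frags = self.pending[event_id]
        frags[source_id] = payload
        if len(frags) == self.n_sources:
            # All sources reported: pop the assembled event for downstream
            # processing (filtering, compression, storage).
            return self.pending.pop(event_id)
        return None

builder = EventBuilder(n_sources=3)
assert builder.add_fragment(7, 0, b"aa") is None  # waiting on sources 1, 2
assert builder.add_fragment(7, 1, b"bb") is None
event = builder.add_fragment(7, 2, b"cc")         # last fragment completes event 7
print(sorted(event))
```

Because fragments for different events interleave on the network, the builder keys its bookkeeping by event ID, releasing each event only when every source has contributed.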