The ATLAS ReadOut System (ROS) receives data fragments from ∼1600 detector readout links, buffers them and provides them on demand to the second-level trigger or to the event building system. The ROS is implemented with ∼150 PCs. Each PC houses a few, typically 4, custom-built PCI boards (ROBIN) and a 4-port PCIe Gigabit Ethernet NIC. The PCs run a multi-threaded object-oriented application that manages the requests for data retrieval and data deletion arriving through the NIC, as well as the collection and output of data from the ROBINs. At a nominal event fragment arrival rate of 75 kHz, the ROS has to concurrently service up to approximately 20 kHz of data requests from the second-level trigger and up to 3.5 kHz of requests from event building nodes. The full system was commissioned in 2007. This paper discusses the performance of the system in terms of stability and reliability, results of laboratory rate capability measurements, and upgrade scenarios.
In the ATLAS experiment at the LHC, the ROD Crate DAQ provides a complete software framework to implement data acquisition functionality at the boundary between the detector-specific electronics and the common part of the data acquisition system. Based on a plugin mechanism, it allows selecting and using common services (such as data output and data monitoring channels) and developing software to control and acquire data from detector-specific modules, providing the infrastructure for control, monitoring and calibration. As it also includes event building functionality, the ROD Crate DAQ is intended to be the main data acquisition tool for the first phase of detector commissioning. This paper presents the design, functionality and performance of the ROD Crate DAQ and its usage in the ATLAS data acquisition system and during detector tests.
A data acquisition (DAQ) system has been developed which will read out and control calorimeters serving as prototype systems for a future detector at an electron-positron linear collider. This is a modular, flexible and scalable DAQ system in which the hardware and signals are standards-based, using FPGAs and serial links. The idea of a backplaneless system was also pursued: a commercial development board housed in a PC and a chain of concentrator cards between it and the detector form the basis of the system. As well as describing the concept and performance of the system, its merits and disadvantages are discussed.
The ATLAS Event Builder — Vandelli, W.; Abolins, M.; Battaglia, A.; et al.
IEEE Transactions on Nuclear Science, 12/2008, Vol. 55, No. 6.
Journal Article · Peer-reviewed · Open access
Event data from proton-proton collisions at the LHC will be selected by the ATLAS experiment in a three-level trigger system, which, at its first two trigger levels (LVL1+LVL2), reduces the initial bunch crossing rate of 40 MHz to ~3 kHz. At this rate, the Event Builder collects the data from the readout system PCs (ROSs) and provides fully assembled events to the Event Filter (EF). The EF is the third trigger level and its aim is to achieve a further rate reduction to ~200 Hz on the permanent storage. The Event Builder is based on a farm of O(100) PCs, interconnected via Gigabit Ethernet to O(150) ROSs. These PCs run Linux and multi-threaded software applications implemented in C++. All the ROSs, and substantial fractions of the Event Builder and EF PCs, have been installed and commissioned. We report on performance tests of this initial system, which is capable of going beyond the required data rates and bandwidths for event building for the ATLAS experiment.
The full LEP-1 data set collected with the ALEPH detector at the Z pole during 1991–1995 is analysed in order to measure the τ decay branching fractions. The analysis follows the global method used in the published study based on 1991–1993 data, but several improvements are introduced, especially concerning the treatment of photons and π⁰'s. Extensive systematic studies are performed, in order to match the large statistics of the data sample corresponding to over 300 000 measured and identified τ decays. Branching fractions are obtained for the two leptonic channels and 11 hadronic channels defined by their respective numbers of charged particles and π⁰'s. Using previously published ALEPH results on final states with charged and neutral kaons, corrections are applied to the hadronic channels to derive branching ratios for exclusive final states without kaons. Thus the analyses of the full LEP-1 ALEPH data are combined to yield a complete description of τ decays, encompassing 22 non-strange and 11 strange hadronic modes. Some physics implications of the results are given, in particular related to universality in the leptonic charged weak current, isospin invariance in a₁ decays, and the separation of vector and axial-vector components of the total hadronic rate. Finally, spectral functions are determined for the dominant hadronic modes and updates are given for several analyses. These include: tests of isospin invariance between the weak charged and electromagnetic hadronic currents, fits of the ρ resonance lineshape, and a QCD analysis of the non-strange hadronic decays using spectral moments, yielding the value α_s(m_τ²) = 0.340 ± 0.005(exp) ± 0.014(th). The evolution to the Z mass scale yields α_s(M_Z²) = 0.1209 ± 0.0018. This value agrees well with the direct determination from the Z width and provides the most accurate test to date of asymptotic freedom in the QCD gauge theory.
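The evolution of α_s from the τ mass to the Z mass scale uses the QCD renormalization group. As a schematic illustration only (the actual analysis uses higher-order evolution with quark-flavour thresholds, not this simplified form), the one-loop running reads:

```latex
% One-loop running of the strong coupling (schematic; the published analysis
% uses higher orders and flavour thresholds):
\frac{d\alpha_s}{d\ln\mu^2} = -\beta_0\,\alpha_s^2,
\qquad \beta_0 = \frac{33 - 2 n_f}{12\pi},
% with the closed-form solution between the two scales:
\alpha_s(M_Z^2) =
  \frac{\alpha_s(m_\tau^2)}
       {1 + \beta_0\,\alpha_s(m_\tau^2)\,\ln\!\bigl(M_Z^2/m_\tau^2\bigr)}.
```

Because α_s decreases logarithmically with the scale, the precise low-scale measurement at m_τ translates into a very precise prediction at M_Z, which is what makes this comparison a stringent test of asymptotic freedom.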
In the ATLAS experiment at the LHC, the output of read-out hardware specific to each subdetector will be transmitted to buffers located on custom-made PCI cards ("ROBINs"). The data consist of fragments of events accepted by the first-level trigger at a maximum rate of 100 kHz. Groups of four ROBINs will be hosted in about 150 Read-Out Subsystem (ROS) PCs. Event data are forwarded on request via Gigabit Ethernet links and switches to the second-level trigger or to the Event Builder. In this paper, a discussion of the functionality and real-time properties of the ROS is combined with a presentation of measurement and modelling results for a testbed with a size of about 20% of the final DAQ system. Experimental results on strategies for optimizing the system performance, such as the utilization of different network architectures and network transfer protocols, are presented for the testbed, together with extrapolations to the full system.