ATLAS is a physics experiment that explores high-energy particle collisions at the Large Hadron Collider (LHC) at CERN. It uses tens of millions of electronics channels to capture the outcome of the particle bunches crossing each other every 25 ns. Since reading out and storing the complete information is not feasible (∼100 TB/s), ATLAS makes use of a complex and highly distributed Trigger and Data Acquisition (TDAQ) system, in charge of selecting only interesting data and transporting them to permanent mass storage (∼1 GB/s) for later analysis. The data reduction is carried out in two stages: first, custom electronics performs an initial level of data rejection for each bunch crossing based on partial and localized information. Only data corresponding to collisions passing this stage of selection are actually read out from the on-detector electronics. Then, a large computer farm (∼17k cores) analyses these data in real time and decides which are worth storing for physics analysis. A large network moves the data from ∼2000 front-end buffers to the locations where they are processed and from there to mass storage. The overall TDAQ system is embedded in a common software framework that allows controlling, configuring and monitoring the data-taking process. The experience gained during the first period of data taking of the ATLAS experiment (Run I, 2010–2012) has inspired a number of ideas for improvement of the TDAQ system that are being put in place during the so-called Long Shutdown 1 of the LHC, in 2013–14. This paper summarizes the main changes that have been applied to the ATLAS TDAQ system and highlights the expected performance and functional improvements that will be available for LHC Run II. Particular emphasis is put on the evolution of the software-based data selection and of the flow of data in the system.
The reasons for the modified architectural and technical choices will be explained, and details will be provided on the simulation and testing approach used to validate this system.
The top quark mass is measured using a template method in the tt̄ → lepton+jets channel (lepton is e or μ) using ATLAS data recorded in 2012 at the LHC. The data were taken at a proton–proton centre-of-mass energy of √s = 8 TeV and correspond to an integrated luminosity of 20.2 fb⁻¹. The tt̄ → lepton+jets channel is characterized by the presence of a charged lepton, a neutrino and four jets, two of which originate from bottom quarks (b). Exploiting a three-dimensional template technique, the top quark mass is determined together with a global jet energy scale factor and a relative b-to-light-jet energy scale factor. The mass of the top quark is measured to be m_top = 172.08 ± 0.39 (stat) ± 0.82 (syst) GeV. A combination with previous ATLAS m_top measurements gives m_top = 172.69 ± 0.25 (stat) ± 0.41 (syst) GeV.
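The idea of combining several m_top measurements can be illustrated with an inverse-variance weighted average. The actual ATLAS combination uses the BLUE method and accounts for correlations between systematic uncertainties, which this sketch ignores; the inputs below are placeholders for illustration, not the real combination inputs.

```python
import math

# Naive inverse-variance weighted average of measurements, each with
# statistical and systematic components added in quadrature. Ignores
# correlations between systematics (the real combination does not).

def combine(measurements):
    """measurements: list of (value, stat, syst) tuples -> (value, error)."""
    weights = [1.0 / (stat**2 + syst**2) for _, stat, syst in measurements]
    total_w = sum(weights)
    value = sum(w * v for w, (v, _, _) in zip(weights, measurements)) / total_w
    return value, math.sqrt(1.0 / total_w)

# Placeholder inputs (value, stat, syst) in GeV, for illustration only:
m, err = combine([(172.08, 0.39, 0.82), (172.99, 0.41, 0.74)])
print(f"combined: {m:.2f} +- {err:.2f} GeV")
```

Correlated systematics generally pull the combined value and inflate the uncertainty relative to this naive formula, which is why the quoted combined uncertainty cannot be reproduced this way.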
During the LHC Long Shutdown 1 (LS1) period, which started in 2013, the Simulation at Point1 (Sim@P1) project takes advantage, in an opportunistic way, of the TDAQ (Trigger and Data Acquisition) HLT (High-Level Trigger) farm of the ATLAS experiment. This farm provides more than 1300 compute nodes, which are particularly suited for running event generation and Monte Carlo production jobs that are mostly CPU and not I/O bound. It is capable of running up to 2700 Virtual Machines (VMs), each with 8 CPU cores, for a total of up to 22000 parallel jobs. This contribution gives a review of the design, the results, and the evolution of the Sim@P1 project, operating a large-scale OpenStack-based virtualized platform deployed on top of the ATLAS TDAQ HLT farm computing resources. During LS1, Sim@P1 was one of the most productive ATLAS sites: it delivered more than 33 million CPU-hours and generated more than 1.1 billion Monte Carlo events. The design aspects are presented: the virtualization platform exploited by Sim@P1 avoids interference with TDAQ operations and guarantees the security and the usability of the ATLAS private network. The cloud mechanism allows the separation of the needed support on both the infrastructural (hardware, virtualization layer) and logical (Grid site support) levels. This paper focuses on the operational aspects of such a large system during the upcoming LHC Run 2 period: simple, reliable, and efficient tools are needed to quickly switch from Sim@P1 to TDAQ mode and back, in order to exploit the resources when they are not used for data acquisition, even for short periods. The evolution of the central OpenStack infrastructure is described, as it was upgraded from the Folsom to the Icehouse release, including the scalability issues addressed.
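The capacity and yield figures quoted for Sim@P1 can be cross-checked with a few lines of arithmetic, using only the numbers stated in the abstract.

```python
# Cross-check of the Sim@P1 figures quoted in the abstract.
VMS = 2700            # up to 2700 VMs
CORES_PER_VM = 8      # 8 CPU cores each
CPU_HOURS = 33e6      # >33 million CPU-hours delivered during LS1
EVENTS = 1.1e9        # >1.1 billion Monte Carlo events generated

parallel_jobs = VMS * CORES_PER_VM            # one job per core
events_per_cpu_hour = EVENTS / CPU_HOURS      # average production yield

print(parallel_jobs)               # 21600, i.e. the "up to 22000" jobs
print(round(events_per_cpu_hour))  # ~33 events per CPU-hour on average
```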
Searches for non-resonant and resonant Higgs boson pair production are performed in the γγWW* channel with the final state γγℓνjj using 36.1 fb⁻¹ of proton–proton collision data recorded at a centre-of-mass energy of √s = 13 TeV by the ATLAS detector at the Large Hadron Collider. No significant deviation from the Standard Model prediction is observed. A 95% confidence-level observed upper limit of 7.7 pb is set on the cross section for non-resonant production, while the expected limit is 5.4 pb. A search for a narrow-width resonance X decaying to a pair of Standard Model Higgs bosons HH is performed with the same set of data, and the observed upper limits on σ(pp → X) × B(X → HH) range between 40.0 and 6.1 pb for masses of the resonance between 260 and 500 GeV, while the expected limits range between 17.6 and 4.4 pb. When deriving the limits above, the Standard Model branching ratios of H → γγ and H → WW* are assumed.
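The role of the assumed Standard Model branching ratios can be sketched by converting a limit on HH production into a rate in the γγWW* final state. The branching fractions below are approximate SM values for m_H = 125 GeV, quoted from memory and used purely for illustration.

```python
# Converting a limit on sigma(pp -> X) x B(X -> HH) into the rate in the
# gamma-gamma-WW* final state, under SM Higgs branching assumptions.
# Approximate values for m_H = 125 GeV (assumptions, for illustration):
B_GAMGAM = 2.27e-3   # B(H -> gamma gamma)
B_WW = 0.214         # B(H -> WW*)

# Factor 2: either of the two Higgs bosons can decay to the photon pair.
b_hh_to_ggww = 2 * B_GAMGAM * B_WW

def rate_in_final_state(sigma_hh_pb):
    """sigma x B into gamma gamma WW* for a given HH cross section in pb."""
    return sigma_hh_pb * b_hh_to_ggww

print(f"{rate_in_final_state(40.0):.4f} pb")  # for the 40 pb limit at 260 GeV
```

The tiny combined branching fraction (order 10⁻³) explains why the cross-section limits in this channel sit at the picobarn level.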
Measurements of fiducial integrated and differential cross sections for inclusive W⁺, W⁻ and Z boson production are reported. They are based on 25.0 ± 0.5 pb⁻¹ of pp collision data at √s = 5.02 TeV collected with the ATLAS detector at the CERN Large Hadron Collider. Electron and muon decay channels are analysed, and the combined W⁺, W⁻ and Z integrated cross sections are found to be σ_W⁺ = 2266 ± 9 (stat) ± 29 (syst) ± 43 (lumi) pb, σ_W⁻ = 1401 ± 7 (stat) ± 18 (syst) ± 27 (lumi) pb, and σ_Z = 374.5 ± 3.4 (stat) ± 3.6 (syst) ± 7.0 (lumi) pb, in good agreement with next-to-next-to-leading-order QCD cross-section calculations. These measurements serve as references for Pb+Pb interactions at the LHC at √s_NN = 5.02 TeV.
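When a single total uncertainty is wanted from the three quoted components (stat, syst, lumi), the usual convention is to add them in quadrature, assuming the components are independent. A minimal sketch, applied to the σ_W⁺ numbers from the abstract:

```python
import math

# Combine independent uncertainty components in quadrature.
def total_uncertainty(stat, syst, lumi):
    return math.sqrt(stat**2 + syst**2 + lumi**2)

# sigma_W+ = 2266 +- 9 (stat) +- 29 (syst) +- 43 (lumi) pb:
print(f"+- {total_uncertainty(9, 29, 43):.1f} pb total")
```

The luminosity term dominates here, which is why the 25.0 ± 0.5 pb⁻¹ integrated-luminosity precision is quoted separately.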
With the LHC collider at CERN currently going through the period of Long Shutdown 1, there is an opportunity to use the computing resources of the experiments' large trigger farms for other data-processing activities. In the case of the ATLAS experiment, the TDAQ farm, consisting of more than 1500 compute nodes, is suitable for running Monte Carlo (MC) production jobs that are mostly CPU and not I/O bound. This contribution gives a thorough review of the design and deployment of a virtualized platform running on this computing resource and of its use to run large groups of CernVM-based virtual machines operating as a single CERN-P1 WLCG site. This platform has been designed to guarantee the security and the usability of the ATLAS private network, and to minimize interference with TDAQ's usage of the farm. OpenStack has been chosen to provide a cloud management layer. The experience gained in the last 3.5 months shows that the use of the TDAQ farm for MC simulation contributes to the ATLAS data processing at the level of a large Tier-1 WLCG site, despite the opportunistic nature of the underlying computing resources being used.
The ATLAS experiment will upgrade its Trigger and Data Acquisition (TDAQ) system for the High Luminosity LHC (HL-LHC). The HL-LHC is expected to start operations in the middle of 2026, to ultimately reach a peak instantaneous luminosity of 7.5 × 10³⁴ cm⁻² s⁻¹, corresponding to approximately 200 inelastic proton–proton collisions per bunch crossing, and to deliver more than ten times the integrated luminosity of the LHC Runs 1–3 combined (up to 4000 fb⁻¹). Meeting these requirements poses significant challenges to the TDAQ system to fully exploit the physics potential of the HL-LHC.
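The relation between the quoted peak luminosity and the ~200 pile-up collisions per bunch crossing can be checked to order of magnitude with μ = L·σ_inel / f_crossing. The inelastic cross section and the number of colliding bunches below are assumptions for illustration, not HL-LHC design parameters taken from this text.

```python
# Order-of-magnitude check of ~200 pile-up collisions per crossing.
LUMI = 7.5e34            # cm^-2 s^-1, peak luminosity (from the abstract)
SIGMA_INEL = 80e-27      # cm^2 (~80 mb inelastic pp cross section, assumed)
F_REV = 11245.0          # Hz, LHC revolution frequency
N_BUNCHES = 2760         # colliding bunch pairs (assumed filling scheme)

crossing_rate = F_REV * N_BUNCHES          # effective crossing rate, Hz
mu = LUMI * SIGMA_INEL / crossing_rate     # mean collisions per crossing
print(f"mu ~ {mu:.0f}")
```

With these assumptions μ comes out close to the quoted ~200; the exact figure depends on the inelastic cross section and filling scheme used.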