Electron and photon triggers covering transverse energies from 5 GeV to several TeV are essential for the ATLAS experiment to record signals for a wide variety of physics: from Standard Model processes to searches for new phenomena in both proton–proton and heavy-ion collisions. To cope with a fourfold increase of peak LHC luminosity from 2015 to 2018 (Run 2), to 2.1 × 10³⁴ cm⁻² s⁻¹, and a similar increase in the number of interactions per beam-crossing to about 60, trigger algorithms and selections were optimised to control the rates while retaining a high efficiency for physics analyses. For proton–proton collisions, the single-electron trigger efficiency relative to a single-electron offline selection is at least 75% for an offline electron of 31 GeV, and rises to 96% at 60 GeV; the trigger efficiency of a 25 GeV leg of the primary diphoton trigger relative to a tight offline photon selection is more than 96% for an offline photon of 30 GeV. For heavy-ion collisions, the primary electron and photon trigger efficiencies relative to the corresponding standard offline selections are at least 84% and 95%, respectively, at 5 GeV above the corresponding trigger threshold.
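The efficiencies quoted above trace out a trigger "turn-on" curve: low just above threshold, rising to a plateau at higher offline energies. As a minimal illustration (not the ATLAS parameterisation; the threshold, resolution, and plateau values below are hypothetical), such a curve is often modelled with an error function:

```python
import math

def turnon_efficiency(et_offline, threshold=28.0, resolution=9.0, plateau=0.99):
    """Toy erf-shaped trigger efficiency vs offline transverse energy (GeV).

    All parameter values are illustrative assumptions, not fitted numbers."""
    z = (et_offline - threshold) / (math.sqrt(2.0) * resolution)
    return 0.5 * plateau * (1.0 + math.erf(z))

# The curve rises monotonically from near zero below threshold to the plateau.
for et in (20.0, 31.0, 60.0):
    print(f"E_T = {et:5.1f} GeV -> efficiency = {turnon_efficiency(et):.3f}")
```

Fitting such a shape to measured efficiencies is one common way to summarise where a trigger becomes fully efficient.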
The factor of four increase in the LHC luminosity, from 0.5 × 10³⁴ cm⁻² s⁻¹ to 2.0 × 10³⁴ cm⁻² s⁻¹, and the corresponding increase in pile-up collisions during the 2015–2018 data-taking period, presented a challenge for the ATLAS trigger, particularly for those algorithms that select events with missing transverse momentum. The output data rate at fixed threshold typically increases exponentially with the number of pile-up collisions, so the legacy algorithms from previous LHC data-taking periods had to be tuned and new approaches developed to maintain the high trigger efficiency achieved in earlier operations. A study of the trigger performance and comparisons with simulations show that these changes resulted in event selection efficiencies of >98% for this period, meeting and in some cases exceeding the performance of similar triggers in earlier run periods, while at the same time keeping the necessary bandwidth within acceptable limits.
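The pressure described above can be made concrete with a toy model (the rate constant and normalisation below are invented for illustration, not ATLAS numbers): if the output rate at a fixed threshold grows roughly exponentially with the mean pile-up μ, a modest rise in μ multiplies the bandwidth cost dramatically, which is why retuning was unavoidable.

```python
import math

def rate_hz(mu, r0=50.0, k=0.08):
    """Hypothetical fixed-threshold trigger rate (Hz) vs mean pile-up mu,
    assuming exponential growth rate(mu) = r0 * exp(k * mu)."""
    return r0 * math.exp(k * mu)

low_pileup, high_pileup = 15, 60
print(f"rate at mu={low_pileup}: {rate_hz(low_pileup):8.0f} Hz")
print(f"rate at mu={high_pileup}: {rate_hz(high_pileup):8.0f} Hz")
# Ratio exp(k * (60 - 15)) = exp(3.6) ~ 37: far beyond what a fourfold
# luminosity increase alone would suggest, hence the need for new selections.
```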
With the increase in energy of the Large Hadron Collider to a centre-of-mass energy of 13 TeV for Run 2, events with dense environments, such as in the cores of high-energy jets, became a focus for new physics searches as well as measurements of the Standard Model. These environments are characterized by charged-particle separations of the order of the tracking detectors' sensor granularity. Basic track quantities are compared between 3.2 fb⁻¹ of data collected by the ATLAS experiment and simulation of proton–proton collisions producing high-transverse-momentum jets at a centre-of-mass energy of 13 TeV. The impact of charged-particle separations and multiplicities on the track reconstruction performance is discussed. The track reconstruction efficiency in the cores of jets with transverse momenta between 200 and 1600 GeV is quantified using a novel, data-driven method. The method uses the energy loss, dE/dx, to identify pixel clusters originating from two charged particles. Of the charged particles creating these clusters, the measured fraction that fail to be reconstructed is 0.061 ± 0.006 (stat.) ± 0.014 (syst.) and 0.093 ± 0.017 (stat.) ± 0.021 (syst.) for jet transverse momenta of 200–400 GeV and 1400–1600 GeV, respectively.
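The core idea of the dE/dx method can be sketched in a few lines (this is an illustrative toy, not the ATLAS implementation; the dE/dx scale, tolerance, and cluster records are all hypothetical): a cluster whose energy loss is compatible with twice the single-particle expectation is tagged as a two-particle cluster, and the fraction of such clusters with fewer than two reconstructed tracks estimates the inefficiency.

```python
SINGLE_MIP_DEDX = 1.2  # hypothetical single-particle dE/dx (arbitrary units)

def is_two_particle_cluster(dedx, tolerance=0.35):
    """Tag clusters whose dE/dx is compatible with two overlapping particles."""
    return abs(dedx - 2.0 * SINGLE_MIP_DEDX) < tolerance * SINGLE_MIP_DEDX

def lost_track_fraction(clusters):
    """Among tagged two-particle clusters, the fraction with fewer than two
    matched tracks estimates the reconstruction inefficiency."""
    merged = [c for c in clusters if is_two_particle_cluster(c["dedx"])]
    lost = sum(1 for c in merged if c["n_tracks"] < 2)
    return lost / len(merged) if merged else 0.0

sample = [
    {"dedx": 2.4, "n_tracks": 2},  # two particles, both reconstructed
    {"dedx": 2.3, "n_tracks": 1},  # second particle not reconstructed
    {"dedx": 1.2, "n_tracks": 1},  # single-particle cluster, ignored
]
print(lost_track_fraction(sample))  # 0.5 in this toy sample
```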
Several improvements to the ATLAS triggers used to identify jets containing b-hadrons (b-jets) were implemented for data-taking during Run 2 of the Large Hadron Collider from 2016 to 2018. These changes include reconfiguring the b-jet trigger software to improve primary-vertex finding and allow more stable running in conditions with high pile-up, and the implementation of the functionality needed to run sophisticated taggers used by the offline reconstruction in an online environment. These improvements yielded an order of magnitude better light-flavour jet rejection for the same b-jet identification efficiency compared to the performance in Run 1 (2011–2012). The efficiency to identify b-jets in the trigger, and the conditional efficiency for b-jets that satisfy offline b-tagging requirements to pass the trigger, are also measured. Correction factors are derived to calibrate the b-tagging efficiency in simulation to match that observed in data. The associated systematic uncertainties are substantially smaller than in previous measurements. In addition, b-jet triggers were operated for the first time during heavy-ion data-taking, using dedicated triggers that were developed to identify semileptonic b-hadron decays by selecting events with geometrically overlapping muons and jets.
Alignment of the ATLAS Inner Detector in Run 2. Ferraz, V. Araujo; Balasubramanian, R.; Barnett, B. M. ...
The European Physical Journal C, Particles and Fields, 2020, Volume 80, Issue 12.
Journal Article · Peer reviewed · Open access
The performance of the ATLAS Inner Detector alignment has been studied using pp collision data at √s = 13 TeV collected by the ATLAS experiment during Run 2 (2015–2018) of the Large Hadron Collider (LHC). The goal of the detector alignment is to determine the detector geometry as accurately as possible and correct for time-dependent movements. The Inner Detector alignment is based on the minimization of track-hit residuals in a sequence of hierarchical levels, from global mechanical assembly structures to local sensors. Subsequent levels have increasing numbers of degrees of freedom; in total there are almost 750,000. The alignment determines detector geometry on both short and long timescales, where short timescales describe movements within an LHC fill. The performance and possible track parameter biases originating from systematic detector deformations are evaluated. Momentum biases are studied using resonances decaying to muons or to electrons. The residual sagitta bias and momentum scale bias after alignment are reduced to less than ∼0.1 TeV⁻¹ and 0.9 × 10⁻³, respectively. Impact parameter biases are also evaluated using tracks within jets.
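The residual-minimization idea can be sketched in its simplest possible form (a toy with one translational degree of freedom per module; the module names and residual values are invented): for pure translations, the least-squares correction that minimises the summed squared track-hit residuals of a module is just the negative of its mean residual.

```python
from collections import defaultdict
from statistics import mean

def align_translations(residuals_by_module):
    """Return, per module, the translation correction delta = -<residual>,
    which zeroes the mean track-hit residual (1-D least squares)."""
    return {module: -mean(res) for module, res in residuals_by_module.items()}

# Hypothetical residuals (mm) collected from fitted tracks crossing two modules.
hits = defaultdict(list)
for module, residual in [("pixel_L0_m3", 0.011), ("pixel_L0_m3", 0.009),
                         ("sct_L1_m7", -0.004), ("sct_L1_m7", -0.006)]:
    hits[module].append(residual)

print(align_translations(hits))
```

The real alignment solves a far larger coupled problem (hundreds of thousands of degrees of freedom across hierarchical levels), but each iteration is conceptually this step generalised to rotations and correlated parameters.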
The Tile Calorimeter is the hadron calorimeter covering the central region of the ATLAS experiment at the Large Hadron Collider. Approximately 10,000 photomultipliers collect light from scintillating tiles acting as the active material sandwiched between slabs of steel absorber. This paper gives an overview of the calorimeter's performance during the years 2008–2012 using cosmic-ray muon events and proton–proton collision data at centre-of-mass energies of 7 and 8 TeV with a total integrated luminosity of nearly 30 fb⁻¹. The signal reconstruction methods, calibration systems, and detector operation status are presented. The energy and time calibration methods performed excellently, resulting in good stability of the calorimeter response under varying conditions during LHC Run 1. Finally, the Tile Calorimeter response to isolated muons and hadrons as well as to jets from proton–proton collisions is presented. The results demonstrate excellent performance in accordance with the specifications given in the Technical Design Report.
Constraints on selected mediator-based dark matter models and a scalar dark energy model using up to 37 fb⁻¹ of √s = 13 TeV pp collision data collected by the ATLAS detector at the LHC during 2015–2016 are summarised in this paper. The conclusions of experimental searches in a variety of final states are interpreted in terms of a set of spin-1 and spin-0 single-mediator dark matter simplified models and a second set of models involving an extended Higgs sector plus an additional vector or pseudo-scalar mediator. The searches considered in this paper constrain spin-1 leptophobic and leptophilic mediators, spin-0 colour-neutral and colour-charged mediators, and vector or pseudo-scalar mediators embedded in extended Higgs sector models. In the latter case, √s = 8 TeV pp collision data are also used for the interpretation of the results. The results are also interpreted for the first time in terms of light scalar particles that could contribute to the accelerating expansion of the universe (dark energy).
A search for a pair of neutral, scalar bosons, with each decaying into two W bosons, is presented using 36.1 fb⁻¹ of proton–proton collision data at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider. This search uses three production models: non-resonant and resonant Higgs boson pair production, and resonant production of a pair of heavy scalar particles. Three final states, classified by the number of leptons, are analysed: two same-sign leptons, three leptons, and four leptons. No significant excess over the expected Standard Model backgrounds is observed. An observed (expected) 95% confidence-level upper limit of 160 (120) times the Standard Model prediction of the non-resonant Higgs boson pair production cross-section is set from a combined analysis of the three final states. Upper limits are set on the production cross-section times branching ratio of a heavy scalar X decaying into a Higgs boson pair in the mass range 260 GeV ≤ mX ≤ 500 GeV; the observed (expected) limits range from 9.3 (10) pb to 2.8 (2.6) pb. Upper limits are set on the production cross-section times branching ratio of a heavy scalar X decaying into a pair of heavy scalars S for mass ranges 280 GeV ≤ mX ≤ 340 GeV and 135 GeV ≤ mS ≤ 165 GeV; the observed (expected) limits range from 2.5 (2.5) pb to 0.16 (0.17) pb.
The accurate simulation of additional interactions at the ATLAS experiment for the analysis of proton–proton collisions delivered by the Large Hadron Collider presents a significant challenge to the computing resources. During the LHC Run 2 (2015–2018), there were up to 70 inelastic interactions per bunch crossing, which need to be accounted for in Monte Carlo (MC) production. In this document, a new method to account for these additional interactions in the simulation chain is described. Instead of sampling the inelastic interactions and adding their energy deposits to a hard-scatter interaction one-by-one, the inelastic interactions are presampled, independent of the hard scatter, and stored as combined events. Consequently, for each hard-scatter interaction, only one such presampled event needs to be added as part of the simulation chain. For the Run 2 simulation chain, with an average of 35 interactions per bunch crossing, this new method provides a substantial reduction in MC production CPU needs of around 20%, while reproducing the properties of the reconstructed quantities relevant for physics analyses with good accuracy.
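The contrast between the two overlay strategies can be sketched schematically (this is conceptual pseudocode in Python, not ATLAS software; all function and field names are invented): the legacy approach re-samples μ pile-up interactions per hard-scatter event, while the new approach draws one event from a presampled library.

```python
import random

def simulate_minbias():
    """Stand-in for the expensive simulation of one inelastic interaction."""
    return {"energy_deposits": random.random()}

def overlay_per_event(hard_scatter, mu=35):
    """Legacy approach: sample mu pile-up interactions afresh for every
    hard-scatter event (simulation cost scales with mu per event)."""
    pileup = [simulate_minbias() for _ in range(mu)]
    return {"hard_scatter": hard_scatter, "pileup": pileup}

def presample_pileup(n_events, mu=35):
    """New approach, step 1: build a library of combined pile-up events
    once, independent of any hard scatter."""
    return [[simulate_minbias() for _ in range(mu)] for _ in range(n_events)]

def overlay_presampled(hard_scatter, presampled_library):
    """New approach, step 2: per hard-scatter event, add only one
    presampled combined event."""
    return {"hard_scatter": hard_scatter,
            "pileup": random.choice(presampled_library)}

library = presample_pileup(n_events=100)
event = overlay_presampled({"process": "ttbar"}, library)
print(len(event["pileup"]))  # 35 interactions, attached as one combined event
```

The saving comes from amortising the pile-up simulation over the library rather than repeating it per hard-scatter event.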
Significance: Peripheral pitting edema is a clinician-administered measure for grading edema. Peripheral edema is graded 0, 1+, 2+, 3+, or 4+, but subjectivity is a major limitation of this technique. A pilot clinical study of short-wave infrared (SWIR) molecular chemical imaging (MCI) effectiveness as an objective, non-contact, quantitative peripheral edema measure is underway.
Aim: We explore whether SWIR MCI can differentiate populations with and without peripheral edema. Further, we evaluate the technology's ability to correctly stratify subjects with peripheral edema.
Approach: SWIR MCI of shins from healthy subjects and heart failure (HF) patients was performed. Partial least squares discriminant analysis (PLS-DA) was used to discriminate the two populations. PLS regression (PLSR) was applied to assess the ability of MCI to grade edema.
Results: Average spectra from edema exhibited higher water absorption than non-edema spectra. SWIR MCI differentiated healthy volunteers from a population representing all pitting edema grades with 97.1% accuracy (N = 103 shins). Additionally, SWIR MCI correctly classified shin pitting edema levels in patients with 81.6% accuracy.
Conclusions: Our study successfully achieved the two primary endpoints. Application of SWIR MCI to monitor patients while actively receiving HF treatment is necessary to validate SWIR MCI as an HF monitoring technology.