Abstract
A detailed geometry description is essential to any high-quality track reconstruction application. In current C++ based track reconstruction software libraries this is often achieved by an object-oriented, polymorphic geometry description that implements different shapes and objects by extending a common base class. Such a design, however, has been shown to be problematic when attempting to adapt these applications to run on heterogeneous computing hardware, particularly on hardware accelerators. We present detray, a compile-time polymorphic and yet accurate track reconstruction geometry description which is part of the ACTS parallelization R&D effort. detray is built as an index-based geometry description with a shallow memory layout that uses variadic template programming to allow custom shapes and intersection algorithms rather than inheritance from abstract base classes. It is designed to serve as a potential geometry and navigation backend for ACTS and as such implements the ACTS navigation model of boundary portals and purely surface-based geometric entities. detray is designed to work with a dedicated memory management library and thus can be instantiated as a geometry model in both host and device code.
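The compile-time polymorphism described above can be sketched with standard C++. The following is a minimal illustration, not the actual detray API: the type names (`rectangle2D`, `ring2D`, `surface`) and the containment-check interface are hypothetical, but the pattern — shapes as unrelated value types held in a `std::variant`, referenced by index, dispatched with `std::visit` instead of virtual calls — is the technique the abstract names.

```cpp
#include <cstddef>
#include <variant>
#include <vector>

// Hypothetical shape types: each implements its own containment check
// without inheriting from a common abstract base class.
struct rectangle2D {
    double half_x, half_y;
    bool is_inside(double x, double y) const {
        return x >= -half_x && x <= half_x && y >= -half_y && y <= half_y;
    }
};

struct ring2D {
    double r_min, r_max;
    bool is_inside(double x, double y) const {
        const double r2 = x * x + y * y;
        return r2 >= r_min * r_min && r2 <= r_max * r_max;
    }
};

// The set of admissible shapes is fixed at compile time in the variant.
using mask_t = std::variant<rectangle2D, ring2D>;

// A surface refers to its shape by index into a flat container, giving a
// shallow memory layout instead of pointers to heap-allocated base objects.
struct surface {
    std::size_t mask_index;
};

// Dispatch is resolved by std::visit (a jump table), not a vtable lookup.
bool inside(const std::vector<mask_t>& masks, const surface& sf,
            double x, double y) {
    return std::visit([&](const auto& m) { return m.is_inside(x, y); },
                      masks[sf.mask_index]);
}
```

Because no virtual functions or owning pointers are involved, such a container of plain value types can be copied to accelerator memory and used identically in device code, which is the motivation given in the abstract.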
Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is developed with special emphasis on thread-safety to support parallel execution of the code, and data structures are optimised for vectorisation to speed up linear algebra operations. The implementation is agnostic to the details of the detection technologies and magnetic field configuration, which makes it applicable to many different experiments.
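The phrase "data structures are optimised for vectorisation" typically refers to layout choices such as structure-of-arrays storage. The sketch below is illustrative only — the struct and field names are hypothetical, not ACTS types — but it shows the layout trade-off under that assumption.

```cpp
#include <cstddef>
#include <vector>

// Array-of-structures: the fields of one track sit together, but the same
// field across many tracks is strided in memory, hindering SIMD loops.
struct track_aos {
    double phi, theta, qop;
};

// Structure-of-arrays: each parameter lives in its own contiguous array,
// so a loop over one parameter streams through memory and vectorises well.
struct track_soa {
    std::vector<double> phi, theta, qop;

    void push_back(double p, double t, double q) {
        phi.push_back(p);
        theta.push_back(t);
        qop.push_back(q);
    }
    std::size_t size() const { return phi.size(); }
};

// Example kernel: scale every q/p value (e.g. a unit conversion).
// The contiguous inner loop is a straightforward auto-vectorisation target.
void scale_qop(track_soa& tracks, double factor) {
    for (std::size_t i = 0; i < tracks.size(); ++i)
        tracks.qop[i] *= factor;
}
```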
The new ATLAS track reconstruction (NEWT) Cornelissen, T; Elsing, M; Gavrilenko, I ...
Journal of Physics: Conference Series, 07/2008, Volume 119, Issue 3
Journal Article
Peer reviewed
Open access
The track reconstruction of modern high energy physics experiments is a very complex task that puts stringent requirements onto the software realisation. The ATLAS track reconstruction software has in the past been dominated by a collection of individual packages, each of which incorporated a different intrinsic event data model, different data flow sequences and calibration data. Recently, the ATLAS track reconstruction has undergone a major design revolution to ensure maintainability during the long lifetime of the ATLAS experiment and the flexibility needed for the startup phase. The entire software chain has been re-organised in modular components and a common event data model has been deployed. A completely new track reconstruction that concentrates on common tools intended to be used by both ATLAS tracking devices, the Inner Detector and the Muon System, has been established. It has already been used during many large-scale tests with data from Monte Carlo simulation and from detector commissioning projects such as the combined test beam 2004 and cosmic ray events. This document concentrates on the technical and conceptual details of the newly developed track reconstruction.
A search for the dimuon decay of the Standard Model (SM) Higgs boson is performed using data corresponding to an integrated luminosity of 139 fb−1 collected with the ATLAS detector in Run 2 pp collisions at √s = 13 TeV at the Large Hadron Collider. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0σ (1.7σ). The observed upper limit on the cross section times branching ratio for pp→H→μμ is 2.2 times the SM prediction at 95% confidence level, while the expected limit on a H→μμ signal assuming the absence (presence) of a SM signal is 1.1 (2.0) times the SM prediction. The best-fit value of the signal strength parameter, defined as the ratio of the observed signal yield to the one expected in the SM, is μ = 1.2 ± 0.6.
The observation of Higgs boson production in association with a top quark pair (tt¯H), based on the analysis of proton–proton collision data at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider, is presented. Using data corresponding to integrated luminosities of up to 79.8 fb−1, and considering Higgs boson decays into bb¯, WW⁎, τ+τ−, γγ, and ZZ⁎, the observed significance is 5.8 standard deviations, compared to an expectation of 4.9 standard deviations. Combined with the tt¯H searches using a dataset corresponding to integrated luminosities of 4.5 fb−1 at 7 TeV and 20.3 fb−1 at 8 TeV, the observed (expected) significance is 6.3 (5.1) standard deviations. Assuming Standard Model branching fractions, the total tt¯H production cross section at 13 TeV is measured to be 670 ± 90 (stat.) +110/−100 (syst.) fb, in agreement with the Standard Model prediction.
FCC Physics Opportunities Altmannshofer, W.; Arsenyev, S. A.; Aune, S. ...
The European Physical Journal C, Particles and Fields, 2019, Volume 79, Issue 6
Journal Article
Peer reviewed
Open access
We review the physics opportunities of the Future Circular Collider, covering its e+e−, pp, ep and heavy ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.
This article documents the muon reconstruction and identification efficiency obtained by the ATLAS experiment for 139 fb−1 of pp collision data at √s = 13 TeV collected between 2015 and 2018 during Run 2 of the LHC. The increased instantaneous luminosity delivered by the LHC over this period required a reoptimisation of the criteria for the identification of prompt muons. Improved and newly developed algorithms were deployed to preserve high muon identification efficiency with a low misidentification rate and good momentum resolution. The availability of large samples of Z→μμ and J/ψ→μμ decays, and the minimisation of systematic uncertainties, allows the efficiencies of criteria for muon identification, primary vertex association, and isolation to be measured with an accuracy at the per-mille level in the bulk of the phase space, and up to the percent level in complex kinematic configurations. Excellent performance is achieved over a range of transverse momenta from 3 GeV to several hundred GeV, and across the full muon detector acceptance of |η| < 2.7.
In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm-level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine-grained routine- and instruction-level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms and an overview of the results is presented on a web page.
These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and by allowing improvements to be quantified in detail.
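The per-algorithm, per-event start/stop counting pattern described above can be sketched as follows. This is a self-contained stand-in, not the ATLAS instrumentation: where PAPI would read hardware counters (FLOPs, cache misses) around each algorithm's execution, this sketch accumulates wall-clock time instead, and the class and method names are hypothetical.

```cpp
#include <chrono>
#include <map>
#include <string>

// Accumulates a per-algorithm cost across many processed events, mirroring
// the start/stop-per-algorithm pattern used with PAPI hardware counters.
class algo_monitor {
    using clock = std::chrono::steady_clock;
    std::map<std::string, clock::duration> totals_;
    std::map<std::string, clock::time_point> open_;

public:
    // Call just before an algorithm runs on an event (PAPI_start analogue).
    void start(const std::string& algo) { open_[algo] = clock::now(); }

    // Call just after it finishes (PAPI_stop analogue); the delta is added
    // to the algorithm's running total.
    void stop(const std::string& algo) {
        totals_[algo] += clock::now() - open_[algo];
    }

    // Total accumulated cost for one algorithm, in seconds.
    double total_seconds(const std::string& algo) const {
        auto it = totals_.find(algo);
        return it == totals_.end()
                   ? 0.0
                   : std::chrono::duration<double>(it->second).count();
    }
};
```

In an event loop, `start("tracking")` / `stop("tracking")` would bracket each algorithm invocation, yielding the algorithm-level breakdown the abstract describes.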
A search for the electroweak production of charginos and sleptons decaying into final states with two electrons or muons is presented. The analysis is based on 139 fb−1 of proton–proton collisions recorded by the ATLAS detector at the Large Hadron Collider at √s = 13 TeV. Three R-parity-conserving scenarios where the lightest neutralino is the lightest supersymmetric particle are considered: the production of chargino pairs with decays via either W bosons or sleptons, and the direct production of slepton pairs. The analysis is optimised for the first of these scenarios, but the results are also interpreted in the others. No significant deviations from the Standard Model expectations are observed and limits at 95% confidence level are set on the masses of relevant supersymmetric particles in each of the scenarios. For a massless lightest neutralino, masses up to 420 GeV are excluded for the production of the lightest-chargino pairs assuming W-boson-mediated decays and up to 1 TeV for slepton-mediated decays, whereas for slepton-pair production masses up to 700 GeV are excluded assuming three generations of mass-degenerate sleptons.
The algorithms used by the ATLAS Collaboration during Run 2 of the Large Hadron Collider to identify jets containing b-hadrons are presented. The performance of the algorithms is evaluated in the simulation and the efficiency with which these algorithms identify jets containing b-hadrons is measured in collision data. The measurement uses a likelihood-based method in a sample highly enriched in tt¯ events. The topology of the t→Wb decays is exploited to simultaneously measure both the jet flavour composition of the sample and the efficiency in a transverse momentum range from 20 to 600 GeV. The efficiency measurement is subsequently compared with that predicted by the simulation. The data used in this measurement, corresponding to a total integrated luminosity of 80.5 fb−1, were collected in proton–proton collisions during the years 2015–2017 at a centre-of-mass energy √s = 13 TeV. By simultaneously extracting both the efficiency and jet flavour composition, this measurement significantly improves the precision compared to previous results, with uncertainties ranging from 1 to 8% depending on the jet transverse momentum.