Abstract
The Exa.TrkX project presents a graph neural network (GNN) technique for low-level reconstruction of neutrino interactions in a Liquid Argon Time Projection Chamber (LArTPC). GNNs are still a relatively novel technique, and have shown great promise for similar reconstruction tasks at the Large Hadron Collider (LHC). Graphs describing particle interactions are formed by treating each detector hit as a node, with edges describing the relationships between hits. We utilise a multi-head attention message passing network which performs graph convolutions to label each node with a particle type.
We present an updated variant of our GNN architecture with several improvements. After testing the model on more realistic simulation containing regions of unresponsive wires, we modified the target from edge classification to node classification in order to increase robustness. Removing edges as a classification target opens up a broader possibility space for edge-forming techniques; we explore the model’s performance across a variety of approaches, such as Delaunay triangulation, kNN, and radius-based methods, as sketched below. We also extend this model to the 3D context, sharing information between detector views. By using reconstructed 3D spacepoints to map detector hits from each wire plane, the model natively constructs 2D representations that are independent yet fully consistent.
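As a concrete illustration of the edge-forming strategies named above, the following sketch builds a hit graph from toy 2D hit coordinates using Delaunay, kNN, and radius-based edges, then applies one attention-weighted aggregation step. This is a minimal NumPy/SciPy mock-up, not the Exa.TrkX code: the hit positions, node features, and attention scores are random stand-ins, and the function names are mine.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

hits = np.random.rand(200, 2)            # toy (wire, time) coordinates of 200 hits

def delaunay_edges(x):
    tri = Delaunay(x)
    e = set()
    for simplex in tri.simplices:        # each triangle contributes 3 edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            e.add((a, b))
    return np.array(list(e))

def knn_edges(x, k=4):
    _, idx = cKDTree(x).query(x, k=k + 1)    # nearest neighbour is the hit itself
    return np.array([(i, j) for i, row in enumerate(idx) for j in row[1:]])

def radius_edges(x, r=0.1):
    return np.array(list(cKDTree(x).query_pairs(r)))

edges = knn_edges(hits)                  # try delaunay_edges(hits) or radius_edges(hits)

# One attention-weighted aggregation step: each node gathers neighbour features
# with softmax-style weights; a trained model would learn these scores.
feats = np.random.rand(len(hits), 8)     # per-hit input features
w = np.exp(np.random.rand(len(edges)))   # stand-in attention weights
out = np.zeros_like(feats)
norm = np.zeros(len(hits))
for (src, dst), s in zip(edges, w):
    out[dst] += s * feats[src]
    norm[dst] += s
out /= np.maximum(norm, 1e-9)[:, None]   # normalised neighbour average per node
```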
The ATLAS Event Service (ES) implements a new fine-grained approach to HEP event processing, designed to be agile and efficient in exploiting transient, short-lived resources such as HPC hole-filling, spot market commercial clouds, and volunteer computing. Input and output control and data flows, bookkeeping, monitoring, and data storage are all managed at the event level in an implementation capable of supporting ATLAS-scale distributed processing throughputs (about 4M CPU-hours/day). Input data flows utilize remote data repositories with no data locality or pre-staging requirements, minimizing the use of costly storage in favor of strongly leveraging powerful networks. Object stores provide a highly scalable means of remotely storing the quasi-continuous, fine-grained outputs that give ES-based applications a very light data footprint on a processing resource, and ensure negligible losses should the resource suddenly vanish. We will describe the motivations for the ES system, its unique features and capabilities, its architecture and the highly scalable tools and technologies employed in its implementation, and its applications in ATLAS processing on HPCs, commercial cloud resources, volunteer computing, and grid resources. Notice: This manuscript has been authored by employees of Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy. The publisher by accepting the manuscript for publication acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes.
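The event-level dispatch-and-stream pattern described above can be reduced to a small sketch. This is a toy illustration, not the ATLAS/PanDA API: fetch_event_range and object_store_put are hypothetical stand-ins for the dispatcher call and an S3-style object-store PUT.

```python
import queue
import uuid

def fetch_event_range(dispatcher):
    """Hypothetical stand-in for asking the dispatcher for more work."""
    try:
        return dispatcher.get_nowait()       # e.g. events [100, 200) of a task
    except queue.Empty:
        return None                          # no more work: drain and exit

def object_store_put(key, payload):
    """Stand-in for an S3-style PUT to a highly scalable object store."""
    print(f"PUT {key} ({len(payload)} bytes)")

# The dispatcher hands out fine-grained event ranges rather than whole files.
dispatcher = queue.Queue()
for first in range(0, 1000, 100):
    dispatcher.put((first, first + 100))

job = f"job-{uuid.uuid4().hex[:8]}"
while (rng := fetch_event_range(dispatcher)) is not None:
    first, last = rng
    for evt in range(first, last):
        result = f"output-for-event-{evt}".encode()
        # Outputs stream out per event, so a vanishing node loses at most
        # the event in flight, not hours of accumulated work.
        object_store_put(f"{job}/evt{evt:06d}", result)
```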
As computation moves into the post-Moore's-law era, novel architectures continue to emerge. With composite, multi-million-connection neuromorphic chips like IBM's TrueNorth, neural engineering has now become a feasible technology in this novel computing paradigm. High Energy Physics experiments are continuously exploring new methods of computation and data handling, including neuromorphic computing, to support the growing challenges of the field and be prepared for future commodity computing trends. This work details the first instance of a Kalman filter implementation in IBM's neuromorphic architecture, TrueNorth, for both parallel and serial spike trains. The implementation is tested on multiple simulated systems and its performance is evaluated with respect to an equivalent non-spiking Kalman filter. The limits of the implementation are explored whilst varying the size of the weight and threshold registers, the number of spikes used to encode a state, the size of the neuron block for spatial encoding, and the neuron potential reset schemes.
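For reference, the non-spiking baseline mentioned above is the standard predict/update Kalman recursion. Below is a minimal NumPy version for an assumed 1D constant-velocity system; the matrices F, H, Q, and R are illustrative choices, not those used in the paper.

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.zeros((2, 1))                     # state estimate
P = np.eye(2)                            # estimate covariance

def kalman_step(x, P, z):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
truth = np.cumsum(np.full(50, 0.5))      # object moving at 0.5 per step
for z in truth + rng.normal(0.0, 0.5, 50):
    x, P = kalman_step(x, P, np.array([[z]]))
print("final position estimate:", x[0, 0])
```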
The ATLAS experiment has successfully used its Gaudi Athena software framework for data taking and analysis during the first LHC run, with billions of events successfully processed. However, the design of Gaudi Athena dates from the early 2000s, and the software and the physics code have been written using a single-threaded, serial design. This programming model has increasing difficulty in exploiting the potential of current CPUs, which offer their best performance only when taking full advantage of multiple cores and wide vector registers. Future CPU evolution will intensify this trend, with core counts increasing and memory per core falling. With current memory consumption for 64-bit ATLAS reconstruction in a high-luminosity environment approaching 4 GB, it will become impossible to fully occupy all cores in a machine without exhausting the available memory. However, since maximizing performance per watt will be a key metric, a mechanism must be found to use all cores as efficiently as possible. In this paper we report on our progress with a practical demonstration of the use of multithreading in the ATLAS reconstruction software, using the GaudiHive framework. We have expanded support to the Calorimeter, Inner Detector, and Tracking code, discussing what changes were necessary, both to the framework and to the tools and algorithms used, in order to allow the serially designed ATLAS code to run. We report on the performance gains, the general lessons learned about the code patterns that had been employed in the software, and which patterns were identified as particularly problematic for multi-threading. We also present our findings on implementing a hybrid multi-threaded, multi-process framework, to take advantage of the strengths of each type of concurrency while avoiding some of their corresponding limitations.
The ATLAS Event Service (AES) has been designed and implemented for efficient running of ATLAS production workflows on a variety of computing platforms, ranging from conventional Grid sites to opportunistic, often short-lived resources such as spot market commercial clouds, supercomputers, and volunteer computing. The Event Service architecture allows real-time delivery of fine-grained workloads to running payload applications, which process dispatched events or event ranges and immediately stream the outputs to highly scalable object stores. Thanks to its agile and flexible architecture, the AES is currently being used by grid sites for assigning low-priority workloads to otherwise idle computing resources, for harvesting HPC resources in an efficient back-fill mode, and for massively scaling out to the 50-100k concurrent-core level on the Amazon spot market to efficiently utilize those transient resources for peak production needs. Platform ports in development include ATLAS@Home (BOINC), the Google Compute Engine, and a growing number of HPC platforms. After briefly reviewing the concept and the architecture of the Event Service, we will report the status and experience gained in AES commissioning and production operations on supercomputers, and our plans for extending ES applications beyond Geant4 simulation to other workflows, such as reconstruction and data analysis.
ATLAS's current software framework, Gaudi Athena, has been very successful for the experiment in LHC Runs 1 and 2. However, its single-threaded design has been recognised for some time to be increasingly problematic as CPUs have increased core counts and decreased available memory per core. Even the multi-process version of Athena, AthenaMP, will not scale to the range of architectures we expect to use beyond Run 2. In 2014, ATLAS examined the requirements on an updated multi-threaded framework and laid out plans for a new framework, including better support for High Level Trigger use cases. In this paper we report on our progress in developing the new multi-threaded task-parallel extension of Athena, AthenaMT. Implementing AthenaMT has required many significant code changes. Progress has been made in updating key concepts of the framework, allowing different levels of thread safety in algorithmic code. Substantial advances have also been made in implementing a data-flow-centric design, as well as in the development of the new 'event views' infrastructure. These event views support partial event processing and are an essential component in supporting the High Level Trigger's processing of certain regions of interest. A major effort has also been invested in producing an early version of AthenaMT that can run simulation on many-core architectures, which has augmented the understanding gained from work on earlier ATLAS demonstrators.
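The data-flow-centric idea can be illustrated with a toy scheduler: algorithms declare what they read and write, and any algorithm whose inputs are already in the event store may run, so independent algorithms execute concurrently. This sketch is not GaudiHive or AthenaMT code; the algorithm names and the thread-pool scheduler are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

class Alg:
    def __init__(self, name, reads, writes):
        self.name, self.reads, self.writes = name, set(reads), set(writes)
    def execute(self, store):
        # a real algorithm would compute; here we just produce named products
        return {k: f"made-by-{self.name}" for k in self.writes}

algs = [Alg("CaloReco",  [], ["CaloCells"]),
        Alg("TrackReco", [], ["Tracks"]),
        Alg("Combined",  ["CaloCells", "Tracks"], ["Particles"])]

store, scheduled, pending = {}, [], set()
with ThreadPoolExecutor(max_workers=4) as pool:
    while len(scheduled) < len(algs) or pending:
        # launch every algorithm whose declared inputs are now available
        for a in algs:
            if a not in scheduled and a.reads <= store.keys():
                scheduled.append(a)
                pending.add(pool.submit(a.execute, store))
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for f in done:
            store.update(f.result())     # new products may unblock others
print(store)  # CaloReco and TrackReco ran concurrently; Combined ran last
```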
The mass of the Higgs boson is measured in the H→ZZ⁎→4ℓ and in the H→γγ decay channels with 36.1 fb⁻¹ of proton–proton collision data from the Large Hadron Collider at a centre-of-mass energy of 13 TeV recorded by the ATLAS detector in 2015 and 2016. The measured value in the H→ZZ⁎→4ℓ channel is mH = 124.79 ± 0.37 GeV, while the measured value in the H→γγ channel is mH = 124.93 ± 0.40 GeV. Combining these results with the ATLAS measurement based on 7 and 8 TeV proton–proton collision data yields a Higgs boson mass of mH = 124.97 ± 0.24 GeV.
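As a rough cross-check (ignoring correlations between the two channels), an inverse-variance-weighted average of the two Run 2 values gives mH ≈ (124.79/0.37² + 124.93/0.40²)/(1/0.37² + 1/0.40²) ≈ 124.86 GeV, with uncertainty (1/0.37² + 1/0.40²)^(−1/2) ≈ 0.27 GeV. The quoted combination is tighter and slightly different because it also folds in the 7 and 8 TeV measurement and treats correlated systematic uncertainties properly.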
The standard model of particle physics describes the known fundamental particles and forces that make up our Universe, with the exception of gravity. One of the central features of the standard model is a field that permeates all of space and interacts with fundamental particles. The quantum excitation of this field, known as the Higgs field, manifests itself as the Higgs boson, the only fundamental particle with no spin. In 2012, a particle with properties consistent with the Higgs boson of the standard model was observed by the ATLAS and CMS experiments at the Large Hadron Collider at CERN. Since then, more than 30 times as many Higgs bosons have been recorded by the ATLAS experiment, enabling much more precise measurements and new tests of the theory. Here, on the basis of this larger dataset, we combine an unprecedented number of production and decay processes of the Higgs boson to scrutinize its interactions with elementary particles. Interactions with gluons, photons, and W and Z bosons (the carriers of the strong, electromagnetic and weak forces) are studied in detail. Interactions with three third-generation matter particles (bottom (b) and top (t) quarks, and tau leptons (τ)) are well measured, and indications of interactions with a second-generation particle (muons, μ) are emerging. These tests reveal that the Higgs boson discovered ten years ago is remarkably consistent with the predictions of the theory and provide stringent constraints on many models of new phenomena beyond the standard model.
The NA48 experiment at CERN has performed a new measurement of direct CP violation, based on data taken in 1997 by simultaneously collecting K_L and K_S decays into π⁰π⁰ and π⁺π⁻. The result for the CP violating parameter Re(ε′/ε) is (18.5 ± 4.5(stat) ± 5.8(syst)) × 10⁻⁴.
Jet substructure observables have significantly extended the search program for physics beyond the standard model at the Large Hadron Collider. The state-of-the-art tools have been motivated by theoretical calculations, but there has never been a direct comparison between data and calculations of jet substructure observables that are accurate beyond leading-logarithm approximation. Such observables are significant not only for probing the collinear regime of QCD that is largely unexplored at a hadron collider, but also for improving the understanding of jet substructure properties that are used in many studies at the Large Hadron Collider. This Letter documents a measurement of the first jet substructure quantity at a hadron collider to be calculated at next-to-next-to-leading-logarithm accuracy. The normalized differential cross section is measured as a function of log₁₀ρ², where ρ is the ratio of the soft-drop mass to the ungroomed jet transverse momentum. This quantity is measured in dijet events from 32.9 fb⁻¹ of √s = 13 TeV proton-proton collisions recorded by the ATLAS detector. The data are unfolded to correct for detector effects and compared to precise QCD calculations and leading-logarithm particle-level Monte Carlo simulations.
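To make the observable concrete: soft-drop grooming declusters a jet and discards the softer branch until a splitting satisfies min(pT1, pT2)/(pT1 + pT2) > zcut·(ΔR12/R0)^β, and ρ is then the groomed mass divided by the ungroomed jet pT. The sketch below is an illustrative toy on a hand-built two-prong clustering tree, not the analysis code; a real measurement would decluster a Cambridge/Aachen tree from a jet clustering library such as FastJet.

```python
import math

class Jet:
    def __init__(self, pt, eta, phi, mass, parents=None):
        self.pt, self.eta, self.phi, self.mass = pt, eta, phi, mass
        self.parents = parents            # (Jet, Jet) from declustering, or None

def delta_r(a, b):
    dphi = math.remainder(a.phi - b.phi, 2 * math.pi)   # wrap to [-pi, pi]
    return math.hypot(a.eta - b.eta, dphi)

def soft_drop(jet, zcut=0.1, beta=0.0, R0=0.8):
    # walk down the clustering tree, dropping the softer branch until the
    # splitting is hard enough: min(pt1, pt2)/(pt1 + pt2) > zcut*(dR/R0)**beta
    while jet.parents is not None:
        p1, p2 = jet.parents
        z = min(p1.pt, p2.pt) / (p1.pt + p2.pt)
        if z > zcut * (delta_r(p1, p2) / R0) ** beta:
            return jet                    # soft-drop condition satisfied
        jet = p1 if p1.pt > p2.pt else p2 # follow the harder branch
    return jet

# toy jet: one soft branch sits above a hard core, so grooming removes it
hard = Jet(480.0, 0.01, 0.02, 30.0)
soft = Jet(20.0, 0.40, 0.50, 5.0)
ungroomed = Jet(500.0, 0.0, 0.0, 55.0, parents=(hard, soft))
groomed = soft_drop(ungroomed)
rho2 = (groomed.mass / ungroomed.pt) ** 2
print("log10(rho^2) =", math.log10(rho2))
```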