Insight into the electroweak (EW) and Higgs sectors can be achieved through measurements of vector boson scattering (VBS) processes. The scattering of EW bosons is a rare process that is precisely predicted in the Standard Model (SM) and is closely related to the Higgs mechanism. Modifications to VBS processes are also predicted in models of physics beyond the SM (BSM), for example through changes to the Higgs boson couplings to gauge bosons and the resonant production of new particles. In this review, experimental results and theoretical developments of VBS at the Large Hadron Collider, its high-luminosity upgrade, and future colliders are presented.
The High Luminosity Large Hadron Collider (HL-LHC) at CERN will involve a significant increase in the complexity and sheer size of data with respect to the current LHC experimental complex. Hence, the task of reconstructing particle trajectories will become more involved due to the number of simultaneous collisions and the resulting increased detector occupancy. To identify the particle paths, machine learning techniques such as graph neural networks are being explored in the HEP.TrkX project and its successor, the Exa.TrkX project. Both show promising results and reduce the combinatorial nature of the problem. Our team has previously demonstrated that quantum graph neural networks can successfully reconstruct particle tracks from detector hits. A higher overall accuracy is gained by representing the training data in a meaningful way within an embedded space; this has been included in the Exa.TrkX project by applying a classical multilayer perceptron (MLP). As a result, pairs of hits belonging to different trajectories are pushed apart, while those belonging to the same trajectory stay close together. We explore the applicability of variational quantum circuits with a relatively low number of qubits, suitable for NISQ devices, to this embedding task, and show preliminary results.
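The embedding objective described above, pulling same-track hit pairs together while pushing different-track pairs apart, can be illustrated with a minimal classical sketch. The toy 3D hits, the one-layer network, and the hinge-style contrastive loss below are all illustrative assumptions, not the actual Exa.TrkX implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(hits, W, b):
    """Toy one-layer embedding network: hits (N, d_in) -> (N, d_emb)."""
    return np.tanh(hits @ W + b)

def contrastive_loss(emb_a, emb_b, same_track, margin=1.0):
    """Hinge-style pairwise loss: pull hits of the same track together,
    push hits of different tracks at least `margin` apart."""
    d = np.linalg.norm(emb_a - emb_b, axis=1)
    attract = same_track * d**2
    repel = (1 - same_track) * np.maximum(0.0, margin - d) ** 2
    return np.mean(attract + repel)

# Hypothetical data: 3D hit coordinates and pair labels (1 = same track).
hits_a = rng.normal(size=(8, 3))
hits_b = rng.normal(size=(8, 3))
labels = rng.integers(0, 2, size=8)

W = rng.normal(scale=0.1, size=(3, 2))  # embed into a 2D space
b = np.zeros(2)
loss = contrastive_loss(embed(hits_a, W, b), embed(hits_b, W, b), labels)
print(loss)
```

Minimising such a loss over `W` and `b` shapes the embedded space so that a cheap nearest-neighbour query can propose track candidates; a variational quantum circuit would replace the classical `embed` step.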
Accurate determination of particle track reconstruction parameters will be a major challenge for the High Luminosity Large Hadron Collider (HL-LHC) experiments. The expected increase in the number of simultaneous collisions at the HL-LHC and the resulting high detector occupancy will make track reconstruction algorithms extremely demanding in terms of time and computing resources. The increase in the number of hits will increase the complexity of track reconstruction algorithms. In addition, the ambiguity in assigning hits to particle tracks will grow due to the finite resolution of the detector and the physical “closeness” of the hits. Thus, the reconstruction of charged particle tracks will be a major challenge to the correct interpretation of the HL-LHC data. Most methods currently in use are based on Kalman filters, which have been shown to be robust and to provide good physics performance. However, they are expected to scale worse than quadratically. An algorithm capable of reducing the combinatorial background at the hit level would provide a much “cleaner” initial seed to the Kalman filter, strongly reducing the total processing time. One of the salient features of quantum computers is their ability to evaluate a very large number of states simultaneously, making them an ideal instrument for searches in a large parameter space. In fact, several R&D initiatives are exploring how quantum tracking algorithms could leverage such capabilities. In this paper, we present our work on the implementation of a quantum-based track finding algorithm aimed at reducing the combinatorial background during the initial seeding stage. We use the publicly available dataset designed for the Kaggle TrackML challenge.
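The combinatorial reduction targeted at the seeding stage can be pictured classically: a simple geometric compatibility cut on hit doublets already discards most random pairings before any track fit. The toy sketch below assumes an (r, z) detector geometry with invented hit positions and cut values; it illustrates the seeding bottleneck, not the quantum algorithm itself:

```python
import itertools

def compatible_doublets(layer1, layer2, max_slope=0.3, max_z0=10.0):
    """Keep only hit pairs (r, z) on adjacent layers whose z/r slope is
    consistent with a track pointing back near the beamline (toy cuts)."""
    doublets = []
    for (r1, z1), (r2, z2) in itertools.product(layer1, layer2):
        slope = (z2 - z1) / (r2 - r1)
        z0 = z1 - slope * r1  # extrapolated z at r = 0
        if abs(z0) < max_z0 and abs(slope) < max_slope:
            doublets.append(((r1, z1), (r2, z2)))
    return doublets

# Hypothetical hits on two barrel layers, coordinates in mm.
layer1 = [(30.0, 1.0), (30.0, 50.0)]
layer2 = [(60.0, 2.0), (60.0, 90.0)]
seeds = compatible_doublets(layer1, layer2)
print(len(seeds), "of", len(layer1) * len(layer2), "doublets kept")  # prints: 1 of 4 doublets kept
```

The surviving doublets form a much smaller candidate set handed to the downstream fit, which is the role the quantum seeding algorithm is meant to play at scale.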
The Large Hadron Collider (LHC) at the European Organisation for Nuclear Research (CERN) will be upgraded to further increase the instantaneous rate of particle collisions (luminosity), becoming the High Luminosity LHC (HL-LHC). This increase in luminosity will significantly increase the number of particles interacting with the detector; each such interaction is referred to as a “hit”. The HL-LHC will yield many more detector hits, posing a combinatorial challenge for the reconstruction algorithms that determine particle trajectories from those hits. This work explores the possibility of converting a novel graph neural network model, which can optimally take into account the sparse nature of the tracking detector data and their complex geometry, into a hybrid quantum-classical graph neural network that benefits from using variational quantum layers. We show that this hybrid model can perform similarly to the classical approach. We also explore parametrized quantum circuits (PQCs) with different expressibility and entangling capacities, and compare their training performance in order to quantify the expected benefits. These results can be used to build a future road map for further development of circuit-based hybrid quantum-classical graph neural networks.
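Expressibility and entangling capacity are properties of the parametrized circuits themselves. As a minimal sketch, a single hardware-efficient layer (RY rotations followed by a CNOT entangler) can be simulated directly with NumPy; this generic two-qubit ansatz and its Z-expectation readout are common illustrative choices, not necessarily the circuit family studied in the paper:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT in the |q0 q1> basis: flips qubit 1 when qubit 0 is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def pqc_state(params):
    """One hardware-efficient layer (RY on each qubit, then CNOT)
    applied to the |00> state."""
    state = np.zeros(4)
    state[0] = 1.0
    layer = np.kron(ry(params[0]), ry(params[1]))
    return CNOT @ (layer @ state)

def z0_expectation(state):
    """Expectation value of Z on the first qubit, a typical PQC output
    fed into the surrounding classical network."""
    probs = np.abs(state) ** 2
    return probs[0] + probs[1] - probs[2] - probs[3]

print(z0_expectation(pqc_state([0.0, 0.0])))  # untrained circuit: prints 1.0
```

Stacking more such layers, or choosing different rotation/entangler patterns, changes how much of the two-qubit state space the circuit can reach, which is what the expressibility and entangling-capacity comparisons quantify.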
In the last year, ATLAS has radically updated its software development infrastructure, hugely reducing the complexity of building releases and greatly improving build speed, flexibility and code testing. The first step in this transition was the adoption of CMake as the software build system in place of the older CMT. This required the development of an automated translation from the old system to the new, followed by extensive testing and improvements. The result was a far more standard build process, married to the existing method of building ATLAS software as a series of 12 separate projects from Subversion. We then proceeded with a migration of the code base from Subversion to Git. As the Subversion repository had been structured to manage each package more or less independently, there was no simple mapping that could be used to manage the migration into Git. Instead, a specialist set of scripts that captured the software changes across official software releases was developed. With some clean-up of the repository and a policy of only migrating packages in production releases, we managed to reduce the repository size from 62 GiB to 220 MiB. After moving to Git we took the opportunity to introduce continuous integration, so that each code change from developers is now built and tested before being approved. With both CMake and Git in place, we also dramatically simplified the build management of ATLAS software. Many heavyweight home-grown tools were dropped, and the build procedure was reduced to a single bootstrap of some external packages, followed by a full build of the rest of the stack. This has reduced the build time by a factor of 2. It is now easy to build ATLAS software, freeing developers to test compile-intrusive changes or new platform ports with ease. We have also developed a system to build lightweight ATLAS releases for simulation, analysis or physics derivations, which can be built from the same branch.
Run 2 of the Large Hadron Collider (LHC) will provide new challenges to track and vertex reconstruction, with higher energies, denser jets and higher rates. The ATLAS experiment has therefore constructed the first 4-layer pixel detector in HEP by installing a new pixel layer, the Insertable B-Layer (IBL), in May 2014 at a radius of 3.3 cm between the existing Pixel Detector and a new smaller-radius beam pipe. The new detector, built to cope with the high radiation and expected occupancy, is the first large-scale application of 3D sensors and 130 nm CMOS readout electronics. In addition, the Pixel Detector was improved with a new service quarter panel, recovering about 3% of defective modules lost during Run 1, and a new optical readout system to read out the data at higher speed while reducing the occupancy when running at increased luminosity. Complementing these detector improvements, many improvements to Inner Detector track and vertex reconstruction were developed during the two-year shutdown of the LHC. These include novel techniques to improve the performance in the dense cores of jets, optimisation for the expected conditions, and a software campaign which led to a factor-of-three decrease in the CPU time needed to process each recorded event.
We study rare processes of the standard model of particle physics (SM) in events with missing transverse energy ($E_T^{miss}$), no leptons, and two or three jets, of which at least one is identified as originating from a b-quark ($E_T^{miss}$ + b-jets signature). We present a search for the SM Higgs boson produced in association with a W or Z boson, where the Higgs decays into $b\bar{b}$. We consider the scenario where Z → νν, or W → lν and the lepton escapes detection. This dissertation analyzes 7.8 fb$^{-1}$ of data collected by the CDF II experiment at Fermilab. For the first time, we analyze events with relaxed kinematic requirements, yielding an increase of 30–40% in acceptance to the WH/ZH signal. We collect events from three different triggers and parametrize the efficiency of their logical combination (OR) using a novel artificial neural network (NN) technique. To increase the sensitivity to the signal, we implement an NN to remove the large instrumental background. An additional NN is used to discriminate the Higgs signal from the remaining background. We check our background modeling by comparing data against backgrounds in many control regions, and find good agreement. Observing no significant excess in the data, we place 95% confidence level (C.L.) upper limits on the Higgs boson production cross section. For a mass of 115 GeV/c$^2$, the expected (observed) limit is 2.9 (2.3) times the standard model prediction. Compared to the previous iteration of this analysis, this result improves the significance by 10% throughout the 100–150 GeV/c$^2$ mass range, making it one of the most sensitive searches at the Tevatron in this mass range. We cross-check the tools developed in this dissertation by measuring the cross sections of top pair, electroweak single top, and diboson (WZ + ZZ) production.
We study the sensitivity to longitudinal vector boson scattering at a 27, 50 and 100 TeV $pp$ collider using events containing two leptonically-decaying same-electric-charge $W$ bosons produced in association with two jets.