Stitching Monte Carlo samples
Ehatäht, Karl; Veelken, Christian
The European Physical Journal C: Particles and Fields, 05/2022, Volume 82, Issue 5
Journal Article · Peer-reviewed · Open access
Monte Carlo (MC) simulations are extensively used for various purposes in modern high-energy physics (HEP) experiments. Precision measurements of established Standard Model processes or searches for new physics often require the collection of vast amounts of data. It is often difficult to produce MC samples containing an adequate number of events to allow for a meaningful comparison with the data, as substantial computing resources are required to produce and store such samples. One solution often employed when producing MC samples for HEP experiments is to partition the phase space of particle interactions into multiple regions and produce the MC samples separately for each region. This approach allows the size of the MC samples to be adapted to the needs of the physics analyses that are performed in these regions. In this paper we present a procedure for combining MC samples that overlap in phase space. The procedure is based on applying suitably chosen weights to the simulated events. We refer to the procedure as "stitching". The paper includes different examples of applying the procedure to simulated proton-proton collisions at the CERN Large Hadron Collider.
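The weighting idea can be illustrated with a minimal sketch. It assumes the common case of an inclusive sample overlapping with exclusive samples produced per phase-space bin; the function name and arguments are illustrative, not taken from the paper, and the full procedure handles more general overlaps.

```python
# Hedged sketch of stitching weights for overlapping MC samples.
# An inclusive sample (n_incl events) overlaps with exclusive samples
# produced separately for each phase-space bin j. Events in bin j are
# then effectively covered by both samples, so the per-event weight
# divides by the combined number of simulated events in that bin.
def stitching_weight(j, lumi, sigma, frac_incl, n_incl, n_excl):
    """Weight for an event in bin j, normalised to target luminosity `lumi`.

    sigma[j]     : cross section of bin j
    frac_incl[j] : fraction of inclusive-sample events falling into bin j
    n_incl       : total number of events in the inclusive sample
    n_excl[j]    : events in the exclusive sample for bin j (absent if none)
    """
    # Effective number of simulated events covering bin j:
    n_eff = n_incl * frac_incl[j] + n_excl.get(j, 0)
    return lumi * sigma[j] / n_eff
```

With this convention the stitched samples, summed with their weights, reproduce the inclusive cross section in every bin regardless of how many samples populate it.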
The analysis of vast amounts of data constitutes a major challenge in modern high energy physics experiments. Machine learning (ML) methods, typically trained on simulated data, are often employed to facilitate this task. Several choices need to be made by the user when training the ML algorithm. In addition to deciding which ML algorithm to use and choosing suitable observables as inputs, users typically need to choose among a plethora of algorithm-specific parameters. We refer to parameters that need to be chosen by the user as hyperparameters. These are to be distinguished from parameters that the ML algorithm learns autonomously during the training, without intervention by the user. The choice of hyperparameters is conventionally done manually by the user and often has a significant impact on the performance of the ML algorithm. In this paper, we explore two evolutionary algorithms, particle swarm optimization and a genetic algorithm, for choosing optimal hyperparameter values in an autonomous manner. Both algorithms are tested on different datasets and compared to alternative methods.
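As a minimal illustration of the particle swarm idea, the sketch below optimises a single continuous hyperparameter (say, a learning rate) against a hypothetical loss function; the coefficients and the `loss` callback are illustrative assumptions, not the paper's configuration.

```python
import random

# Hedged sketch: minimal particle swarm optimisation over one
# continuous hyperparameter in the interval [lo, hi].
def pso_minimise(loss, lo, hi, n_particles=10, n_iters=50, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    best = list(pos)                        # per-particle best positions
    best_val = [loss(x) for x in pos]
    g = min(range(n_particles), key=lambda i: best_val[i])
    gbest, gbest_val = best[g], best_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            # inertia + cognitive + social terms (standard PSO update)
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            v = loss(pos[i])
            if v < best_val[i]:
                best[i], best_val[i] = pos[i], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i], v
    return gbest, gbest_val
```

In practice each `loss` evaluation is a full ML training run, which is why the number of evaluations, and how well they parallelise, dominates the cost of hyperparameter optimisation.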
The polarimeter vector of the τ represents an optimal observable for the measurement of the τ spin. In this paper we present an algorithm for the computation of the τ polarimeter vector for the decay channels τ−→π−π+π−ντ and τ−→π−π0π0ντ. The algorithm is based on a model for the hadronic current in these decay channels, which was fitted to data recorded by the CLEO experiment [1].
Program Title: PolarimetricVectorTau2a1, version 1.0.1
CPC Library link to program files: https://doi.org/10.17632/z986tk5pyv.1
Developer's repository link: https://github.com/TTauSpin/PolarimetricVectorTau2a1
Licensing provisions: MIT
Programming language: C++ 11
Nature of problem: The polarimeter vector h of the τ can be used to measure the τ spin orientation. The vector h is a function of the momenta of the particles produced in the τ decay and needs to be computed in the rest frame of the τ lepton. While expressions for h exist in the literature for the decay channels τ−→π−ντ and τ−→π−π0ντ, no corresponding expressions exist for the channels τ−→π−π+π−ντ and τ−→π−π0π0ντ.
Solution method: In this paper, we present an algorithm for the computation of the τ polarimeter vector h for the decay channels τ−→π−π+π−ντ and τ−→π−π0π0ντ. The algorithm is based on a model for the dynamics of hadronic interactions in these decay channels. The parameters of the model have been determined by a fit to data recorded by the CLEO experiment.
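A prerequisite for any such computation is boosting the decay-product four-momenta into the τ rest frame, where h is defined. The sketch below shows the standard Lorentz boost for this step (four-vectors as (E, px, py, pz), natural units); it assumes a moving τ and is not code from the PolarimetricVectorTau2a1 package.

```python
import math

# Hedged sketch: boost a four-momentum p into the rest frame of the tau,
# whose lab-frame four-momentum is p_tau. Standard Lorentz boost with
# velocity beta = p_tau / E_tau; assumes |beta| > 0.
def boost_to_rest_frame(p, p_tau):
    E, px, py, pz = p
    Et, tx, ty, tz = p_tau
    bx, by, bz = tx / Et, ty / Et, tz / Et      # boost vector beta
    b2 = bx * bx + by * by + bz * bz
    gamma = 1.0 / math.sqrt(1.0 - b2)
    bp = bx * px + by * py + bz * pz            # beta . p
    coef = (gamma - 1.0) * bp / b2 - gamma * E
    return (gamma * (E - bp),
            px + coef * bx,
            py + coef * by,
            pz + coef * bz)
```

As a sanity check, boosting the τ four-momentum itself yields (m_τ, 0, 0, 0).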
When using machine learning (ML) techniques, users typically need to choose a plethora of algorithm-specific parameters, referred to as hyperparameters. In this paper, we compare the performance of two algorithms, particle swarm optimisation (PSO) and Bayesian optimisation (BO), for the autonomous determination of these hyperparameters in applications to different ML tasks typical for the field of high energy physics (HEP). Our evaluation of the performance includes a comparison of the capability of the PSO and BO algorithms to make efficient use of the highly parallel computing resources that are characteristic of contemporary HEP experiments.
We apply the matrix element method (MEM) to the search for non-resonant Higgs boson pair (HH) production in the channel HH → bb̄WW* at the LHC and study the separation between the HH signal and the large irreducible background, which arises from the production of top quark pairs (tt̄). Our study focuses on events containing two leptons (electrons or muons) in the final state. The separation between signal and background is studied for experimental conditions characteristic of the ATLAS and CMS experiments during LHC Run 2, using the DELPHES fast-simulation package. We find that the tt̄ background can be reduced to a level of 0.26% for a signal efficiency of 35%.
Searches for charged and neutral Higgs bosons in the context of the Minimal Supersymmetric Standard Model (MSSM) and in the context of the Next-to-Minimal Supersymmetric Standard Model (NMSSM) are presented. The analyses are based on proton-proton collision data recorded by the CMS experiment at √s = 7 TeV and √s = 8 TeV center-of-mass energy in 2011 and 2012, respectively, corresponding to integrated luminosities of up to 4.9 fb⁻¹ and 20.7 fb⁻¹. No evidence for further Higgs bosons in addition to the discovered SM-like Higgs boson of mass ≈ 125 GeV is found, and stringent exclusion limits are derived.
Identifying and reconstructing hadronic τ decays (τh) is an important task at current and future high-energy physics experiments, as τh represent an important tool to analyze the production of Higgs and electroweak bosons as well as to search for physics beyond the Standard Model. The identification of τh can be viewed as a generalization and extension of jet-flavour tagging, which has in recent years undergone significant progress due to the use of deep learning. Based on a granular simulation with realistic detector effects and a particle flow-based event reconstruction, we show in this paper that deep learning-based jet-flavour-tagging algorithms are powerful τh identifiers. Specifically, we show that jet-flavour-tagging algorithms such as LorentzNet and ParticleTransformer can be adapted in an end-to-end fashion for discriminating τh from quark and gluon jets. We find that the end-to-end transformer-based approach significantly outperforms contemporary state-of-the-art τh reconstruction and identification algorithms currently in use at the Large Hadron Collider.
An algorithm for the reconstruction of the Higgs mass in H → ττ decays is presented. The algorithm computes for each event a likelihood function P(Mττ) which quantifies the level of compatibility of a Higgs mass hypothesis Mττ with the measured momenta of the visible tau decay products plus the missing transverse energy reconstructed in the event. The algorithm is used in the CMS H → ττ analysis, where it is found to improve the sensitivity to discover the Standard Model Higgs boson in this decay channel by about 30%.
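The outer scan logic of such a likelihood-based mass reconstruction can be sketched in a few lines: evaluate a per-event compatibility on a grid of mass hypotheses and keep the most likely one. The `compatibility` callback and the grid limits below are illustrative stand-ins for the full likelihood used in the CMS analysis.

```python
# Hedged sketch: scan mass hypotheses and return the one maximising a
# per-event compatibility function P(M). `compatibility` is hypothetical.
def best_mass(compatibility, m_min=50.0, m_max=250.0, step=1.0):
    n = int((m_max - m_min) / step) + 1
    masses = [m_min + i * step for i in range(n)]
    return max(masses, key=compatibility)
```

In the real algorithm, P(Mττ) itself is obtained by integrating over the unmeasured neutrino kinematics for each mass hypothesis; the scan above only shows how the final estimate is extracted.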
We present an algorithm for the reconstruction of the Higgs mass in events with Higgs bosons decaying into a pair of τ leptons. The algorithm is based on matrix element (ME) techniques and achieves a relative resolution on the Higgs boson mass of typically 15–20%. A previous version of the algorithm has been used in analyses of Higgs boson production performed by the CMS collaboration during LHC Run 1. The algorithm is described in detail and its performance on simulated events is assessed. The development of techniques to handle τ decays in the ME formalism represents an important result of this paper.