The NeuroBayes neural network package
Feindt, M.; Kerzel, U.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 04/2006, Volume 559, Issue 1
Journal Article · Peer-reviewed · Open Access
Detailed analysis of correlated data plays a vital role in modern analyses. We present a sophisticated neural network package based on Bayesian statistics which can be used both for classification and for event-by-event prediction of the complete probability density distribution of continuous quantities. The network provides numerous possibilities to automatically preprocess the input variables and uses advanced regularisation and pruning techniques to essentially eliminate the risk of overtraining. Examples from physics and industry are given.
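The NeuroBayes package itself is proprietary, but two of the ideas the abstract highlights, automatic preprocessing of input variables and regularisation against overtraining, can be illustrated with a minimal toy classifier. The sketch below uses standardised inputs and an L2-penalised logistic model; all names, data, and parameters are illustrative assumptions, not the NeuroBayes API.

```python
import numpy as np

# Toy sketch (not NeuroBayes): per-event signal probability from a
# classifier with (a) automatic input preprocessing (standardisation)
# and (b) an L2 regularisation term to suppress overtraining.

rng = np.random.default_rng(0)
n = 500
signal = rng.normal(loc=1.0, size=(n, 2))
background = rng.normal(loc=-1.0, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Preprocessing: standardise each input variable.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# L2-regularised logistic model trained by gradient descent; the
# penalty term lam*w stands in for "advanced regularisation".
w, b, lam, lr = np.zeros(2), 0.0, 0.1, 0.5
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad_w = X.T @ (p - y) / len(y) + lam * w
    grad_b = (p - y).mean()
    w -= lr * grad_w
    b -= lr * grad_b

# Per-event signal probability, usable for classification or ranking.
proba = 1.0 / (1.0 + np.exp(-(X @ w + b)))
```

The output is a probability per event rather than a hard decision, which is the property the hierarchical reconstruction abstracts below build on.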
We describe a new B-meson full reconstruction algorithm designed for the Belle experiment at the B-factory KEKB, an asymmetric e⁺e⁻ collider that collected a data sample of 771.6×10⁶ BB̄ pairs during its running time. To maximize the number of reconstructed B decay channels, it utilizes a hierarchical reconstruction procedure and probabilistic calculus instead of classical selection cuts. The multivariate analysis package NeuroBayes was used extensively to hold the balance between the highest possible efficiency, robustness and acceptable consumption of CPU time.
In total, 1104 exclusive decay channels were reconstructed, employing 71 neural networks altogether. Overall, we correctly reconstruct one B± or B⁰ candidate in 0.28% or 0.18% of the BB̄ events, respectively. Compared to the cut-based classical reconstruction algorithm used at the Belle experiment, this is an improvement in efficiency by roughly a factor of 2, depending on the analysis considered.
The new framework also features the ability to choose the desired purity or efficiency of the fully reconstructed sample freely. If the same purity as for the classical full reconstruction code is desired (∼25%), the efficiency is still larger by nearly a factor of 2. If, on the other hand, the efficiency is chosen at a similar level as the classical full reconstruction, the purity rises from ∼25% to nearly 90%.
► A method for full reconstruction of B-mesons was built.
► A hierarchical model with multivariate techniques in each step was used.
► Instead of cuts, probabilities were calculated and fed to higher stages.
► Compared to cut-based methods we achieved a factor of 2 in efficiency.
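The core idea of the hierarchical, cut-free procedure can be sketched in a few lines: every stage attaches a signal probability to its candidates, and higher stages combine the probabilities of their daughters instead of cutting on each input separately. The combination rule below (a plain product, assuming independent daughters) and all numbers are illustrative stand-ins for the NeuroBayes-based treatment described in the abstract.

```python
import itertools

def combine(daughter_probs):
    """Naive parent probability: product of daughter signal
    probabilities (assumes independence; a simplification)."""
    p = 1.0
    for q in daughter_probs:
        p *= q
    return p

# Toy lower-stage candidates, each carrying a probability from a
# hypothetical per-channel classifier (e.g. D and pion candidates).
d_candidates = [0.9, 0.4, 0.7]
pion_candidates = [0.8, 0.3]

# Build all parent (e.g. B) candidates and rank them by probability
# rather than rejecting any input candidate with a hard cut.
b_candidates = sorted(
    (combine([pd, ppi])
     for pd, ppi in itertools.product(d_candidates, pion_candidates)),
    reverse=True,
)
best = b_candidates[0]  # 0.9 * 0.8 = 0.72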
We describe an algorithm to quantify dependence in a multivariate data set. The algorithm is able to identify any linear and non-linear dependence in the data set by performing a hypothesis test of two variables being independent. As a result we obtain a reliable measure of dependence.
In high energy physics, understanding dependencies is especially important in multidimensional maximum likelihood analyses. We therefore describe the problem of a multidimensional maximum likelihood analysis applied to a multivariate data set with variables that are dependent on each other. We review common procedures used in high energy physics, show that general dependence is not the same as linear correlation, and discuss their limitations in practical application.
Finally we present the tool CAT, which performs all reviewed methods in a fully automatic mode and creates an analysis report document with numeric results and a visual review.
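The abstract's central point, that general dependence is not the same as linear correlation, can be demonstrated with a generic hypothesis test of independence. The sketch below uses a permutation test on a binned mutual-information statistic; this is an illustrative stand-in, not the actual statistic used by CAT. Note that y = x² has essentially zero linear correlation with x yet is strongly dependent on it.

```python
import numpy as np

def mutual_information(x, y, bins=10):
    """Mutual information of the binned joint distribution, in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def independence_pvalue(x, y, n_perm=200, rng=None):
    """P-value under the null of independence: permute y to destroy
    any dependence, and compare the observed statistic to the
    permutation distribution."""
    rng = rng or np.random.default_rng(0)
    observed = mutual_information(x, y)
    null = [mutual_information(x, rng.permutation(y))
            for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y_dep = x**2 + 0.1 * rng.normal(size=500)  # non-linear dependence
y_ind = rng.normal(size=500)               # independent of x

p_dep = independence_pvalue(x, y_dep)  # small: reject independence
p_ind = independence_pvalue(x, y_ind)
```

A Pearson correlation test would see nothing in the y = x² case, which is exactly the failure mode of "linear correlation as a dependence measure" that the abstract warns about.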
The impending upgrade of the Belle experiment is expected to increase the generated data set by a factor of 50. This means that for the planned pixel detector, which is the closest to the interaction point, the data rates are going to increase to up to 28 Gbit/s. Combined with the data generated by the other detectors, this rate is too large to send out efficiently to offline processing. In order to reduce the data rates, online data reduction schemes, in which background is detected and rejected, are going to be employed. In this paper, an approach for efficient online data reduction for the planned pixel detector of Belle II is presented. Its central part is the NeuroBayes algorithm, which is based on multivariate analysis. It allows the identification of signal and background by analyzing clusters of hits in the pixel detector on FPGAs. The algorithm leverages the fact that hits of signal particles can have very different characteristics, compared to background, when passing through the pixel detector. The applicability and advantages in performance are shown through the D* decay. In Belle II, these decays produce pions with such a small transverse momentum that they barely escape the pixel detector itself. In a common approach such as an extrapolation of tracks from outer detectors to regions of interest (RoIs), these pions are simply lost, since they do not reach all necessary layers of the detector. Cluster analysis, however, is able to identify and separate these pions from the background, thus keeping their data. For this, characteristics of the corresponding hits, such as the total amount of charge deposited in the pixels, are used for separation. The capability for effective data reduction is underlined by a background reduction of at least 90% and a signal efficiency of 95% for slow pions. The algorithm was implemented for the Virtex-6 FPGAs used at the pixel detector.
It is shown that the resulting implementation succeeds in replicating the efficiency of the software implementation, while achieving throughputs that satisfy the hard real-time constraints set by the Belle II read-out system and making efficient use of the resources present on the FPGA.
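The cluster-analysis idea, separating slow pions from background using hit characteristics such as total deposited charge, can be sketched as a feature extraction followed by a decision function. Everything below (feature set, weights, threshold, example charge values) is a made-up toy; the actual implementation uses a trained NeuroBayes network running on the FPGA.

```python
import numpy as np

def cluster_features(cluster):
    """Extract simple features from a pixel cluster (matrix of
    deposited charges): total charge, number of fired pixels,
    and the seed-pixel charge."""
    cluster = np.asarray(cluster, dtype=float)
    fired = cluster > 0
    return np.array([cluster.sum(), fired.sum(), cluster.max()])

def is_signal(cluster, weights=(0.01, 0.5, 0.02), threshold=2.0):
    """Toy linear discriminant on the cluster features (weights and
    threshold are illustrative, not trained)."""
    return float(np.dot(cluster_features(cluster), weights)) > threshold

# A slow pion tends to leave a large, spread-out charge deposit
# (higher dE/dx at low momentum) ...
slow_pion = [[0, 40, 10],
             [30, 120, 50],
             [0, 20, 0]]
# ... while a typical background hit here is small and sharp.
background = [[0, 0, 0],
              [0, 15, 0],
              [0, 0, 0]]
```

Because each cluster is classified locally, the decision needs no track extrapolation from outer detectors, which is why the slow pions that never reach those detectors survive the data reduction.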
We present a software framework for Belle II that reconstructs B mesons in many decay modes with minimal user intervention. It does so by reconstructing particles in user-supplied decay channels, and then in turn using these reconstructed particles in higher-level decays. This hierarchical reconstruction allows one to cover a relatively high fraction of all B decays by specifying a limited number of particle decays. Multivariate classification methods are used to achieve a high signal-to-background ratio in each individual channel. The entire reconstruction, including the application of pre-cuts and classifier trainings, is automated to a high degree and will allow users to retrain it to account for analysis-specific signal-side selections.
We present the concept of a track trigger for the Belle II experiment, based on a neural network approach, that is able to reconstruct the z (longitudinal) position of the event vertex within the latency of the first level trigger. The trigger will thus be able to suppress a large fraction of the dominating background from events outside of the interaction region. The trigger uses the drift time information of the hits from the Central Drift Chamber (CDC) of Belle II within narrow cones in polar and azimuthal angle as well as in transverse momentum ("sectors"), and estimates the z-vertex without explicit track reconstruction. The preprocessing for the track trigger is based on the track information provided by the standard CDC trigger. It takes input from the 2D (r–φ) track finder, adds information from the stereo wires of the CDC, and finds the appropriate sectors in the CDC for each track in a given event. Within each sector, the z-vertex of the associated track is estimated by a specialized neural network, with a continuous output corresponding to the scaled z-vertex. The input values for the neural network are calculated from the wire hits of the CDC.
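The sector-wise architecture can be sketched as a lookup from sector to a dedicated small network whose inputs derive from CDC wire hits and whose single bounded output is the scaled z-vertex. The network below is untrained with made-up random weights and input layout; it only illustrates the structure (one specialised network per sector, continuous output in [-1, 1]), not the trained trigger.

```python
import numpy as np

rng = np.random.default_rng(42)

class SectorMLP:
    """Tiny feed-forward network: one tanh hidden layer, scalar tanh
    output interpreted as the scaled z-vertex in [-1, 1].
    Weights here are random placeholders, not trained values."""

    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(scale=0.5, size=n_hidden)
        self.b2 = 0.0

    def __call__(self, x):
        h = np.tanh(self.W1 @ x + self.b1)
        return float(np.tanh(self.w2 @ h + self.b2))

# One specialised network per sector (illustrative sector count).
networks = {sector: SectorMLP(n_in=9, n_hidden=5) for sector in range(4)}

track_inputs = rng.uniform(-1, 1, size=9)  # e.g. scaled drift times
sector = 2  # assumed to come from the 2D track finder + stereo hits
z_scaled = networks[sector](track_inputs)
```

Bounding the output with tanh matches the "continuous output corresponding to the scaled z-vertex" and keeps the estimate well defined for any input, which matters under hard trigger latency constraints.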
Multivariate analysis (MVA) methods, especially discrimination techniques such as neural networks, are key ingredients in modern data analysis and play an important role in high energy physics. They are usually trained on simulated Monte Carlo (MC) samples to discriminate so-called "signal" from "background" events and are then applied to data to select real events of signal type. We here address procedures that improve this workflow: first, the enhancement of data/MC agreement by reweighting MC samples on a per-event basis; then, training MVAs on real data using the sPlot technique; and finally, the construction of MVAs whose discriminator is independent of a certain control variable, i.e. cuts on this variable will not change the discriminator shape.
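The first procedure, per-event reweighting of MC to improve data/MC agreement, can be illustrated in its simplest form: each simulated event receives the data-to-MC ratio of its bin in some control variable as a weight. This histogram-ratio version is an assumption-laden stand-in for the more general method the abstract discusses; the data sets and binning below are toy inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.2, size=20000)  # "real data", shifted mean
mc = rng.normal(loc=0.0, size=20000)    # simulation to be corrected

edges = np.linspace(-4, 4, 41)
h_data, _ = np.histogram(data, bins=edges)
h_mc, _ = np.histogram(mc, bins=edges)

# Per-bin data/MC ratio; bins with no MC entries get weight 0.
ratio = np.divide(h_data, h_mc,
                  out=np.zeros_like(h_data, dtype=float),
                  where=h_mc > 0)

# Look the ratio up per event -> one weight per MC event.
idx = np.clip(np.digitize(mc, edges) - 1, 0, len(ratio) - 1)
weights = ratio[idx]

# In this variable, the reweighted MC now reproduces the data.
h_rw, _ = np.histogram(mc, bins=edges, weights=weights)
```

By construction the reweighted MC matches the data exactly in the binned variable; the practical value of such weights is that the MC then agrees better in correlated variables too, which is what the MVA training benefits from.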
The Full Event Interpretation
Keck, T.; Abudinén, F.; Bernlochner, Florian U.; et al.
Computing and Software for Big Science, 12/2019, Volume 3, Issue 1
Journal Article · Open Access
The Full Event Interpretation is presented: a new exclusive tagging algorithm used by the high-energy physics experiment Belle II. The experimental setup of Belle II allows the precise measurement of otherwise inaccessible B meson decay modes. The Full Event Interpretation algorithm enables many of these measurements. The algorithm relies on machine learning to automatically identify plausible B meson decay chains based on the data recorded by the detector. Compared to similar algorithms employed by previous experiments, the Full Event Interpretation provides a greater efficiency, yielding a larger effective sample size usable in the measurement.
Fast integration using quasi-random numbers
Bossert, J.; Feindt, M.; Kerzel, U.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 04/2006, Volume 559, Issue 1
Journal Article · Peer-reviewed
Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers, numerical integration converges faster and with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed, and the achieved gain is illustrated by examples.
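The faster convergence can be demonstrated with the van der Corput sequence, the one-dimensional building block of Halton/Sobol-type quasi-random generators; the specific integrand and sample size below are chosen only for illustration. We estimate the integral of f(x) = x² over [0, 1], whose true value is 1/3.

```python
import random

def van_der_corput(n, base=2):
    """n-th element of the base-b van der Corput sequence
    (digit reversal of n in the given base)."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def integrate(points):
    """Monte Carlo estimate of the integral of x^2 over [0, 1]."""
    return sum(x * x for x in points) / len(points)

N = 1024
quasi = [van_der_corput(i) for i in range(N)]  # evenly spread points
random.seed(0)
pseudo = [random.random() for _ in range(N)]   # pseudo-random points

est_quasi = integrate(quasi)   # error O(log N / N)
est_pseudo = integrate(pseudo)  # error O(1 / sqrt(N))
```

For N a power of two, the first N base-2 van der Corput points are exactly the lattice {j/N}, so the quasi-random error shrinks roughly like 1/N, while the pseudo-random error shrinks only like 1/√N, which is the gain the abstract refers to.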