This article reports world averages of measurements of b-hadron, c-hadron, and τ-lepton properties obtained by the Heavy Flavor Averaging Group using results available through summer 2016. For the averaging, common input parameters used in the various analyses are adjusted (rescaled) to common values, and known correlations are taken into account. The averages include branching fractions, lifetimes, neutral meson mixing parameters, CP violation parameters, parameters of semileptonic decays, and Cabibbo–Kobayashi–Maskawa matrix elements.
The next generation B factory experiment Belle II will collect huge data samples, which are a challenge for the computing system. To cope with the high data volume and rate, Belle II is setting up a distributed computing system based on existing technologies and infrastructure plus Belle II specific extensions for workflow abstraction. The system was successfully tested in two production campaigns this year, and valuable information for further development was obtained.
We describe a new B-meson full reconstruction algorithm designed for the Belle experiment at the B-factory KEKB, an asymmetric e+e− collider that collected a data sample of 771.6×10⁶ BB̄ pairs during its running time. To maximize the number of reconstructed B decay channels, the algorithm uses a hierarchical reconstruction procedure and probabilistic calculus instead of classical selection cuts. The multivariate analysis package NeuroBayes was used extensively to balance the highest possible efficiency, robustness, and acceptable CPU time consumption.
In total, 1104 exclusive decay channels were reconstructed, employing 71 neural networks altogether. Overall, we correctly reconstruct one B± or B⁰ candidate in 0.28% or 0.18% of the BB̄ events, respectively. Compared to the cut-based classical reconstruction algorithm used at the Belle experiment, this is an improvement in efficiency by roughly a factor of 2, depending on the analysis considered.
The new framework also allows the desired purity or efficiency of the fully reconstructed sample to be chosen freely. If the same purity as for the classical full reconstruction code is desired (∼25%), the efficiency is still larger by nearly a factor of 2. If, on the other hand, the efficiency is chosen at a similar level as the classical full reconstruction, the purity rises from ∼25% to nearly 90%.
► A method for full reconstruction of B-mesons was built. ► A hierarchical model with multivariate techniques in each step was used. ► Instead of cuts, probabilities were calculated and fed to higher stages. ► Compared to cut-based methods, we achieved a factor of 2 improvement in efficiency.
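The core idea of the hierarchical, probability-based reconstruction can be sketched in a few lines. The following is a toy illustration under stated assumptions, not the Belle NeuroBayes code: every function name and number here is hypothetical. Each reconstruction stage attaches a signal probability to its candidates; parent stages never cut daughters away, but instead feed the daughters' probabilities into their own classifier, so that a selection is applied only once, at the very end.

```python
import math

def stage_probability(features):
    """Toy stand-in for a per-stage multivariate classifier:
    a logistic function of a summed feature score."""
    score = sum(features)
    return 1.0 / (1.0 + math.exp(-score))

def combine(daughter_probs, own_features):
    """Build a parent candidate: the daughters' probabilities enter the
    parent's classifier as additional (logit-transformed) features."""
    logits = [math.log(p / (1.0 - p)) for p in daughter_probs]
    return stage_probability(list(own_features) + logits)

# Hierarchy: final-state particles -> D candidates -> B candidates
pion_p = stage_probability([1.2])
kaon_p = stage_probability([0.8])
d_p    = combine([pion_p, kaon_p], [0.5])   # intermediate D-meson stage
b_p    = combine([d_p, pion_p], [-0.3])     # B-meson stage

# Only at the end is a working point chosen, which is what lets the user
# trade purity against efficiency freely.
is_signal_like = b_p > 0.5
```

Because intermediate stages pass probabilities upward instead of discarding candidates, tightening or loosening the final threshold moves smoothly along the purity–efficiency curve.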
Belle II Software Kuhr, T; Ritter, M
Journal of physics. Conference series,
10/2016, Volume:
762, Issue:
1
Journal Article
Peer reviewed
Open access
Belle II is a next generation B factory experiment that will collect 50 times more data than its predecessor, Belle. The higher luminosity at the SuperKEKB accelerator leads to higher background levels and requires a major upgrade of the detector. As a consequence, the simulation, reconstruction, and analysis software must also be upgraded substantially. Most of the software has been redesigned from scratch, taking into account the experience from Belle and other experiments and utilizing new technologies. The large amount of experimental and simulated data requires a high level of reliability and reproducibility, even in parallel environments. Several technologies, tools, and organizational measures are employed to evaluate and monitor the performance of the software during development.
The Belle II experiment at the SuperKEKB e+e− accelerator is preparing for taking first collision data next year. For the success of the experiment it is essential to have information about varying conditions available in the simulation, reconstruction, and analysis code. The interface to the conditions data in the client code was designed to make life for developers as easy as possible. Two classes, one for single objects and one for arrays of objects, provide type-safe access. Their interface resembles that of the classes for access to event-level data, with which the developers are already familiar. Changes of the referred conditions objects are usually transparent to the client code, but they can be checked for, and functions or methods can be registered that are called back whenever a conditions data object is updated. The framework behind the interface fetches objects from the back-end database only when needed and caches them while they are valid. It can transparently handle validity ranges that are shorter than the finest granularity for the validity of payloads in the database. Besides access to the central database, the framework supports local conditions data storage, which can be used as a fallback solution or to overwrite values in the central database with custom ones.
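The access pattern described above (lazy fetch, caching within a validity range, change callbacks) can be sketched as follows. This is a hypothetical Python illustration; the real Belle II interface consists of C++ class templates, and all names, payloads, and run numbers here are assumptions.

```python
class ConditionsDB:
    """Toy back end: maps (name, run) -> payload, fetched lazily."""
    def __init__(self, payloads):
        # {name: [(first_run, last_run, value), ...]}
        self._payloads = payloads
        self.fetches = 0

    def fetch(self, name, run):
        self.fetches += 1
        for first, last, value in self._payloads[name]:
            if first <= run <= last:
                return value, (first, last)
        raise KeyError(f"no valid payload for {name} at run {run}")

class DBObj:
    """Single-object accessor: caches the payload while its validity range
    covers the current run, and notifies registered callbacks on change."""
    def __init__(self, db, name):
        self._db, self._name = db, name
        self._value, self._iov = None, None
        self._callbacks = []

    def register_callback(self, fn):
        self._callbacks.append(fn)

    def get(self, run):
        if self._iov is None or not (self._iov[0] <= run <= self._iov[1]):
            self._value, self._iov = self._db.fetch(self._name, run)
            for fn in self._callbacks:
                fn(self._value)     # change is visible only if asked for
        return self._value

db = ConditionsDB({"beam_energy": [(1, 100, 10.58), (101, 200, 10.75)]})
energy = DBObj(db, "beam_energy")
changes = []
energy.register_callback(changes.append)

energy.get(run=5)    # fetched from the back end
energy.get(run=50)   # served from cache: same validity range
energy.get(run=150)  # new validity range: fetch + callback
```

The client code simply calls `get()`; whether the value came from the cache or triggered a database fetch is transparent, exactly as the abstract describes.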
Punzi-loss Abudinén, F.; Bertemes, M.; Bilokin, S. ...
The European physical journal. C, Particles and fields,
2022/2, Volume:
82, Issue:
2
Journal Article
Peer reviewed
Open access
We present the novel implementation of a non-differentiable metric approximation and a corresponding loss-scheduling aimed at the search for new particles of unknown mass in high energy physics experiments. We call the loss-scheduling, based on the minimisation of a figure-of-merit related function typical of particle physics, a Punzi-loss function, and the neural network that utilises this loss function a Punzi-net. We show that the Punzi-net outperforms standard multivariate analysis techniques and generalises well to mass hypotheses for which it was not trained. This is achieved by training a single classifier that provides a coherent and optimal classification of all signal hypotheses over the whole search space. Our result constitutes a complementary approach to fully differentiable analyses in particle physics. We implemented this work using PyTorch and provide users full access to a public repository containing all the code and a training example.
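The idea behind such a loss can be sketched compactly. Punzi's figure of merit for a search is FOM = ε_sig / (a/2 + √B), where ε_sig is the signal efficiency, B the expected background count after selection, and a the target significance in sigmas; it is non-differentiable because the selection is a hard cut. A minimal sketch, assuming a sigmoid-smoothed cut (the paper's actual PyTorch implementation differs; all names and numbers here are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def punzi_loss(scores_sig, scores_bkg, n_bkg_expected, a=3.0, sharpness=10.0):
    """Negative smooth Punzi figure of merit: the hard cut "score > 0.5"
    is replaced by a sigmoid so the selection becomes differentiable
    and can be minimised by gradient descent."""
    soft_sel_sig = [sigmoid(sharpness * (s - 0.5)) for s in scores_sig]
    soft_sel_bkg = [sigmoid(sharpness * (s - 0.5)) for s in scores_bkg]
    eff = sum(soft_sel_sig) / len(soft_sel_sig)
    bkg = n_bkg_expected * sum(soft_sel_bkg) / len(soft_sel_bkg)
    return -eff / (a / 2.0 + math.sqrt(bkg))

# A classifier that separates well gives a lower (better) loss than one
# that assigns every event the same score.
good = punzi_loss([0.9, 0.8, 0.95], [0.1, 0.2, 0.05], n_bkg_expected=100.0)
flat = punzi_loss([0.5, 0.5, 0.5], [0.5, 0.5, 0.5], n_bkg_expected=100.0)
```

Minimising this quantity directly optimises the discovery sensitivity rather than a proxy such as cross-entropy, which is the key design choice the abstract highlights.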
Software Quality Control at Belle II Ritter, M; Kuhr, T; Hauth, T ...
Journal of physics. Conference series,
10/2017, Volume:
898, Issue:
7
Journal Article
Peer reviewed
Open access
Over the last seven years the software stack of the next generation B factory experiment Belle II has grown to over one million lines of C++ and Python code, counting only the part included in offline software releases. There are several thousand commits to the central repository by about 100 individual developers per year. Keeping the software stack coherent and of high enough quality that it can be sustained and used efficiently for data acquisition, simulation, reconstruction, and analysis over the lifetime of the Belle II experiment is a challenge. A set of tools is employed to monitor the quality of the software and provide fast feedback to the developers. They are integrated in a machinery that is controlled by a buildbot master and automates the quality checks. The tools include different compilers, cppcheck, the clang static analyzer, valgrind memcheck, doxygen, a geometry overlap checker, a check for missing or extra library links, unit tests, steering-file-level tests, a sophisticated high-level validation suite, and an issue tracker. The technological development infrastructure is complemented by organizational means to coordinate the development.
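The aggregation pattern behind such a machinery can be illustrated in miniature: each quality check is a callable returning a verdict, and a runner executes all of them and collects every failure before reporting, the way a buildbot-driven pipeline reports back to developers. This is a hypothetical sketch, not Belle II code; the checks shown are deliberately trivial stand-ins.

```python
def check_line_count(source, limit=1000):
    """Stand-in for a complexity check: flag overly long files."""
    n = len(source.splitlines())
    return n <= limit, f"{n} lines (limit {limit})"

def check_no_tabs(source):
    """Stand-in for a style check: forbid tab characters."""
    has_tabs = "\t" in source
    return not has_tabs, "tab characters found" if has_tabs else "no tabs"

def run_checks(source, checks):
    """Run every check and collect failures instead of stopping at the
    first one, so a single report covers all problems at once."""
    failures = []
    for check in checks:
        ok, message = check(source)
        if not ok:
            failures.append((check.__name__, message))
    return failures

sample = "int main() {\n\treturn 0;\n}\n"
report = run_checks(sample, [check_line_count, check_no_tabs])
```

Running all checks unconditionally and aggregating the results is what turns a pile of individual tools (compilers, static analyzers, unit tests) into a single fast feedback loop.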
The Belle II detector at the SuperKEKB e+e− collider plans to take first collision data in 2018. The monetary and CPU time costs associated with storing and processing the data mean that it is crucial for the detector components at Belle II to be calibrated quickly and accurately. A fast and accurate calibration system would allow the high level trigger to increase the efficiency of event selection, and can give users analysis-quality reconstruction promptly. A flexible framework to automate the fast production of calibration constants is being developed in the Belle II Analysis Software Framework (basf2). Detector experts only need to create two components from C++ base classes in order to use the automation system. The first collects data from Belle II event data files and outputs much smaller files to pass to the second component, which runs the main calibration algorithm to produce calibration constants ready for upload into the conditions database. A Python framework coordinates the input files, order of processing, and submission of jobs. Splitting the operation into collection and algorithm processing stages allows the framework to optionally parallelize the collection stage on a batch system.
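The two-stage split described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the real basf2 framework uses C++ base classes for the two components, and the class names, event fields, and constants here are hypothetical.

```python
class Collector:
    """Stage 1: reads event data and reduces it to a small summary that
    can be merged across parallel jobs (here: a sum and a count)."""
    def __init__(self):
        self.total, self.count = 0.0, 0

    def collect(self, event):
        self.total += event["hit_time"]
        self.count += 1

    def output(self):
        return {"total": self.total, "count": self.count}

class Algorithm:
    """Stage 2: merges the collector outputs and computes the calibration
    constant, ready for upload to the conditions database."""
    def calibrate(self, collected_outputs):
        total = sum(o["total"] for o in collected_outputs)
        count = sum(o["count"] for o in collected_outputs)
        return {"time_offset": total / count}

# The split lets the collection stage run in parallel on a batch system:
events_job1 = [{"hit_time": 1.0}, {"hit_time": 3.0}]
events_job2 = [{"hit_time": 2.0}]
outputs = []
for events in (events_job1, events_job2):
    c = Collector()
    for ev in events:
        c.collect(ev)
    outputs.append(c.output())

constants = Algorithm().calibrate(outputs)
```

Because the collector emits only mergeable summaries, the expensive pass over event data parallelizes trivially, while the algorithm stage stays a single cheap step.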
Belle II Conditions Database Ritter, M; Wood, L; Kuhr, T ...
Journal of physics. Conference series,
09/2018, Volume:
1085, Issue:
3
Journal Article
Peer reviewed
Open access
The Belle II experiment at KEK is preparing for taking first collision data in early 2018. For the success of the experiment it is essential to have information about varying conditions available to systems worldwide in a fast and efficient manner that is straightforward for both the user and maintainer. The Belle II Conditions Database was designed to make maintenance as easy as possible. To this end, an HTTP REST service was developed with industry-standard tools such as Swagger for the API interface development, Payara for the Java EE application server, and the Hazelcast in-memory data grid for support of scalable caching as well as transparent distribution of the service across multiple sites. On the client side, the online and offline software has to be able to obtain conditions data from the Belle II Conditions Database in a robust and reliable way under very different situations. As such, the client-side interface to the Belle II Conditions Database has been designed with a variety of access mechanisms that allow the software to be used with and without an internet connection. Different methods to access the payload information are implemented to allow for a high level of customization per site and to simplify testing of new payloads locally. Changes to the conditions data are usually handled transparently, but users can actively check whether an object has changed or register callback functions to be called whenever a conditions data object is updated. In addition, a command-line user interface has been developed to simplify inspection and modification of the database contents.
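The client-side access strategy described above (central service first, local store as fallback or override, with caching) can be sketched as follows. This is a hypothetical illustration: the dict-based "servers", class name, and payload keys are assumptions, and a real client would issue HTTP requests against the REST service instead.

```python
class ConditionsClient:
    def __init__(self, central, local, prefer_local=False):
        self.central = central        # stand-in for the HTTP REST service
        self.local = local            # local conditions data storage
        self.prefer_local = prefer_local
        self.cache = {}

    def get(self, payload, run):
        key = (payload, run)
        if key in self.cache:
            return self.cache[key]
        sources = ([self.local, self.central] if self.prefer_local
                   else [self.central, self.local])
        for source in sources:
            try:
                value = source[key]   # a real client would do an HTTP GET
                self.cache[key] = value
                return value
            except (KeyError, ConnectionError):
                continue              # e.g. offline: try the next source
        raise KeyError(f"no payload {payload!r} for run {run}")

central = {("beam_energy", 7): 10.58}
local = {("beam_energy", 7): 10.60, ("alignment", 7): "v2"}

client = ConditionsClient(central, local)
client.get("beam_energy", 7)     # central database wins by default
override = ConditionsClient(central, local, prefer_local=True)
override.get("beam_energy", 7)   # local value overrides the central one
client.get("alignment", 7)       # missing centrally: local fallback
```

Ordering the sources per site is what allows both use cases the abstract mentions: the local storage as a fallback when offline, or as an override for testing new payloads.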