Punzi-loss — Abudinén, F.; Bertemes, M.; Bilokin, S.; ...
The European Physical Journal C, Particles and Fields,
02/2022, Volume 82, Issue 2
Journal Article
Peer-reviewed
Open access
We present the novel implementation of a non-differentiable metric approximation and a corresponding loss-scheduling aimed at the search for new particles of unknown mass in high energy physics experiments. We call the loss-scheduling, based on the minimisation of a figure-of-merit-related function typical of particle physics, a Punzi-loss function, and the neural network that utilises this loss function a Punzi-net. We show that the Punzi-net outperforms standard multivariate analysis techniques and generalises well to mass hypotheses for which it was not trained. This is achieved by training a single classifier that provides a coherent and optimal classification of all signal hypotheses over the whole search space. Our result constitutes a complementary approach to fully differentiable analyses in particle physics. We implemented this work using PyTorch and provide users full access to a public repository containing all the code and a training example.
Full text
Available to:
DOBA, IZUM, KILJ, NUK, PILJ, PNG, SAZU, SIK, UILJ, UKNU, UL, UM, UPUK
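The abstract above describes a loss built from a Punzi-style figure of merit, maximised jointly over all mass hypotheses. The paper's actual implementation is in PyTorch and lives in its public repository; the snippet below is only a framework-free sketch of the underlying idea, assuming the common Punzi figure-of-merit form $\varepsilon / (a/2 + \sqrt{B})$ for a search at $a$ standard deviations. All function names and numbers here are illustrative, not the authors' code.

```python
import math

def punzi_fom(signal_eff, bkg_expected, a=3.0):
    """Punzi figure of merit for a search at 'a' sigma significance:
    FOM = eps_sig / (a/2 + sqrt(B)).  Higher means more sensitive."""
    return signal_eff / (a / 2.0 + math.sqrt(bkg_expected))

def punzi_loss(selections):
    """Sketch of a Punzi-style loss: the negative sum of the figure of
    merit over mass hypotheses, so minimising the loss maximises the
    combined sensitivity.  'selections' is a list of
    (signal_efficiency, expected_background) pairs, one per hypothesis."""
    return -sum(punzi_fom(eff, bkg) for eff, bkg in selections)

# A tighter selection can beat a looser one despite lower efficiency:
loose = punzi_fom(0.90, 100.0)   # 0.90 / (1.5 + 10.0)
tight = punzi_fom(0.50, 4.0)     # 0.50 / (1.5 + 2.0)
```

In a trained network the efficiencies and background yields would be differentiable functions of the classifier output, which is the point of the approximation the paper introduces.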
We measure the lifetime of the D_{s}^{+} meson using a data sample of 207 fb^{-1} collected by the Belle II experiment running at the SuperKEKB asymmetric-energy e^{+}e^{-} collider. The lifetime is ...determined by fitting the decay-time distribution of a sample of 116×10^{3} D_{s}^{+}→ϕπ^{+} decays. Our result is τ_{D_{s}^{+}}=(499.5±1.7±0.9) fs, where the first uncertainty is statistical and the second is systematic. This result is significantly more precise than previous measurements.
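The lifetime above is extracted by fitting the decay-time distribution of the selected decays. Ignoring detector resolution and background (both of which the real analysis must model), the maximum-likelihood estimator for a pure exponential lifetime reduces to the sample mean, as this toy sketch shows; the sample size and seed are hypothetical, and only the central value 499.5 fs comes from the abstract.

```python
import random

TRUE_TAU_FS = 499.5          # illustrative: the measured D_s+ lifetime
N_DECAYS = 100_000           # hypothetical toy-sample size

random.seed(42)
decay_times = [random.expovariate(1.0 / TRUE_TAU_FS) for _ in range(N_DECAYS)]

# For p(t) = exp(-t/tau) / tau, the maximum-likelihood estimate of tau
# is the sample mean, with statistical uncertainty tau_hat / sqrt(N).
tau_hat = sum(decay_times) / len(decay_times)
stat_err = tau_hat / N_DECAYS ** 0.5
```

With 10^5 toy decays the statistical uncertainty is about 1.6 fs, comparable to the 1.7 fs quoted in the abstract for a similar sample size.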
Measurement of the Λ_{c}^{+} Lifetime — Ahmed, H.; Ahn, J. K.; Aloisio, A.; ...
Physical Review Letters,
2023-02-17, Volume 130, Issue 7
Journal Article
Peer-reviewed
An absolute measurement of the Λ_{c}^{+} lifetime is reported using Λ_{c}^{+}→pK^{-}π^{+} decays in events reconstructed from data collected by the Belle II experiment at the SuperKEKB asymmetric-energy electron-positron collider. The total integrated luminosity of the data sample, which was collected at center-of-mass energies at or near the ϒ(4S) resonance, is 207.2 fb^{-1}. The result, τ(Λ_{c}^{+})=203.20±0.89±0.77 fs, where the first uncertainty is statistical and the second systematic, is the most precise measurement to date and is consistent with previous determinations.
We present a search for the baryon number $B$ and lepton number $L$ violating
decays $\tau^- \rightarrow \Lambda \pi^-$ and $\tau^- \rightarrow \bar{\Lambda}
\pi^-$ produced from the $e^+e^-\to \tau^+\tau^-$ process, using a 364
fb$^{-1}$ data sample collected by the Belle~II experiment at the SuperKEKB
collider. No evidence of signal is found in either decay mode; the two modes
have $|\Delta(B-L)|$ equal to $2$ and $0$, respectively. Upper limits at 90\%
credibility level on the branching fractions of $\tau^- \rightarrow
\Lambda\pi^-$ and $\tau^- \rightarrow \bar{\Lambda}\pi^-$ are determined to be
$4.7 \times 10^{-8}$ and $4.3 \times 10^{-8}$, respectively.
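The abstract quotes 90\% credibility-level upper limits on branching fractions. A standard ingredient of such counting searches, sketched here under simplifying assumptions (flat prior, zero observed events, negligible background — not necessarily what the paper does), is that the yield limit is $-\ln(1-\mathrm{CL}) \approx 2.30$ events, converted to a branching fraction by dividing by the number of $\tau$ decays times the signal efficiency. The numbers in the usage line are hypothetical.

```python
import math

def yield_upper_limit(cl=0.90, n_observed=0):
    """Bayesian upper limit on a Poisson mean with a flat prior and no
    background: for zero observed events it solves exp(-s) = 1 - CL."""
    if n_observed != 0:
        raise NotImplementedError("sketch covers the zero-event case only")
    return -math.log(1.0 - cl)

def bf_upper_limit(n_tau_pairs, efficiency, cl=0.90):
    """Convert the yield limit to a branching-fraction limit; each
    tau-pair event provides two tau decays."""
    return yield_upper_limit(cl) / (2.0 * n_tau_pairs * efficiency)

s_up = yield_upper_limit()                     # about 2.30 signal events
bf_up = bf_upper_limit(1e8, 0.1)               # hypothetical inputs
```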
Phys. Rev. Lett. 131, 171803 (2023)
Phys. Rev. Lett. 130, 071802 (2023)
Eur. Phys. J. C 82, 283 (2022)
We report on new flavor tagging algorithms developed to determine the
quark-flavor content of bottom ($B$) mesons at Belle II. The algorithms provide
essential inputs for measurements of quark-flavor mixing and charge-parity
violation. We validate and evaluate the performance of the algorithms using
hadronic $B$ decays with flavor-specific final states reconstructed in a data
set corresponding to an integrated luminosity of $62.8$ fb$^{-1}$, collected at
the $\Upsilon(4S)$ resonance with the Belle II detector at the SuperKEKB
collider. We measure the total effective tagging efficiency to be
$\varepsilon_{\rm eff} = \big(30.0 \pm 1.2(\text{stat}) \pm
0.4(\text{syst})\big)\%$ for a category-based algorithm and $\varepsilon_{\rm
eff} = \big(28.8 \pm 1.2(\text{stat}) \pm 0.4(\text{syst})\big)\%$ for a
deep-learning-based algorithm.
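The effective tagging efficiency quoted above is conventionally defined as $\varepsilon_{\rm eff} = \sum_i \varepsilon_i (1 - 2w_i)^2$, where $\varepsilon_i$ is the fraction of events tagged in category $i$ and $w_i$ its wrong-tag probability; the factor $(1-2w_i)$ is the dilution. A minimal sketch of that combination follows, with entirely hypothetical category numbers (the paper's per-category values are not reproduced here).

```python
def effective_tagging_efficiency(categories):
    """Effective tagging efficiency eps_eff = sum_i eps_i * (1 - 2*w_i)^2,
    where eps_i is the tagging fraction of category i and w_i its
    wrong-tag probability.  (1 - 2*w_i) is the dilution factor."""
    return sum(eps * (1.0 - 2.0 * w) ** 2 for eps, w in categories)

# Hypothetical categories: (tagging fraction, wrong-tag fraction)
cats = [(0.20, 0.40), (0.10, 0.25), (0.05, 0.10)]
eps_eff = effective_tagging_efficiency(cats)   # 0.008 + 0.025 + 0.032 = 0.065
```

The quadratic dilution factor is why a small, cleanly tagged category (low $w_i$) can contribute more to $\varepsilon_{\rm eff}$ than a large but poorly tagged one.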