Utilizing clouds for Belle II
Sobie, R.J.
Journal of Physics: Conference Series, Volume 664, Issue 2, 12/2015
Journal Article · Peer reviewed · Open access
This paper describes the use of cloud computing resources for the Belle II experiment. A number of different methods are used to exploit the private and opportunistic clouds. Clouds are making significant contributions to the generation of Belle II MC data samples, and it is expected that their impact will continue to grow over the coming years.
The use of opportunistic cloud resources by HEP experiments has significantly increased over the past few years. Clouds that are owned or managed by the HEP community are connected to the LHCONE network or the research network with global access to HEP computing resources. Private clouds, such as those supported by non-HEP research funds, are generally connected to the international research network; commercial clouds, however, are either not connected to the research network or connect only to research sites within their national boundaries. Since research network connectivity is a requirement for HEP applications, we need a solution that provides a high-speed connection. We are studying a solution based on a virtual router that addresses the use case in which a commercial cloud has research network connectivity in a limited region. In this situation, we host a virtual router at our HEP site and require that all traffic from the commercial site transit through it. Although this may lengthen the network path and increase the load on the HEP site, it is a workable solution that would enable the use of the remote cloud for low I/O applications. We are exploring some simple open-source solutions. In this paper, we present the results of our studies and how they will benefit our use of private and public clouds for HEP computing.
The distributed cloud using the CloudScheduler VM provisioning service is one of the longest running systems for HEP workloads. It has run millions of jobs for ATLAS and Belle II over the past few years using private and commercial clouds around the world. Our goal is to scale the distributed cloud to the 10,000-core level, with the ability to run any type of application (low I/O, high I/O and high memory) on any cloud. To achieve this goal, we have been implementing changes that utilize context-aware computing designs of the kind currently employed in the mobile communication industry. Context-awareness makes use of real-time and archived data to respond to user or system requirements. In our distributed cloud, we have many opportunistic clouds with no local HEP services, software or storage repositories. A context-aware design significantly improves the reliability and performance of our system by locating the nearest instance of the required services. We describe how we are collecting and managing contextual information from our workload management systems, the clouds, the virtual machines and our services. This information is used not only to monitor the system but also to carry out automated corrective actions. We are incrementally adding new alerting and response services to our distributed cloud, which will enable us to scale the number of clouds and virtual machines. Further, a context-aware design will enable us to run analysis or high I/O applications on opportunistic clouds. We envisage an open-source HTTP data federation (for example, the DynaFed system at CERN) as a service that would provide access to the existing storage elements used by the HEP experiments.
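The nearest-service selection that a context-aware design relies on can be sketched as follows. This is a minimal illustration under stated assumptions, not CloudScheduler code: it assumes each VM holds a context dictionary of recently measured round-trip times to candidate service endpoints, and the proxy names and latency figures are hypothetical.

```python
# Hypothetical sketch of context-aware service selection: each cloud is
# assumed to report measured round-trip times (ms) to candidate service
# endpoints (e.g. software or storage repositories). Names and numbers
# below are illustrative only.

def nearest_service(context, services):
    """Return the candidate service with the lowest reported latency.

    context  -- dict mapping service name -> latest RTT in ms
    services -- list of candidate service names
    """
    reachable = [s for s in services if s in context]
    if not reachable:
        raise RuntimeError("no reachable service in context data")
    return min(reachable, key=lambda s: context[s])

# Example: a VM on an opportunistic cloud chooses among three
# hypothetical software-repository proxies using its latest RTTs.
rtts = {"squid-cern": 140.0, "squid-victoria": 8.5, "squid-triumf": 12.0}
proxies = ["squid-cern", "squid-victoria", "squid-triumf"]
print(nearest_service(rtts, proxies))  # squid-victoria has the lowest RTT
```

In practice the same contextual data (archived as well as real-time) would also drive the automated corrective actions described above, e.g. retiring an endpoint whose latency history degrades.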
We present a search for nine lepton-number-violating and three lepton-flavor-violating neutral charm decays of the type $D^0 → h'^-h^-ℓ'^+ℓ^+$ and $D^0 → h'^-h^+ℓ'^±ℓ^∓$, where $h$ and $h'$ represent a $K$ or $π$ meson and $ℓ$ and $ℓ'$ an electron or muon. The analysis is based on $468\ fb^{-1}$ of $e^+e^-$ annihilation data collected at or close to the $Υ(4S)$ resonance with the BABAR detector at the SLAC National Accelerator Laboratory. No significant signal is observed for any of the twelve modes, and we establish 90% confidence level upper limits on the branching fractions in the range $(1.0–30.6) × 10^{-7}$. The limits are between 1 and 3 orders of magnitude more stringent than previous measurements.
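A 90% confidence level upper limit of the kind quoted above can be illustrated with a classical Poisson counting calculation. This is a simplified sketch, not the paper's method: it ignores background subtraction and systematic uncertainties, and the efficiency and $D^0$ yield used to convert the event limit into a branching-fraction limit are invented placeholders.

```python
import math

def poisson_cdf(n, mu):
    """P(k <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def poisson_upper_limit(n_obs, cl=0.90):
    """Classical (frequentist) upper limit on the Poisson mean:
    the mu at which P(k <= n_obs | mu) drops to 1 - cl."""
    lo, hi = 0.0, 10.0 * (n_obs + 1) + 10.0
    for _ in range(100):  # bisection; cdf is monotonically decreasing in mu
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid) > 1.0 - cl:
            lo = mid
        else:
            hi = mid
    return hi

# Zero observed events -> about 2.303 signal events excluded at 90% CL
n_ul = poisson_upper_limit(0)

# Converting the event limit into a branching-fraction limit with
# purely illustrative (hypothetical) efficiency and D0 yield:
eff, n_d0 = 0.05, 1.0e9
b_ul = n_ul / (eff * n_d0)
print(round(n_ul, 3), b_ul)
```

The actual analysis sets mode-by-mode limits with signal efficiencies, backgrounds, and systematics folded in; this sketch only shows where the familiar 2.3-events-at-zero-observed number comes from.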
We report the observation of the rare charm decay $D^0 → K^-π^+e^+e^-$, based on $468\ fb^{-1}$ of $e^+e^-$ annihilation data collected at or close to the center-of-mass energy of the $Υ(4S)$ resonance with the BABAR detector at the SLAC National Accelerator Laboratory. We find the branching fraction in the invariant mass range $0.675 < m(e^+e^-) < 0.875\ GeV/c^2$ of the electron-positron pair to be $\mathscr{B}(D^0 → K^-π^+e^+e^-) = (4.0 ± 0.5 ± 0.2 ± 0.1) × 10^{-6}$, where the first uncertainty is statistical, the second systematic, and the third due to the uncertainty in the branching fraction of the decay $D^0 → K^-π^+π^+π^-$ used as a normalization mode. The significance of the observation corresponds to 9.7 standard deviations including systematic uncertainties. This result is consistent with the recently reported $D^0 → K^-π^+μ^+μ^-$ branching fraction, measured in the same invariant mass range, and with the value expected in the standard model. In a set of regions of $m(e^+e^-)$ where long-distance effects are potentially small, we determine a 90% confidence level upper limit on the branching fraction $\mathscr{B}(D^0 → K^-π^+e^+e^-) < 3.1 × 10^{-6}$.
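The normalization-mode technique mentioned above, measuring the signal relative to a well-known mode so that the $D^0$ production rate and common systematics cancel, can be sketched numerically. The yields and efficiencies below are invented placeholders, not BABAR values; only the roughly 8.2% normalization branching fraction is a real PDG-level number.

```python
# Sketch of a normalization-mode branching-fraction measurement:
# B(sig) = (N_sig / eff_sig) / (N_norm / eff_norm) * B(norm).
# All yields and efficiencies are illustrative placeholders.

def branching_fraction(n_sig, eff_sig, n_norm, eff_norm, bf_norm):
    """Efficiency-corrected signal yield over efficiency-corrected
    normalization yield, scaled by the known normalization BF."""
    return (n_sig / eff_sig) / (n_norm / eff_norm) * bf_norm

# Hypothetical yields/efficiencies; B(D0 -> K- pi+ pi+ pi-) ~ 8.2%.
bf = branching_fraction(n_sig=20, eff_sig=0.05,
                        n_norm=820000, eff_norm=0.10,
                        bf_norm=0.082)
print(f"{bf:.2e}")
```

Because the $D^0$ count cancels in the ratio, the third quoted uncertainty in the paper comes only from how well the normalization branching fraction itself is known.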
An angular analysis of the decay $\bar{B} → D^*ℓ^-\bar{ν}_ℓ$, $ℓ ∈ \{e, μ\}$, is reported using the full $e^+e^-$ collision data set collected by the BABAR experiment at the $Υ(4S)$ resonance. One $B$ meson from the $Υ(4S) → B\bar{B}$ decay is fully reconstructed in a hadronic decay mode, which constrains the kinematics and provides a determination of the neutrino momentum vector. The kinematics of the semileptonic decay is described by the dilepton mass squared, $q^2$, and three angles. The first unbinned fit to the full four-dimensional decay rate in the standard model is performed in the so-called Boyd-Grinstein-Lebed approach, which employs a generic $q^2$ parametrization of the underlying form factors based on crossing symmetry, analyticity, and QCD dispersion relations for the amplitudes. A fit using the more model-dependent Caprini-Lellouch-Neubert (CLN) approach is performed as well. Our form factor shapes show deviations from previous fits based on the CLN parametrization. The latest form factors also provide an updated prediction for the branching fraction ratio $\mathscr{R}(D^*) ≡ \mathscr{B}(\bar{B} → D^*τ^-\bar{ν}_τ)/\mathscr{B}(\bar{B} → D^*ℓ^-\bar{ν}_ℓ) = 0.253 ± 0.005$. Finally, using the well-measured branching fraction for the $\bar{B} → D^*ℓ^-\bar{ν}_ℓ$ decay, a value of $|V_{cb}| = (38.36 ± 0.90) × 10^{-3}$ is obtained that is consistent with the current world average for exclusive $\bar{B} → D^{(*)}ℓ^-\bar{ν}_ℓ$ decays and remains in tension with the determination from inclusive semileptonic $B$ decays to final states with charm.
Quantile mapping bias correction algorithms are commonly used to correct systematic distributional biases in precipitation outputs from climate models. Although they are effective at removing historical biases relative to observations, it has been found that quantile mapping can artificially corrupt future model-projected trends. Previous studies on the modification of precipitation trends by quantile mapping have focused on mean quantities, with less attention paid to extremes. This article investigates the extent to which quantile mapping algorithms modify global climate model (GCM) trends in mean precipitation and precipitation extremes indices. First, a bias correction algorithm, quantile delta mapping (QDM), that explicitly preserves relative changes in precipitation quantiles is presented. QDM is compared on synthetic data with detrended quantile mapping (DQM), which is designed to preserve trends in the mean, and with standard quantile mapping (QM). Next, methods are applied to phase 5 of the Coupled Model Intercomparison Project (CMIP5) daily precipitation projections over Canada. Performance is assessed based on precipitation extremes indices and results from a generalized extreme value analysis applied to annual precipitation maxima. QM can inflate the magnitude of relative trends in precipitation extremes with respect to the raw GCM, often substantially, as compared to DQM and especially QDM. The degree of corruption in the GCM trends by QM is particularly large for changes in long period return values. By the 2080s, relative changes in excess of +500% with respect to historical conditions are noted at some locations for 20-yr return values, with maximum changes by DQM and QDM nearing +240% and +140%, respectively, whereas raw GCM changes are never projected to exceed +120%.
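The QDM idea described above can be sketched on synthetic data: each projected value is assigned its quantile in the future model distribution, the modelled relative change at that quantile is computed against the historical model distribution, and that relative change is applied to the corresponding observed quantile. This is a minimal sketch for a ratio variable such as precipitation, using simple empirical quantile functions; it is not the authors' implementation and omits refinements such as moving time windows.

```python
import numpy as np

# Minimal quantile delta mapping (QDM) sketch for a ratio variable
# (e.g. precipitation): remove the historical distributional bias
# while preserving the model's relative change at each quantile.

def ecdf_quantile(x, values):
    """Empirical non-exceedance probability of x within `values`."""
    v = np.sort(values)
    return np.interp(x, v, np.linspace(0.0, 1.0, len(v)))

def inv_ecdf(tau, values):
    """Empirical quantile function of `values` at probability tau."""
    v = np.sort(values)
    return np.interp(tau, np.linspace(0.0, 1.0, len(v)), v)

def qdm(obs_hist, mod_hist, mod_fut):
    """Bias-correct mod_fut, preserving relative quantile changes."""
    tau = ecdf_quantile(mod_fut, mod_fut)      # quantile of each value
    delta = mod_fut / inv_ecdf(tau, mod_hist)  # modelled relative change
    return inv_ecdf(tau, obs_hist) * delta     # change applied to obs quantile

# Synthetic data: the model is dry-biased by a factor of 2 historically
# and projects a doubling of precipitation in the future.
rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 3.0, 5000)               # "observed" historical
mod = 0.5 * rng.gamma(2.0, 3.0, 5000)         # biased historical model
fut = 2.0 * 0.5 * rng.gamma(2.0, 3.0, 5000)   # future model run

corrected = qdm(obs, mod, fut)
# The corrected future mean should sit near double the observed mean:
# the bias is removed, but the raw model's relative change survives.
print(np.mean(corrected) / np.mean(obs))
```

Standard QM would instead map future values through the historical transfer function alone, which is exactly the step that can distort projected trends in the tails as the abstract describes.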