Abstract
The magnetic dipole polarizabilities of the vector ρ⁰ and ρ± mesons are calculated in SU(3) pure gauge theory. Based on these results, the authors explore the contribution of the dipole magnetic polarizabilities to the tensor polarization of the vector mesons in an external abelian magnetic field. The tensor polarization leads to the dilepton asymmetry observed in non-central heavy-ion collisions and can also be estimated in lattice gauge theory.
The decays B_s^0 → J/ψK⁺K⁻π⁺π⁻ are studied using a data set corresponding to an integrated luminosity of 9 fb⁻¹ collected by the LHCb experiment in proton-proton collisions between 2011 and 2018. The decays B_s^0 → J/ψφπ⁺π⁻ and B_s^0 → χc1(3872)K⁺K⁻, where the K⁺K⁻ pair does not originate from the φ meson, are observed for the first time. Precise measurements of the branching fraction ratios between the B_s^0 → J/ψφπ⁺π⁻, B_s^0 → χc1(3872)φ, B_s^0 → ψ(2S)φ and B_s^0 → χc1(3872)K⁺K⁻ channels are reported. A structure denoted X(4740) is observed in the J/ψφ mass spectrum with a significance in excess of 5.3 standard deviations. In addition, the most precise measurement of the B_s^0 meson mass is made.
The second generation of the ATLAS Production System, called ProdSys2, is a distributed workload manager that runs hundreds of thousands of jobs daily, from dozens of different ATLAS-specific workflows, across more than a hundred heterogeneous sites. It achieves high utilization by combining dynamic job definition, based on criteria such as input and output size, memory requirements and CPU consumption, with manageable scheduling policies, and by supporting different kinds of computational resources, such as Grid sites, clouds, supercomputers and volunteer computers. The system dynamically assigns a group of jobs (a task) to a group of geographically distributed computing resources. Dynamic assignment and resource utilization is one of the major features of the system; it did not exist in the earliest versions of the production system, where the Grid resource topology was predefined using national and/or geographical patterns. The Production System has a sophisticated job fault-recovery mechanism, which allows multi-terabyte tasks to run efficiently without human intervention. We have implemented a "train" model and open-ended production, which allow tasks to be submitted automatically as soon as a new set of data is available, and allow physics-group data processing and analysis to be chained with the experiment's central production. We present an overview of the ATLAS Production System and the features and architecture of its major components: task definition, web user interface and monitoring. We describe the important design decisions and lessons learned from operational experience during the first year of LHC Run 2. We also report the performance of the designed system and how various workflows, such as data (re)processing, Monte Carlo and physics-group production, and user analysis, are scheduled and executed within one production system on heterogeneous computing resources.
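The idea of matching a task to a resource by criteria such as memory requirements and data locality can be illustrated with a minimal sketch. All names, fields and scoring weights below are hypothetical, chosen only to convey the principle; this is not the actual ProdSys2 implementation.

```python
# Illustrative sketch of criteria-based task-to-resource assignment.
# Field names and the scoring heuristic are assumptions, not ProdSys2 code.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    input_gb: float      # total input size
    mem_mb: int          # per-job memory requirement
    cpu_hours: float     # estimated CPU consumption

@dataclass
class Site:
    name: str
    free_slots: int
    mem_mb: int          # memory available per job slot
    cached_gb: float     # input data already resident at the site

def score(task: Task, site: Site) -> float:
    """Reject sites that cannot run the jobs; otherwise prefer data locality."""
    if site.mem_mb < task.mem_mb or site.free_slots == 0:
        return float("-inf")
    locality = min(site.cached_gb / task.input_gb, 1.0) if task.input_gb else 1.0
    return locality * site.free_slots

def assign(task: Task, sites: list) -> Site:
    """Greedy choice of the best-scoring site for the whole task."""
    return max(sites, key=lambda s: score(task, s))

sites = [
    Site("GRID-A", free_slots=500, mem_mb=2048, cached_gb=10.0),
    Site("HPC-B", free_slots=2000, mem_mb=4096, cached_gb=0.0),
]
task = Task("mc-simul", input_gb=8.0, mem_mb=3000, cpu_hours=1e4)
print(assign(task, sites).name)  # GRID-A lacks memory per slot, so HPC-B wins
```

The point of the sketch is the hard constraint plus soft preference structure: sites that cannot satisfy the memory requirement are excluded outright, and the remainder are ranked by criteria such as locality and free capacity.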
Having information such as an estimate of the processing time or the possibility of a system outage (abnormal behaviour) helps in monitoring system performance and predicting its next state. The current cyber-infrastructure of the ATLAS Production System presents computing conditions in which contention for resources among high-priority data analyses happens routinely, which might lead to significant workload and data-handling interruptions. The inability to monitor and predict the behaviour of the analysis process (its duration) and the state of the system itself motivates a focus on the design of built-in situational-awareness analytic tools.
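A minimal sketch of the kind of duration forecasting the text motivates is an exponentially weighted moving average over past task durations; a large deviation between forecast and observation could then flag abnormal behaviour. This is purely illustrative and not the analytics actually built into the ATLAS Production System.

```python
# Hedged sketch: one-step-ahead duration forecast via an exponentially
# weighted moving average (EWMA). The smoothing factor alpha is an assumption.
def ewma_forecast(durations, alpha=0.3):
    """Return a forecast of the next duration, in the same units as the input."""
    forecast = durations[0]
    for d in durations[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
    return forecast

history_h = [10.0, 12.0, 11.0, 30.0]  # hours; the spike could indicate an outage
print(round(ewma_forecast(history_h), 2))
```

In practice such a forecast would feed an anomaly threshold (e.g. observed duration exceeding the forecast by some factor), which is the "predict its next state" capability the abstract describes.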
The Hadron Calorimeter (HCAL) of LHCb is one of the four sub-detectors of the experiment's calorimetric system, which also includes the Scintillator Pad Detector (SPD), the Pre-Shower Detector (PS) and the electromagnetic calorimeter (ECAL). The main purpose of the HCAL is to provide data to the Level-0 trigger for selecting events with high-transverse-energy hadrons. It is important to have a precise and reliable calibration system that produces results immediately after the calibration run. The LHCb HCAL is equipped with a calibration system based on a ¹³⁷Cs radioactive source embedded into the calorimeter structure. It allows an absolute calibration to be obtained with good precision and the technical condition of the detector to be monitored.
The Big Data processing needs of the ATLAS experiment grow continuously, as more data and more use cases emerge. For Big Data processing the ATLAS experiment adopted the data transformation approach, where software applications transform the input data into outputs. In the ATLAS production system, each data transformation is represented by a task, a collection of many jobs, submitted by the ATLAS workload management system (PanDA) and executed on the Grid. Our experience shows that the rate of task submission has grown exponentially over the years. To scale up the ATLAS production system for new challenges, we started the ProdSys2 project. PanDA has been upgraded with the Job Execution and Definition Interface (JEDI). Patterns in ATLAS data transformation workflows composed of many tasks provided the basis for a scalable production system framework with template definitions of many-task workflows. These workflows are being implemented in the Database Engine for Tasks (DEfT), which generates individual tasks for processing by JEDI. We report on the ATLAS experience with many-task workflow patterns in preparation for LHC Run 2.
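The template idea can be sketched in a few lines: a workflow template is expanded into concrete task definitions, one per input dataset, which a JEDI-like layer would then split into jobs. The dictionary fields below are illustrative assumptions, not DEfT's actual schema.

```python
# Hedged sketch: expanding a workflow template into per-dataset tasks.
# "transformation", "release" and "input_dataset" are hypothetical field names.
def expand_template(template: dict, datasets: list) -> list:
    """Instantiate one task per input dataset from a workflow template."""
    return [
        {**template, "task_id": i, "input_dataset": ds}
        for i, ds in enumerate(datasets)
    ]

template = {"transformation": "reco", "release": "21.0.X"}
tasks = expand_template(template, ["data15.raw", "data16.raw"])
print(len(tasks), tasks[0]["input_dataset"])
```

The benefit the abstract describes follows from this pattern: a many-task workflow is defined once, as a template, and scales with the number of datasets rather than requiring each task to be defined by hand.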
This document describes the design of the new Production System of the ATLAS experiment at the LHC [1]. The Production System is the top-level workflow manager which translates physicists' needs for production-level processing and analysis into actual workflows executed across over a hundred Grid sites used globally by ATLAS. As the production workload has increased in volume and complexity in recent years (the ATLAS production task count is above one million, with each task containing hundreds or thousands of jobs), there is a need to upgrade the Production System to meet the challenging requirements of the next LHC run while minimizing the operating costs. In the new design, the main subsystems are the Database Engine for Tasks (DEFT) and the Job Execution and Definition Interface (JEDI). Based on users' requests, DEFT manages inter-dependent groups of tasks (Meta-Tasks) and generates the corresponding data processing workflows. The JEDI component then dynamically translates the task definitions from DEFT into actual workload jobs executed in the PanDA Workload Management System [2]. We present the requirements, design parameters, basics of the object model and concrete solutions utilized in building the new Production System and its components.
The article presents a numerical simulation of turbulent flow in a rotating rectangular 90° bend channel using the WMLES method, and studies the effect of rotation on the flow structure. The article also assesses the accuracy of various semi-empirical turbulence models for closing the Reynolds equations for flows of this type by comparison with the WMLES results for the cases with and without rotation.
During three years of LHC data taking, the ATLAS collaboration completed three petascale data reprocessing campaigns on the Grid, with up to 2 PB of data being reprocessed every year. In reprocessing on the Grid, failures can occur for a variety of reasons, while Grid heterogeneity makes failures hard to diagnose and repair quickly. As a result, Big Data processing on the Grid must tolerate a continuous stream of failures, errors and faults. While ATLAS fault-tolerance mechanisms improve the reliability of Big Data processing on the Grid, their benefits come at a cost and introduce delays that make performance prediction difficult. Reliability Engineering provides a framework for a fundamental understanding of Big Data processing on the Grid, which is not a desirable enhancement but a necessary requirement. In ATLAS, cost monitoring and performance prediction became critical for the success of the reprocessing campaigns conducted in preparation for the major physics conferences. In addition, our Reliability Engineering approach supported continuous improvements in data reprocessing throughput during LHC data taking: the throughput doubled in the 2011 vs. 2010 reprocessing, then quadrupled in the 2012 vs. 2011 reprocessing. We present a Reliability Engineering analysis of the ATLAS data reprocessing campaigns, providing the foundation needed to scale up Big Data processing technologies beyond the petascale.
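Why fault tolerance is a necessary requirement rather than an enhancement follows from simple reliability arithmetic: with many independent jobs per task, even a small per-job failure rate makes an unprotected task almost certain to fail, while a few automatic retries restore near-certain success. The numbers below are illustrative, not ATLAS measurements.

```python
# Illustrative Reliability Engineering arithmetic (assumed, not ATLAS data):
# probability that a task of n independent jobs completes, when each job
# fails with probability p_fail but is retried automatically up to r times.
def task_success_probability(n_jobs: int, p_fail: float, retries: int) -> float:
    per_job = 1.0 - p_fail ** (retries + 1)  # job succeeds on some attempt
    return per_job ** n_jobs

# 1000 jobs at a 5% per-job failure rate: without retries the task almost
# never finishes; with 3 retries it almost always does.
print(task_success_probability(1000, 0.05, 0))
print(task_success_probability(1000, 0.05, 3))
```

This also illustrates the cost/delay trade-off the abstract mentions: each retry buys reliability at the price of extra compute and a longer, harder-to-predict completion time.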
We performed a measurement of differential and integral jet shapes in proton-carbon, proton-tungsten and proton-aluminium collisions at 920 GeV/c proton momentum with the HERA-B detector at HERA, for jet transverse energies in the range 4 GeV < E_T(jet) < 12 GeV. Jets were identified using the kT-clustering algorithm. The measurements were performed for the hardest jet in the event, directed towards the side opposite to the trigger direction. Jets become narrower with increasing transverse energy, and the measured distributions agree well with predictions of the PYTHIA 6.2 model. We do not observe any significant difference in the jet shape between the carbon and aluminium targets. Nevertheless, the transverse energy flow at small and large radii for the tungsten sample is slightly lower than for the light nuclei. This observation indicates some influence of the nuclear environment on the formation of jets in heavy nuclei, especially at lower transverse energies, 5 GeV < E_T(jet) < 6 GeV.
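The integral jet shape quoted above, Ψ(r), is conventionally the fraction of the jet's transverse energy contained within radius r of the jet axis in (η, φ) space; a jet "becoming narrower" means Ψ(r) rises faster towards 1. The sketch below computes it on toy constituents and is only an illustration; the HERA-B analysis itself ran the kT-clustering algorithm on real events.

```python
# Hedged sketch of the integral jet shape Psi(r) on toy jet constituents.
# Particle list and jet axis are assumed inputs, not HERA-B data.
import math

def integral_jet_shape(particles, axis_eta, axis_phi, r):
    """particles: list of (et, eta, phi); returns the E_T fraction within r."""
    total = sum(et for et, _, _ in particles)
    inside = 0.0
    for et, eta, phi in particles:
        # wrap the azimuthal difference into (-pi, pi]
        dphi = math.atan2(math.sin(phi - axis_phi), math.cos(phi - axis_phi))
        if math.hypot(eta - axis_eta, dphi) < r:
            inside += et
    return inside / total

toy_jet = [(6.0, 0.0, 0.0), (2.0, 0.3, 0.1), (1.0, 0.8, -0.6)]
print(round(integral_jet_shape(toy_jet, 0.0, 0.0, 0.5), 3))
```

By construction Ψ(r) → 1 as r grows to cover the whole jet, so comparing Ψ(r) at fixed r between targets (carbon, aluminium, tungsten) directly probes how the nuclear environment redistributes the transverse energy flow.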