With the advent of workloads containing explicit requests for multiple cores in a single grid job, grid sites faced a new set of challenges in workload scheduling. The most common batch schedulers deployed at HEP computing sites handle multicore scheduling poorly when only their native capabilities are used. This paper describes how efficient multicore scheduling was achieved at the sites the authors represent by implementing dynamically sized multicore partitions via a minimalistic addition to the Torque/Maui batch system already in use at those sites. The paper also includes example results from production use of the system, as well as measurements of how performance (especially the ramp-up in throughput for multicore jobs) depends on node size and job size.
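The core idea above, dynamically resizing a multicore partition in response to queued demand, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the cap, and the node geometry are all hypothetical, and a real deployment would act on Torque/Maui node properties rather than return a number.

```python
# Minimal sketch of dynamically sized multicore partitions, in the spirit of
# the approach described above. All names and thresholds are hypothetical;
# a real deployment would tag Torque/Maui nodes instead of returning a count.

def resize_partition(total_nodes, queued_multicore_cores, cores_per_node=8,
                     max_fraction=0.5):
    """Return how many whole nodes to dedicate to the multicore partition.

    queued_multicore_cores: total cores requested by queued multicore jobs.
    max_fraction: cap on the share of the farm the multicore partition may take.
    """
    needed = -(-queued_multicore_cores // cores_per_node)  # ceiling division
    cap = int(total_nodes * max_fraction)
    return min(needed, cap)

# Example: 100 nodes of 8 cores each, 120 multicore cores queued -> 15 nodes.
print(resize_partition(100, 120))  # 15
```

Capping the partition size is what keeps single-core throughput from collapsing while multicore demand ramps up, which is the trade-off the paper's measurements explore.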
The High-Resolution Spectrometers in Hall A at Jefferson Laboratory have been instrumented with state-of-the-art Vertical Drift Chambers designed and constructed by the Nuclear Interactions Group at MIT-LNS in conjunction with the Physics Division at Jefferson Lab. These chambers rely on a unique, high cell-density design made possible by the absence of field-shaping wires. Each chamber has an inherent per-plane resolution of 145 μm FWHM for 5-cell cosmic-ray tracks when operated on the bench at −4.8 kV with argon–isobutane gas, and of 225 μm FWHM for 5-cell electron tracks when operated in the High-Resolution Spectrometer detector stack at −4.0 kV with argon–ethane gas. The design and construction facilitate wire placement and replacement to 50 μm, very low dark current, and no cross-talk. The detectors have been in almost continuous use since April 1996, providing reliable, high-resolution charged-particle tracking data for the Hall A physics program. A complete overview of this project is presented.
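The resolutions above are quoted as FWHM. For a Gaussian response, FWHM = 2√(2 ln 2) σ ≈ 2.355 σ, so the equivalent standard deviations can be computed directly; the short conversion below is an illustration, not part of the original analysis.

```python
import math

# For a Gaussian response, FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma, so the
# quoted FWHM resolutions translate to these standard deviations (in μm).
def fwhm_to_sigma(fwhm_um):
    return fwhm_um / (2.0 * math.sqrt(2.0 * math.log(2.0)))

print(round(fwhm_to_sigma(145), 1))  # bench cosmic-ray tracks: 61.6 μm
print(round(fwhm_to_sigma(225), 1))  # in-stack electron tracks: 95.5 μm
```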
In preparation for the XENON1T Dark Matter data acquisition, we have prototyped and implemented a new computing model. The XENON signal and data processing software is developed fully in Python 3 and makes extensive use of generic scientific data analysis libraries, such as the SciPy stack. A certain tension between modern "Big Data" solutions and existing HEP frameworks is typically experienced in smaller particle physics experiments. ROOT is still the "standard" data format in our field, defined by large experiments (ATLAS, CMS). To ease the transition, our computing model caters to both analysis paradigms, leaving the choice of using ROOT-specific C++ libraries, or alternatively, Python and its data analytics tools, as a front-end choice when developing physics algorithms. We present our path toward harmonizing these two ecosystems, which allowed us to use off-the-shelf software libraries (e.g., NumPy, SciPy, scikit-learn, matplotlib) and lower the cost of development and maintenance. To analyse the data, our software allows researchers to easily create "mini-trees": small, tabular ROOT structures for Python analysis, which can be read directly into pandas DataFrame structures. One of our goals was making ROOT available as a cross-platform binary for easy installation from the Anaconda Cloud (without going through "dependency hell"). In addition to helping us discover dark matter interactions, lowering this barrier helps shift particle physics toward non-domain-specific code.
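The "mini-tree" idea above, a small flat table of per-event quantities consumed as a pandas DataFrame, can be illustrated as follows. This is not the XENON software: the branch names and sample values are invented, and a NumPy structured array stands in for the ROOT file so the example is self-contained; in the real workflow the same rows would be read from a ROOT mini-tree.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in for a "mini-tree": a small, flat, tabular per-event
# structure. In the real workflow these rows live in a ROOT file and are read
# directly into a pandas DataFrame; here a NumPy structured array plays that
# role so the example is self-contained. Branch names are hypothetical.
minitree = np.array(
    [(1, 12.7, 0.43), (2, 8.1, 0.91), (3, 15.2, 0.12)],
    dtype=[("event_number", "i8"), ("s1_area", "f8"), ("s2_s1_ratio", "f8")],
)

df = pd.DataFrame(minitree)

# Typical analysis step: a selection cut expressed directly in pandas.
selected = df[df["s1_area"] > 10.0]
print(len(selected))  # 2 events pass the cut
```

Because the mini-tree is flat and columnar, the same table is equally readable from ROOT-based C++ and from the Python data-analytics stack, which is the dual front-end choice the computing model aims for.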
We measured angular distributions of recoil-polarization response functions for neutral pion electroproduction at W = 1.23 GeV and Q² = 1.0 (GeV/c)², obtaining 14 separated response functions plus 2 Rosenbluth combinations; of these, 12 have been observed for the first time. Dynamical models do not describe well the quantities governed by imaginary parts of interference products, indicating the need to adjust magnitudes and phases of nonresonant amplitudes. We performed a nearly model-independent multipole analysis and obtained values of Re(S1+/M1+) = −(6.84 ± 0.15)% and Re(E1+/M1+) = −(2.91 ± 0.19)% that are distinctly different from those of the traditional Legendre analysis based upon M1+ dominance and ℓπ ≤ 1 truncation.
Analysis of empty ATLAS pilot jobs
Love, P A; Alef, M; Dal Pra, S ...
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 9
Journal Article, Peer reviewed, Open access
In this analysis we quantify the wallclock time used by short empty pilot jobs on a number of WLCG compute resources. Pilot factory logs and site batch logs are used to provide independent accounts of the usage. Results show a wide variation in the wallclock time used by short jobs, depending on the site and queue and changing with time. For a reference dataset of all jobs in August 2016, the fraction of wallclock time used by empty jobs per studied site ranged from 0.1% to 0.8%. Aside from the wall time used by empty pilots, we also looked at how many pilots were empty as a fraction of all pilots sent. Binning the August dataset into days, empty fractions between 2% and 90% were observed. The higher fractions correlate well with periods when few actual payloads were sent to the site.
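The per-day "empty fraction" described above reduces to a simple aggregation over pilot records. The sketch below shows that computation with invented sample data and field names; the real analysis reads pilot factory and batch logs rather than an in-memory list.

```python
from collections import defaultdict
from datetime import date

# Minimal sketch of the per-day empty-pilot fraction computed above.
# Each record is (day, was_empty); the sample data is invented.
pilots = [
    (date(2016, 8, 1), True), (date(2016, 8, 1), False),
    (date(2016, 8, 1), False), (date(2016, 8, 1), False),
    (date(2016, 8, 2), True), (date(2016, 8, 2), True),
]

counts = defaultdict(lambda: [0, 0])  # day -> [empty pilots, total pilots]
for day, was_empty in pilots:
    counts[day][0] += int(was_empty)
    counts[day][1] += 1

fractions = {day: empty / total for day, (empty, total) in counts.items()}
print(fractions[date(2016, 8, 1)])  # 0.25
print(fractions[date(2016, 8, 2)])  # 1.0
```

The same binning with wallclock time instead of pilot counts yields the per-site wallclock fractions quoted in the abstract.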
After the successful first run of the LHC, data taking is scheduled to restart in Summer 2015 with experimental conditions leading to increased data volumes and event complexity. In order to process the data generated in such a scenario and exploit the multicore architectures of current CPUs, the LHC experiments have developed parallelized software for data reconstruction and simulation. However, a good fraction of their computing effort is still expected to be executed as single-core tasks. Therefore, jobs with diverse resource requirements will be distributed across the Worldwide LHC Computing Grid (WLCG), making workload scheduling a complex problem in itself. In response to this challenge, the WLCG Multicore Deployment Task Force was created to coordinate the joint effort of experiments and WLCG sites. The main objective is to ensure the convergence of approaches from the different LHC Virtual Organizations (VOs) to make the best use of the shared resources in order to satisfy their new computing needs, minimizing any inefficiency originating from the scheduling mechanisms, and without imposing unnecessary complexity on the way sites manage their resources. This paper describes the activities and progress of the Task Force related to the aforementioned topics, including experiences from key sites on how to best use different batch system technologies, the evolution of workload submission tools by the experiments, and the knowledge gained from scale tests of the different proposed job submission strategies.
We have studied the quasielastic ³He(e,e′p)²H reaction in perpendicular coplanar kinematics, with the energy and momentum transferred by the electron fixed at 840 MeV and 1502 MeV/c, respectively. The ³He(e,e′p)²H cross section was measured for missing momenta up to 1000 MeV/c, while the ATL asymmetry was extracted for missing momenta up to 660 MeV/c. For missing momenta up to 150 MeV/c, the cross section is described by variational calculations using modern ³He wave functions. For missing momenta from 150 to 750 MeV/c, strong final-state interaction effects are observed. Near 1000 MeV/c, the experimental cross section is more than an order of magnitude larger than predicted by available theories. The ATL asymmetry displays characteristic features of broken factorization, with a structure similar to that generated by available models.