Experimental observations and advanced computer simulations in High Energy Physics (HEP) paved the way for the recent discoveries at the Large Hadron Collider (LHC) at CERN. Currently, Monte Carlo simulations account for a very significant fraction of the computational resources of the Worldwide LHC Computing Grid (WLCG). The expected growth in available computing performance will not be enough to fulfil the demand foreseen for the forthcoming High Luminosity run (HL-LHC). More efficient simulation codes are therefore required.
This study focuses on evaluating the impact of different build methods on simulation execution time. The Geant4 toolkit, the standard simulation code for the LHC experiments, consists of a set of libraries that can be linked either dynamically or statically to the simulation executable. Dynamic linking is currently the preferred build method.
In this work, three versions of the GCC compiler, namely 4.8.5, 6.2.0 and 8.2.0, have been used. In addition, a comparison between four optimization levels (-Os, -O1, -O2 and -O3) has also been performed.
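A minimal sketch of such a comparison is given below, assuming a hypothetical Geant4 application in geant4-app/ driven by a run.mac macro; the compiler binary names, the executable name "sim" and the BUILD_STATIC_LIBS CMake switch are assumptions and may differ from the setup actually used.

import subprocess
import time
from pathlib import Path
from statistics import median

SOURCE_DIR = Path("geant4-app")   # hypothetical application sources
MACRO = "run.mac"                 # hypothetical run macro

def build(build_dir: Path, cxx: str, opt: str, static: bool) -> None:
    # Configure and compile one build variant with CMake.
    subprocess.run([
        "cmake", "-S", str(SOURCE_DIR), "-B", str(build_dir),
        f"-DCMAKE_CXX_COMPILER={cxx}",
        f"-DCMAKE_CXX_FLAGS={opt}",
        # Static vs dynamic linking; the option name is an assumption.
        f"-DBUILD_STATIC_LIBS={'ON' if static else 'OFF'}",
    ], check=True)
    subprocess.run(["cmake", "--build", str(build_dir), "-j"], check=True)

def time_run(executable: Path, repeats: int = 5) -> float:
    # Median wall-clock time of the full simulation over several runs.
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run([str(executable), MACRO], check=True)
        samples.append(time.perf_counter() - start)
    return median(samples)

for cxx in ("g++-4.8", "g++-6", "g++-8"):       # compiler versions under test
    for opt in ("-Os", "-O1", "-O2", "-O3"):    # optimization levels
        for static in (False, True):            # build type
            tag = f"{cxx}{opt}{'-static' if static else '-dynamic'}"
            build_dir = Path("build") / tag
            build(build_dir, cxx, opt, static)
            print(tag, time_run(build_dir / "sim"))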
Static builds, for all the GCC versions considered, exhibit a reduction in execution time of about 10%. Switching to a newer GCC version yields an average improvement of about 30% in execution time, regardless of the build type. In particular, a static build with GCC 8.2.0 leads to an improvement of about 34% with respect to the default configuration (GCC 4.8.5, dynamic, -O2). The different GCC optimization levels do not affect the execution times.
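As a rough consistency check (ours, not the paper's), treating the two effects as independent multiplicative factors predicts a combined gain close to the measured 34%:

static_gain = 0.10    # static vs dynamic linking
compiler_gain = 0.30  # GCC 8.2.0 vs GCC 4.8.5
combined = 1 - (1 - static_gain) * (1 - compiler_gain)
print(f"multiplicative estimate: {combined:.0%}")  # 37%, vs ~34% measured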
Full detector simulation is known to consume a large proportion of the computing resources available to the LHC experiments, and reducing the time consumed by simulation will allow for more profound physics studies. There are many avenues to exploit, and in this work we investigate those that do not require changes in the GEANT4 simulation suite. In this study, several factors affecting the full GEANT4 simulation execution time are investigated. A broad range of configurations has been tested to ensure consistency of the physics results. The effect of the single-dynamic-library GEANT4 build type has been investigated, and the impact of different primary particles at different energies has been evaluated using GDML and GeoModel geometries. Some configurations have an impact on the physics results and are therefore excluded from further analysis. Usage of the single dynamic library is shown to increase execution time and does not represent a viable option for optimization. Lastly, the static build type is confirmed as the most effective method to reduce the simulation execution time.
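A scan over primary particles and energies of this kind can be driven by generated Geant4 macro files; the sketch below uses the standard G4ParticleGun messenger commands, while the particle list, energy grid and event count are illustrative assumptions, not the paper's exact settings.

from itertools import product
from pathlib import Path

PARTICLES = ["e-", "pi-", "proton"]   # illustrative primaries
ENERGIES_GEV = [1, 10, 50, 100]       # illustrative energy grid
EVENTS = 1000

Path("macros").mkdir(exist_ok=True)
for particle, energy in product(PARTICLES, ENERGIES_GEV):
    lines = [
        "/run/initialize",
        f"/gun/particle {particle}",  # standard G4ParticleGun commands
        f"/gun/energy {energy} GeV",
        f"/run/beamOn {EVENTS}",
        "",
    ]
    (Path("macros") / f"{particle}_{energy}GeV.mac").write_text("\n".join(lines))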
The Worldwide LHC Computing Grid (WLCG) today comprises a range of different types of resources, such as cloud centers, large and small HPC centers, and volunteer computing, as well as the traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the Nordugrid ARC-CE, a non-intrusive and lightweight compute element originally developed to cater for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native Nordugrid mode, which, contrary to the pilot mode, leaves job data transfers up to ARC. ARC's data transfer capabilities, together with the ARC Cache, are its most important features.
In this article we describe the data staging and cache functionality of an ARC-CE set up as an edge service to an HPC or cloud resource, and show the gain in efficiency this model provides compared to a traditional pilot model, especially for sites with remote storage.
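The efficiency gain rests on the cache short-circuiting repeated remote transfers: inputs are fetched once by the CE and shared across jobs. The following is an illustrative sketch of that idea only, not ARC's actual implementation; all names and paths are hypothetical.

import hashlib
import os
import shutil
import urllib.request
from pathlib import Path

CACHE_DIR = Path("/var/arc/cache")  # hypothetical shared cache location

def stage_input(url: str, session_dir: Path) -> Path:
    # Fetch one job input file, reusing the shared cache when possible.
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    cached = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if not cached.exists():
        # Cache miss: the CE downloads once, before the job starts, so the
        # worker node itself never has to contact remote storage.
        with urllib.request.urlopen(url) as src, open(cached, "wb") as dst:
            shutil.copyfileobj(src, dst)
    target = session_dir / Path(url).name
    os.link(cached, target)  # hard-link into the job's session directory
    return target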
CHEP 2018: Preface to the Proceedings. Forti, Alessandra; Betev, Latchezar; Litmaath, Maarten et al.
EPJ Web of Conferences, 2019, Volume: 214
Journal Article, Conference Proceeding
Peer reviewed
Open access
The 23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP) took place in the National Palace of Culture, Sofia, Bulgaria, from the 9th to the 13th of July 2018. 575 participants joined the plenary and the eight parallel sessions dedicated to: online computing; offline computing; distributed computing; data handling; software development; machine learning and physics analysis; clouds, virtualisation and containers; networks and facilities. The conference hosted 35 plenary presentations, 323 parallel presentations and 188 posters.
The increase in the scale of LHC computing during Run 3 and Run 4 (HL-LHC) will certainly require radical changes to the computing models and the data processing of the LHC experiments. The working group established by WLCG and the HEP Software Foundation to investigate all aspects of the cost of computing, and how to optimise it, has continued producing results and improving our understanding of this process. In particular, experiments have developed more sophisticated ways to calculate their resource needs, and we now have a much more detailed process for calculating infrastructure costs. This includes studies on the impact of HPC- and GPU-based resources on meeting the computing demands. We have also developed and refined tools to quantitatively study the performance of experiment workloads, and we are actively collaborating with other activities related to data access, benchmarking and technology cost evolution. In this contribution we present our recent developments and results and outline the directions of future work.
The increase in the scale of LHC computing expected for Run 3, and even more so for Run 4 (HL-LHC), over the next ten years will certainly require radical changes to the computing models and the data processing of the LHC experiments. Translating the requirements of the physics programmes into computing resource needs is a complicated process and subject to significant uncertainties. For this reason, WLCG has established a working group to develop methodologies and tools intended to characterise the LHC workloads, better understand their interaction with the computing infrastructure, calculate their cost in terms of resources and expenditure, and assist experiments, sites and the WLCG project in the evaluation of their future choices. This working group started in November 2017 and has about 30 active participants representing experiments and sites. In this contribution we present the activities, the results achieved and the future directions.
ARC-CE: updates and plans. Smirnova, Oxana; Kónya, Balázs; Cameron, David et al.
Kompʹûternye issledovaniâ i modelirovanie (Online), 6/2015, Volume: 7, Issue: 3
Journal Article
Peer reviewed
Open access
The ARC Compute Element is becoming more popular in the WLCG and EGI infrastructures, being used not only in the Grid context but also as an interface to HPC and cloud resources. It relies strongly on community contributions, which helps it keep up with changes in the distributed computing landscape. Future ARC plans are closely linked to the needs of LHC computing, whichever shape it may take. There are also numerous examples of ARC usage by smaller research communities through national computing infrastructure projects in different countries. As such, ARC is a viable solution for building uniform distributed computing infrastructures using a variety of resources.
A time projection chamber with GEM-based readout. Attié, David; Behnke, Ties; Bellerive, Alain et al.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 06/2017, Volume: 856
Journal Article
Peer reviewed
Open access
For the International Large Detector concept at the planned International Linear Collider, the use of time projection chambers (TPC) with micro-pattern gas detector readout as the main tracking detector is investigated. In this paper, results from a prototype TPC, placed in a 1 T solenoidal field and read out with three independent Gas Electron Multiplier (GEM) based readout modules, are reported. The TPC was exposed to a 6 GeV electron beam at the DESY II synchrotron. The efficiency for reconstructing hits, the measurement of the drift velocity, the space point resolution and the control of field inhomogeneities are presented.
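For context, single-point resolution studies of this kind conventionally fit the transverse resolution as a function of drift length z with the standard diffusion parametrization (the conventional form, not necessarily this paper's exact fit):

\sigma_x(z) \;=\; \sqrt{\sigma_0^2 + \frac{D_t^2\, z}{N_{\mathrm{eff}}}}

where \sigma_0 is the intrinsic resolution at zero drift, D_t the transverse diffusion constant, and N_{\mathrm{eff}} the effective number of electrons contributing to each measurement.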
As computational Grids move beyond the prototyping stage, reliability, performance, and ease of use and maintenance become focus areas for their adoption. In this paper, we describe the ARC (Advanced Resource Connector) Grid middleware, in which these issues have been given special consideration.
We present an in-depth view of the existing components of ARC, and discuss some of the new components, functionalities and enhancements currently under development. This paper also describes the architectural and technical choices that have been made to ensure scalability, stability and high performance. The core components of ARC have already been thoroughly tested in demanding production environments, where the middleware has been in use since 2002. The main goal of this paper is to provide a first comprehensive description of ARC.