We present the first application of three-dimensional convolutional Generative Adversarial Networks (GANs) to High Energy Physics simulation. We generate three-dimensional images of particles depositing energy in high-granularity calorimeters. This is the first time such an approach has been taken in HEP, where most data is three-dimensional in nature but it is customary to convert it into two-dimensional slices. The present work demonstrates the success of using three-dimensional convolutional GANs: energy showers are well reproduced in all dimensions and show good agreement with standard techniques (Geant4 detailed simulation). We also demonstrate the ability to condition training on several parameters, such as particle type and energy. This work aims to prove that deep learning techniques represent a valid fast alternative to standard Monte Carlo approaches. It is part of the GeantV project.
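The conditioning mechanism mentioned above can be illustrated with a minimal sketch. This is an assumption about how such conditioning is commonly wired up (latent noise concatenated with a one-hot particle type and a scaled energy label), not the authors' code; the function name and label set are hypothetical:

```python
import random

PARTICLE_TYPES = ["e-", "gamma", "pi0"]  # hypothetical label set

def make_generator_input(latent_dim, particle, energy_gev, max_energy_gev=500.0):
    """Build one conditional generator input: noise ++ one-hot type ++ scaled energy."""
    noise = [random.gauss(0.0, 1.0) for _ in range(latent_dim)]
    one_hot = [1.0 if p == particle else 0.0 for p in PARTICLE_TYPES]
    # Appending the labels lets the generator learn type- and energy-dependent showers.
    return noise + one_hot + [energy_gev / max_energy_gev]

vec = make_generator_input(64, "e-", 100.0)
print(len(vec))  # 64 noise + 3 type + 1 energy = 68
```

The generator network (not shown) would map such a vector to a 3D energy-deposit image, so the same trained model can be asked for showers of a given particle type and energy.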
Efficient random number generation with high-quality statistical properties and exact reproducibility of Monte Carlo simulations are important requirements in many areas of computational science. VecRNG is a package providing pseudo-random number generation (pRNG) in the context of a new library, VecMath. This library bundles several general-purpose mathematical utilities, data structures, and algorithms with both SIMD and SIMT (GPU) support based on VecCore. Several state-of-the-art RNG algorithms are implemented as kernels supporting parallel generation of random numbers in scalar, vector, and CUDA workflows. In this report, we present design considerations, implementation details, and computing performance of parallel pRNG engines on both CPU and GPU. Reproducibility of propagating multiple particles in parallel for HEP event simulation is demonstrated, using GeantV-based examples, for both sequential and fine-grained track-level concurrent simulation workflows. Strategies for efficient use of vectorized pRNG and non-overlapping streams of random number sequences in concurrent computing environments are discussed as well.
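The reproducibility requirement above — identical results no matter how tracks are scheduled across threads — is commonly met by deriving each track's stream from its identifiers rather than from execution order. A minimal stdlib sketch of that idea (hypothetical names, not the VecRNG API):

```python
import random

def track_stream(event_id: int, track_id: int) -> random.Random:
    """Return an independent, reproducible pRNG stream keyed on (event, track)."""
    # Seeding on the IDs alone decouples the sequence from thread scheduling:
    # a track sees the same numbers no matter which worker transports it.
    seed = (event_id << 32) | track_id
    return random.Random(seed)

# The same track always receives the same sequence, in any execution order.
first = [track_stream(7, track).random() for track in (0, 1, 2)]
again = [track_stream(7, track).random() for track in reversed((0, 1, 2))]
assert first == list(reversed(again))
```

Production engines typically use counter-based or stream-skipping generators for guaranteed non-overlap; the keyed-seed approach here only sketches the reproducibility contract.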
Making general particle transport simulation for high-energy physics (HEP) single-instruction-multiple-thread (SIMT) friendly, to take advantage of accelerator hardware, is an important alternative for boosting the throughput of simulation applications. To date, this challenge has not been resolved, due to the difficulty of mapping the complexity of Geant4 components and workflow onto the massive parallelism features exposed by graphics processing units (GPUs). The AdePT project is one of the R&D initiatives tackling this limitation and exploring GPUs as potential accelerators for offloading part of the CPU simulation workload. Our main target is to implement a complete electromagnetic shower demonstrator working on the GPU. The project is the first to create a full prototype of a realistic electron, positron, and gamma electromagnetic shower simulation on GPU, implemented either as a standalone application or as an extension of the standard Geant4 CPU workflow. Our prototype currently provides a platform to explore many optimisations and different approaches. We present the most recent results and initial conclusions of our work, using both a standalone GPU performance analysis and a first implementation of a hybrid workflow based on Geant4 on the CPU and AdePT on the GPU.
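At its core, the hybrid workflow described above amounts to a dispatch on particle species: electrons, positrons, and gammas are batched for the GPU stage, everything else stays with CPU transport. A schematic stdlib sketch (hypothetical names, not the AdePT implementation):

```python
EM_PARTICLES = {"e-", "e+", "gamma"}  # species handled by the GPU EM prototype

def dispatch(tracks):
    """Split tracks into a GPU-bound EM batch and a CPU remainder."""
    gpu_batch = [t for t in tracks if t["pdg_name"] in EM_PARTICLES]
    cpu_batch = [t for t in tracks if t["pdg_name"] not in EM_PARTICLES]
    return gpu_batch, cpu_batch

tracks = [{"pdg_name": "e-"}, {"pdg_name": "neutron"}, {"pdg_name": "gamma"}]
gpu, cpu = dispatch(tracks)
print(len(gpu), len(cpu))  # 2 1
```

The real integration must also copy batched track data to device memory and return hits and secondaries to the CPU workflow; this sketch shows only the classification step.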
In the fall of 2016, GeantV went through a thorough community evaluation of the project status and of its strategy for sharing the R&D results with the LHC experiments and with the HEP simulation community in general. Following this discussion, GeantV embarked on an ambitious two-year roadmap aiming to deliver a beta version that has most of the final design and several performance features of the final product, partially integrated with some of the experiments' frameworks. The initial GeantV prototype has been updated to a vector-aware concurrent framework, able to deliver high-density floating-point computations for most of the performance-critical components, such as propagation in field and physics models. Electromagnetic physics models were adapted to the specific GeantV requirements, aiming for a full demonstration of shower physics performance in the alpha release at the end of 2017. We have revisited and formalized the GeantV user interfaces and helper protocols, allowing users to connect their code, and providing recipes to efficiently access MC truth and to generate user data in a concurrent environment.
Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to service and network latencies, remote data access and a heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last two years, ALICE analysis shifted from a fast development phase to a more mature and stable code base. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved by an important factor to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results, to allow for better-performing ALICE analysis.
We have developed an interface within the ALICE analysis framework that allows transparent usage of the experiment's distributed resources. This analysis plug-in makes it possible to configure back-end-specific parameters from a single interface and to run the same custom user analysis unchanged in many computing environments, from local workstations to PROOF clusters or Grid resources. The tool is now used extensively in the ALICE collaboration for both end-user analysis and large-scale productions.
Geant4 VMC 3.0
Hřivnáčová, I; Gheata, A
Journal of Physics: Conference Series, 01/2015, Volume 664, Issue 7
Journal Article
Peer-reviewed
Open Access
Virtual Monte Carlo (VMC) [1] provides an abstract interface into Monte Carlo transport codes. A user VMC-based application, independent of the specific Monte Carlo codes, can then be run with any of the supported simulation programs. Developed by the ALICE Offline Project and further included in ROOT [2], the interface and implementations have reached stability during the last decade and have become a foundation for other detector simulation frameworks, the FAIR facility experiments' framework being among the first and largest. Geant4 VMC [3], which provides the implementation of the VMC interface for Geant4 [4], is in continuous maintenance and development, driven by the evolution of Geant4 on one side and by requirements from users on the other. Besides the implementation of the VMC interface, Geant4 VMC also provides a set of examples that demonstrate the use of VMC to new users and also serve testing purposes. Since major release 2.0, it has included the G4Root navigator package, which implements an interface that allows one to run a Geant4 simulation using a ROOT geometry. The release of Geant4 version 10.00, with its integration of multithreaded processing, triggered the development of the next major version of Geant4 VMC (version 3.0), released in November 2014. A beta version, available for user testing since March, has helped its consolidation and improvement. We review the new capabilities introduced in this major version, in particular the integration of multithreading into the VMC design, its impact on the Geant4 VMC and G4Root packages, and the introduction of a new package, MTRoot, providing utility functions for ROOT parallel output into independent files, with the necessary additions for thread safety. Migration of user applications to multithreading that preserves the ease of use of VMC will also be discussed.
We will also report on the introduction of a new CMake-based [5] build system, the migration to ROOT major release 6, and the improvement of the testing suites.
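The VMC pattern described above — user code written against an abstract transport interface, with the concrete engine chosen at run time — can be illustrated with a minimal sketch (hypothetical class and method names, not the actual VMC API):

```python
from abc import ABC, abstractmethod

class VirtualMC(ABC):
    """Abstract transport interface; user code depends only on this."""
    @abstractmethod
    def process_event(self, event_id: int) -> str: ...

class Geant4Backend(VirtualMC):
    def process_event(self, event_id):
        return f"Geant4 transported event {event_id}"

class Geant3Backend(VirtualMC):
    def process_event(self, event_id):
        return f"Geant3 transported event {event_id}"

def run_simulation(engine: VirtualMC, n_events: int):
    # The application loop never references a concrete engine,
    # so the same user code runs with any supported transport program.
    return [engine.process_event(i) for i in range(n_events)]

print(run_simulation(Geant4Backend(), 2))
```

Swapping `Geant4Backend()` for `Geant3Backend()` changes the engine without touching the application, which is the portability property the VMC interface provides.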
An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of existing HEP detector simulation software and the ideal achievable performance by exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests to verify the vectorized models, checking their consistency with the corresponding Geant4 models and validating them against experimental data.
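An automated consistency check of the kind described — comparing samples drawn from a vectorized model against its scalar reference — can be sketched with a two-sample Kolmogorov–Smirnov statistic. This is a stdlib-only illustration; the Gaussian stand-in samples and the acceptance threshold are assumptions, not the tests actually used by GeantV:

```python
import math
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    na, nb = len(a), len(b)
    i = j = 0
    d = 0.0
    while i < na and j < nb:
        # Advance past all copies of the smaller value, then compare CDFs.
        x = min(a[i], b[j])
        while i < na and a[i] == x:
            i += 1
        while j < nb and b[j] == x:
            j += 1
        d = max(d, abs(i / na - j / nb))
    return d

# Stand-ins for samples from a scalar reference model and its vectorized port.
rng = random.Random(42)
scalar = [rng.gauss(0.0, 1.0) for _ in range(2000)]
vector = [rng.gauss(0.0, 1.0) for _ in range(2000)]

# Illustrative ~95% critical value for equal sample sizes n = m = 2000.
critical = 1.36 * math.sqrt(2 / 2000)
print(ks_statistic(scalar, vector) < critical)
```

In a real validation suite the two sample sets would come from the scalar and vectorized implementations of the same physics model (e.g. secondary energy spectra), with the test repeated over the relevant energy and material grid.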
GeantV
Amadio, G.; Ananya, A.; Apostolakis, J. ...
Computing and Software for Big Science, 12/2021, Volume 5, Issue 1
Journal Article
Open Access
Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider. In the early 2010s, it was projected that simulation demands would scale linearly with increasing luminosity, with only partial compensation from increasing computing resources. The extension of fast simulation approaches to cover more use cases representing a larger fraction of the simulation budget is only part of the solution, because of intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is not achievable just by applying simple optimizations to the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport code to benefit from fine-grained parallelism, including vectorization and increased locality of both instruction and data. This paper provides an extensive presentation of the results and achievements of this R&D project, as well as the conclusions and lessons learned from the beta version prototype.