In the fall of 2016, GeantV went through a thorough community evaluation of the project status and of its strategy for sharing the R&D results with the LHC experiments and with the HEP simulation community in general. Following this discussion, GeantV embarked on an ambitious two-year roadmap aiming to deliver a beta version with most of the final design and several performance features of the final product, partially integrated with some of the experiments' frameworks. The initial GeantV prototype has been updated to a vector-aware concurrent framework, able to deliver high-density floating-point computation for most of the performance-critical components, such as propagation in field and physics models. Electromagnetic physics models were adapted to the specific GeantV requirements, aiming for a full demonstration of shower physics performance in the alpha release at the end of 2017. We have revisited and formalized the GeantV user interfaces and helper protocols, allowing users to connect their code, providing recipes for accessing MC truth efficiently, and supporting the generation of user data in a concurrent environment.
The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models that take advantage of both SIMD and SIMT architectures. GeantV, a next-generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and the multi-threading capabilities of coprocessors, including NVidia GPUs and the Intel Xeon Phi. These architectures differ considerably in the vectorization depth and the type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of a preliminary performance evaluation and physics validation are presented as well.
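The SIMD/SIMT design mentioned above hinges on data layout: particles must be grouped so that each hot loop touches one contiguous array per field. A minimal plain-Python sketch of this array-of-structs to struct-of-arrays transform (the layout behind GeantV-style particle "baskets"); the field names and step formula here are illustrative, not GeantV's actual API:

```python
# AoS: natural but scattered layout, one dict per particle
particles_aos = [{"x": float(i), "p": 2.0 * i, "q": -1.0} for i in range(8)]

def to_basket(particles):
    """Transpose an array-of-structs into a struct-of-arrays,
    so each field becomes one contiguous list."""
    return {key: [p[key] for p in particles] for key in particles[0]}

basket = to_basket(particles_aos)

# the per-field loop now streams over contiguous values, which is the
# access pattern a vectorizing compiler (or SIMD library) can exploit
stepped_x = [x + 0.1 * p for x, p in zip(basket["x"], basket["p"])]
print(stepped_x[:3])
```

In a compiled setting the same transform is what turns a per-particle virtual call into a tight loop over flat arrays; Python only illustrates the layout, not the speedup.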
An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of existing HEP detector simulation software and the achievable ideal, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results for the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. To exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests to verify the vectorized models, checking their consistency with the corresponding Geant4 models and validating them against experimental data.
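The automated consistency checks described above typically compare binned samples from the vectorized model against a scalar reference with a two-sample chi-square test. A self-contained sketch of that kind of test, assuming Gaussian toy samples stand in for the two model outputs (the abstract does not specify the exact statistic used):

```python
import math
import random

def chi2_two_samples(h1, h2):
    """Two-sample chi-square statistic for binned data (homogeneity
    test), the standard way to check that two implementations sample
    the same distribution. Handles unequal sample sizes."""
    n1, n2 = sum(h1), sum(h2)
    chi2, ndf = 0.0, 0
    for a, b in zip(h1, h2):
        if a + b == 0:
            continue  # empty bins carry no information
        chi2 += (a * math.sqrt(n2 / n1) - b * math.sqrt(n1 / n2)) ** 2 / (a + b)
        ndf += 1
    return chi2, ndf - 1

def histogram(sample, lo=-3.0, hi=3.0, nbins=12):
    """Fixed-width binning of a sample into nbins bins on [lo, hi)."""
    h = [0] * nbins
    width = (hi - lo) / nbins
    for x in sample:
        i = int((x - lo) / width)
        if 0 <= i < nbins:
            h[i] += 1
    return h

rng = random.Random(42)
# toy stand-ins for "reference model" and "vectorized model" output
ref = [rng.gauss(0.0, 1.0) for _ in range(20000)]
vec = [rng.gauss(0.0, 1.0) for _ in range(20000)]
chi2, ndf = chi2_two_samples(histogram(ref), histogram(vec))
print(chi2 / ndf)  # O(1) when the two implementations agree
```

A reduced chi-square far above 1 would flag a vectorized model whose sampled distribution has drifted from the reference.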
GeantV
Amadio, G.; Ananya, A.; Apostolakis, J.; et al.
Computing and Software for Big Science, 12/2021, Volume 5, Issue 1
Journal Article
Open Access
Full detector simulation was among the largest CPU consumers in all CERN experiment software stacks for the first two runs of the Large Hadron Collider. In the early 2010s, it was projected that simulation demands would scale linearly with increasing luminosity, with only partial compensation from increasing computing resources. The extension of fast simulation approaches to cover more use cases that represent a larger fraction of the simulation budget is only part of the solution, because of intrinsic precision limitations. The remainder corresponds to speeding up the simulation software by several factors, which is not achievable by just applying simple optimizations to the current code base. In this context, the GeantV R&D project was launched, aiming to redesign the legacy particle transport code in order to benefit from features of fine-grained parallelism, including vectorization and increased locality of both instruction and data. This paper provides an extensive presentation of the results and achievements of this R&D project, as well as the conclusions and lessons learned from the beta version prototype.
GeantV is a complex system based on the interaction of different modules needed for detector simulation, which include transport of particles in fields, physics models simulating their interactions with matter, and a geometrical modeler library for describing the detector, locating the particles and computing the path length to the current volume boundary. The GeantV project is recasting the classical simulation approach to get maximum benefit from SIMD/MIMD computational architectures and massively parallel systems. This involves finding the appropriate balance between several aspects influencing computational performance (floating-point performance, usage of off-chip memory bandwidth, specification of the cache hierarchy, etc.) and handling a large number of program parameters that have to be optimized to achieve the best simulation throughput. This optimization task can be treated as a black-box optimization problem, which requires searching for the optimal set of parameters using only point-wise function evaluations. The goal of this study is to provide a mechanism for optimizing complex systems (high-energy physics particle transport simulations) with the help of genetic algorithms and evolution strategies as tuning procedures for massively parallel simulations. One of the described approaches is based on introducing a specific multivariate analysis operator that can be used in the case of resource-expensive or time-consuming evaluations of fitness functions, in order to speed up the convergence of the black-box optimization problem.
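The black-box tuning loop described above can be sketched with a minimal genetic algorithm that uses only point-wise fitness evaluations. This is a generic sketch, not the paper's actual operator set or multivariate analysis operator; a quadratic bowl stands in for the (expensive) simulation-throughput measurement:

```python
import random

def evolve(fitness, bounds, pop_size=20, generations=40, seed=7):
    """Minimal genetic algorithm for black-box minimization:
    truncation selection, blend crossover, Gaussian mutation,
    with elitism so the best candidates survive each generation.
    Only point-wise fitness evaluations are used."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=fitness)[: pop_size // 4]
        pop = parents[:]  # elitism: parents survive unchanged
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = []
            for (x, y), (lo, hi) in zip(zip(a, b), bounds):
                # blend crossover plus Gaussian mutation, clamped to bounds
                c = (x + y) / 2 + rng.gauss(0.0, 0.05 * (hi - lo))
                child.append(min(max(c, lo), hi))
            pop.append(child)
    return min(pop, key=fitness)

# toy stand-in for "negative simulation throughput": optimum at (3, -1);
# a real fitness would run a short instrumented simulation instead
best = evolve(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
              [(-5.0, 5.0), (-5.0, 5.0)])
print(best)  # near [3, -1]
```

The key property matching the text is that `fitness` is opaque: the optimizer never sees gradients or structure, only evaluated points, which is what makes the approach applicable to tuning a full simulation run.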
Performance of GeantV EM Physics Models
Amadio, G.; Ananya, A.; Apostolakis, J.; et al.
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 7
Journal Article
Peer Reviewed
Open Access
The recent progress in parallel hardware architectures, with deeper vector pipelines and many-core technologies, brings opportunities for HEP experiments to take advantage of SIMD and SIMT computing models. Launched in 2013, the GeantV project studies performance gains from propagating multiple particles in parallel, improving instruction throughput and data locality in HEP event simulation on modern parallel hardware architectures. Due to the complexity of the geometry description and physics algorithms of a typical HEP application, performance analysis is indispensable for identifying factors limiting parallel execution. In this report we present design considerations and the preliminary computing performance of GeantV physics models on coprocessors (Intel Xeon Phi and NVidia GPUs) as well as on mainstream CPUs.
Abstract
Background
The association of heart failure (HF) with the prognosis of atrial fibrillation (AF) remains unclear.
Objectives
To assess all-cause mortality in patients following hospitalization with comorbid AF in relation to the presence of HF.
Methods
We performed a cross-sectional analysis of data from 977 patients discharged from the cardiology ward of a single tertiary center between 2015 and 2018 and followed for a median of 2 years. The association between HF and the primary endpoint of death from any cause was assessed using multivariable Cox regression.
Results
HF was documented in 505 (51.7%) of AF cases at discharge, including HFrEF (17.9%), HFmrEF (16.5%) and HFpEF (25.2%). A primary endpoint event occurred in 212 patients (42%) in the AF-HF group and in 86 patients (18.2%) in the AF-no-HF group (adjusted hazard ratio [aHR] 2.27; 95% confidence interval [CI] 1.65 to 3.13; p<0.001). HF was associated with a higher risk of the composite secondary endpoint of death from any cause or AF- or HF-specific hospitalization (aHR 1.69; 95% CI 1.32 to 2.16; p<0.001). The associations of HF with the primary and secondary endpoints were significant and similar for AF-HFrEF, AF-HFmrEF and AF-HFpEF.
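As a quick arithmetic check, the crude (unadjusted) event rates and risk ratio can be recovered from the counts reported above. Note this is only an illustrative sanity check: the abstract's HR of 2.27 is *adjusted*, coming from multivariable Cox regression, not from this crude calculation:

```python
# counts reported in the abstract: 505 AF-HF patients with 212 deaths;
# the AF-no-HF group is the remaining 977 - 505 = 472 patients, 86 deaths
events_hf, n_hf = 212, 505
events_no_hf, n_no_hf = 86, 977 - 505

risk_hf = events_hf / n_hf            # ≈ 0.420 (the reported 42%)
risk_no_hf = events_no_hf / n_no_hf   # ≈ 0.182 (the reported 18.2%)
crude_rr = risk_hf / risk_no_hf       # crude risk ratio, ≈ 2.30

print(round(risk_hf, 3), round(risk_no_hf, 3), round(crude_rr, 2))
```

The crude ratio of about 2.30 is close to the adjusted HR of 2.27, consistent with the adjustment changing the estimate only modestly.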
Conclusions
HF was present in half of the patients discharged from the hospital with comorbid AF. The presence of HF on top of AF was independently associated with a significantly higher risk of all-cause mortality than its absence, irrespective of HF subtype.
Funding Acknowledgement
Type of funding source: None
Abstract
Background
Oral anticoagulation (OAC) is paramount to effective thromboprophylaxis, yet adherence to OAC remains largely suboptimal in patients with atrial fibrillation (AF).
Purpose
We aimed to assess the impact of an educational, motivational intervention on adherence to OAC in patients with non-valvular AF.
Methods
Hospitalised patients with non-valvular AF who received OAC were randomly assigned to usual medical care or a proactive intervention comprising motivational interviewing and tailored counseling on medication adherence. The primary study outcome was adherence to OAC at 1 year, evaluated as the Proportion of Days Covered (PDC) by OAC regimens and assessed through nationwide prescription registers. Secondary outcomes included the rate of persistence on OAC, gaps in treatment, the proportion of VKA-takers with labile INR (defined as time in therapeutic range <70%) and clinical events.
Results
A total of 1009 patients were randomised: 500 to the intervention group and 509 to the control group. At 1-year follow-up, 77.2% (386/500) of patients in the intervention group had good adherence (PDC>80%), compared with 55% (280/509) in the control group (adjusted odds ratio 2.84, 95% confidence interval 2.14–3.75; p<0.001). Mean PDC±SD was 0.85±0.26 and 0.75±0.31, respectively (p<0.001). Patients who received the intervention were more likely to persist with their OAC therapy at 1 year, while usual medical care was associated with more major (≥3 months) treatment gaps (Figure). Among 212 VKA-takers, patients in the intervention group were less likely to have labile INR than those in the control group: 21/120 (17.1%) vs 34/92 (37.1%) (OR 0.33, 95% CI 0.15–0.72; p=0.005). Clinical events over a median follow-up period of 2 years occurred at a numerically lower, yet non-significant, rate in the intervention group (Table).
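The PDC metric used as the primary outcome above has a standard register-based computation: mark each day of the observation window covered by a dispensed supply, then divide covered days by window length. A minimal sketch under the assumption that each register record is a (fill date, days supplied) pair; the patient data are invented for illustration:

```python
from datetime import date, timedelta

def proportion_of_days_covered(fills, start, end):
    """Proportion of Days Covered: fraction of days in [start, end]
    on which the patient held a supply of the drug, computed from
    (fill_date, days_supply) records. Overlapping fills are counted
    once, as each calendar day enters the set at most once."""
    covered = set()
    for fill_date, days_supply in fills:
        for d in range(days_supply):
            day = fill_date + timedelta(days=d)
            if start <= day <= end:
                covered.add(day)
    window_days = (end - start).days + 1
    return len(covered) / window_days

# hypothetical patient: three 30-day fills over a 100-day window,
# with a 9-day gap before the third fill
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 1), 30), (date(2023, 3, 10), 30)]
pdc = proportion_of_days_covered(fills, date(2023, 1, 1), date(2023, 4, 10))
print(round(pdc, 2))  # 0.9 → "good adherence" under the PDC>80% cutoff
```

Deduplicating days (rather than summing days supplied) is what keeps early refills from inflating the score above the study's 80% threshold logic.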
Conclusions
In patients receiving OAC therapy for non-valvular AF, a motivational intervention significantly improved patterns of medication adherence, without significantly affecting clinical outcomes.
[Figure/Table: Primary and secondary outcomes]
Funding Acknowledgement
Type of funding source: None