Abstract
Particle accelerators are an important tool for studying the fundamental properties of elementary particles. Currently the highest-energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can differ widely in their setup. To integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. Processing requires access to primary data and metadata, as well as access to the software. At Grid sites all of this is achieved via a number of services provided by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often place restrictions on network access to remote services, which is a further severe limitation. This paper discusses a number of solutions and recent experiences of the CMS experiment in including HPC resources in processing campaigns.
Purpose
To evaluate the prevalence, risk factors, and evolution of diabetes mellitus (DM) after targeted treatment in patients with primary aldosteronism (PA).
Methods
A retrospective multicenter study of PA patients in follow-up at 27 Spanish tertiary hospitals (SPAIN-ALDO Register).
Results
Overall, 646 patients with PA were included. At diagnosis, 21.2% (n = 137) had DM, and 67% of them had HbA1c levels < 7%. In multivariate analysis, a family history of DM (OR 4.00 [1.68–9.53]), coexisting dyslipidemia (OR 3.57 [1.51–8.43]), and advanced age (OR 1.04 per year of increase [1.00–1.09]) were identified as independent predictive factors of DM. Diabetic patients were on beta blockers (46.7% [n = 64] vs. 27.5% [n = 140], p < 0.001) and diuretics (51.1% [n = 70] vs. 33.2% [n = 169], p < 0.001) more frequently than non-diabetics. After a median follow-up of 22 months (IQR 7.5–63.0), 6.9% of patients developed DM, with no difference between those undergoing adrenalectomy and those treated medically (HR 1.07 [0.49–2.36], p = 0.866). There was also no significant difference in the evolution of glycemic control between DM patients who underwent surgery and those treated medically (p > 0.05).
Conclusion
DM affects about one quarter of patients with PA, and the risk factors for its development are shared with those of the general population. Medical and surgical treatment provide similar benefit in glycemic control in patients with PA and DM.
The CMS experiment is working to integrate an increasing number of High Performance Computing (HPC) resources into its distributed computing infrastructure. The case of the Barcelona Supercomputing Center (BSC) is particularly challenging as severe network restrictions prevent the use of CMS standard computing solutions. The CIEMAT CMS group has performed significant work in order to overcome these constraints and make BSC resources available to CMS. The developments include adapting the workload management tools, replicating the CMS software repository to BSC storage, providing an alternative access to detector conditions data, and setting up a service to transfer produced output data to a nearby storage facility. In this work, we discuss the current status of this integration activity and present recent developments, such as a front-end service to improve slot usage efficiency and an enhanced transfer service that supports the staging of input data for workflows at BSC. Moreover, significant efforts have been devoted to improving the scalability of the deployed solution, automating its operation, and simplifying the matchmaking of CMS workflows that are suitable for execution at BSC.
CMS is tackling the exploitation of CPU resources at HPC centers where compute nodes do not have network connectivity to the Internet. Pilot agents and payload jobs need to interact with external services from the compute nodes: access to the application software (CernVM-FS) and conditions data (Frontier), management of input and output data files (data management services), and job management (HTCondor). Finding an alternative route to these services is challenging. Seamless integration into the CMS production system without causing any operational overhead is a key goal. The case of the Barcelona Supercomputing Center (BSC), in Spain, is particularly challenging, due to its especially restrictive network setup. We describe in this paper the solutions developed within CMS to overcome these restrictions, and integrate this resource in production. Singularity containers with application software releases are built and pre-placed in the HPC facility shared file system, together with conditions data files. HTCondor has been extended to relay communications between running pilot jobs and HTCondor daemons through the HPC shared file system. This operation mode also allows piping input and output data files through the HPC file system. Results, issues encountered during the integration process, and remaining concerns are discussed.
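The core idea of relaying messages through a shared file system, as described above, can be illustrated with a minimal sketch. This is not the actual HTCondor extension; it only shows the underlying pattern of exchanging atomically written message files through a spool directory (all file names and the JSON payload format are hypothetical):

```python
import json
import os
import tempfile

def post_message(spool_dir, job_id, payload):
    """Drop a message file into the shared-file-system spool.

    The file is written under a temporary name first and then renamed,
    so a reader never observes a half-written message (os.replace is
    atomic on POSIX file systems)."""
    fd, tmp = tempfile.mkstemp(dir=spool_dir)
    with os.fdopen(fd, "w") as f:
        json.dump(payload, f)
    os.replace(tmp, os.path.join(spool_dir, f"{job_id}.msg"))

def poll_messages(spool_dir):
    """Relay side: pick up and remove all pending message files."""
    messages = {}
    for name in sorted(os.listdir(spool_dir)):
        if name.endswith(".msg"):
            path = os.path.join(spool_dir, name)
            with open(path) as f:
                messages[name[:-4]] = json.load(f)
            os.remove(path)
    return messages
```

A pilot job on a network-isolated compute node would call `post_message`, while a relay process on a login node with outbound connectivity would call `poll_messages` and forward the content to the real HTCondor daemons.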
The Spanish CMS Analysis Facility at CIEMAT
Cárdenas-Montes, M.; Delgado Peris, A.; Flix, J., et al.
EPJ Web of Conferences, 2024, Volume: 295
Journal Article, Conference Proceeding
Peer-reviewed
Open access
The increasingly large data volumes that the LHC experiments will accumulate in the coming years, especially in the High-Luminosity LHC era, call for a paradigm shift in the way experimental datasets are accessed and analyzed. The current model, based on data reduction on the Grid infrastructure, followed by interactive data analysis of manageable-size samples on the physicists’ individual computers, will be superseded by the adoption of Analysis Facilities. This rapidly evolving concept is converging to include dedicated hardware infrastructures and computing services optimized for the effective analysis of large HEP data samples. This paper describes the implementation of this new analysis facility model at the CIEMAT institute, in Spain, to support the local CMS experiment community. Our work details the deployment of dedicated high-performance hardware, the operation of data staging and caching services ensuring prompt and efficient access to CMS physics analysis datasets, and the integration and optimization of a custom analysis framework based on ROOT’s RDataFrame and the CMS NanoAOD format. Finally, performance results obtained by benchmarking the deployed infrastructure and software against a CMS analysis workflow are summarized.
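The columnar, cut-based style of a NanoAOD analysis mentioned above can be sketched with a toy stand-in using plain Python lists in place of ROOT's RDataFrame. The branch names mirror NanoAOD conventions (`Muon_pt`, `Muon_eta`), but the data and cut values are invented for illustration:

```python
# Toy columnar event selection in the spirit of a NanoAOD analysis:
# each "branch" holds one list per event; a selection keeps events
# whose objects pass per-object kinematic cuts.

events = {
    "Muon_pt":  [[31.0, 12.5], [8.0], [45.2, 27.1, 5.0]],   # GeV, per event
    "Muon_eta": [[0.4, -1.2], [2.1], [-0.3, 1.8, 0.9]],
}

def select_dimuon(events, pt_min=10.0, eta_max=2.4):
    """Return indices of events with at least two muons passing cuts."""
    selected = []
    for i in range(len(events["Muon_pt"])):
        good = [
            j for j, pt in enumerate(events["Muon_pt"][i])
            if pt > pt_min and abs(events["Muon_eta"][i][j]) < eta_max
        ]
        if len(good) >= 2:
            selected.append(i)
    return selected

print(select_dimuon(events))  # -> [0, 2]
```

In an actual RDataFrame-based framework the same logic would be expressed as a declarative `Filter`/`Define` chain evaluated lazily over the NanoAOD file, which is what makes the columnar format efficient for facility-scale analysis.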
In 2029 the LHC will start the high-luminosity LHC program, with a boost in the integrated luminosity resulting in an unprecedented amount of ex- perimental and simulated data samples to be ...transferred, processed and stored in disk and tape systems across the worldwide LHC computing Grid. Content de- livery network solutions are being explored with the purposes of improving the performance of the compute tasks reading input data via the wide area network, and also to provide a mechanism for cost-effective deployment of lightweight storage systems supporting traditional or opportunistic compute resources. In this contribution we study the benefits of applying cache solutions for the CMS experiment, in particular the configuration and deployment of XCache serving data to two Spanish WLCG sites supporting CMS: the Tier-1 site at PIC and the Tier-2 site at CIEMAT. The deployment and configuration of the system and the developed monitoring tools will be shown, as well as data popularity studies in relation to the optimization of the cache configuration, the effects on CPU efficiency improvements for analysis tasks, and the cost benefits and impact of including this solution in the region.
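The data-popularity studies used to size such a cache can be approximated by replaying an access log through a small LRU cache simulator. This is a generic sketch, not the XCache eviction policy itself; capacities, paths, and sizes are hypothetical:

```python
from collections import OrderedDict

class LRUCacheSim:
    """Simulate an LRU file cache to estimate hit rate for a given size."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.files = OrderedDict()  # path -> size_gb, oldest entry first
        self.hits = 0
        self.misses = 0

    def access(self, path, size_gb):
        if path in self.files:
            self.hits += 1
            self.files.move_to_end(path)  # mark as most recently used
            return
        self.misses += 1
        # Evict least-recently-used files until the new one fits.
        while self.used + size_gb > self.capacity and self.files:
            _, evicted_size = self.files.popitem(last=False)
            self.used -= evicted_size
        self.files[path] = size_gb
        self.used += size_gb

    def hit_rate(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Running the same access trace through simulators of several capacities yields a hit-rate-versus-size curve, which is the kind of input a cost/benefit study for a regional cache deployment relies on.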
Lightweight site federation for CMS support
Acosta-Silva, C.; Delgado Peris, A.; Flix, J., et al.
EPJ Web of Conferences, 01/2020, Volume: 245
Journal Article, Conference Proceeding
Peer-reviewed
Open access
There is a general trend in WLCG towards the federation of resources, aiming for increased simplicity, efficiency, flexibility, and availability. Although general VO-agnostic federation of resources between two independent and autonomous resource centres may prove arduous, a considerable amount of flexibility in resource sharing can be achieved in the context of a single WLCG VO, with a relatively simple approach. We have demonstrated this for PIC and CIEMAT, the Spanish Tier-1 and Tier-2 sites for CMS, by making use of the existing CMS xrootd federation infrastructure and profiting from the common CE/batch technology used by the two centres. This work describes how compute slots are shared between the two sites, so that the capacity of one site can be dynamically increased with idle execution slots from the remote site, and how data can be efficiently accessed irrespective of its location. Our contribution includes measurements for diverse CMS workflows comparing performances between local and remote execution, and can also be regarded as a benchmark to explore future potential scenarios, where storage resources would be concentrated in a reduced number of sites.
New sets of CMS underlying-event parameters (“tunes”) are presented for the pythia 8 event generator. These tunes use the NNPDF3.1 parton distribution functions (PDFs) at leading (LO), next-to-leading (NLO), or next-to-next-to-leading (NNLO) orders in perturbative quantum chromodynamics, and the strong coupling evolution at LO or NLO. Measurements of charged-particle multiplicity and transverse momentum densities at various hadron collision energies are fit simultaneously to determine the parameters of the tunes. Comparisons of the predictions of the new tunes are provided for observables sensitive to the event shapes at LEP, global underlying event, soft multiparton interactions, and double-parton scattering contributions. In addition, comparisons are made for observables measured in various specific processes, such as multijet, Drell–Yan, and top quark-antiquark pair production, including jet substructure observables. The simulation of the underlying event provided by the new tunes is interfaced to a higher-order matrix-element calculation. For the first time, predictions from pythia 8 obtained with tunes based on NLO or NNLO PDFs are shown to reliably describe minimum-bias and underlying-event data with a similar level of agreement to predictions from tunes using LO PDF sets.
The measurement of the luminosity recorded by the CMS detector installed at LHC interaction point 5, using proton-proton collisions at √s = 13 TeV in 2015 and 2016, is reported. The absolute luminosity scale is measured for individual bunch crossings using beam-separation scans (the van der Meer method), with a relative precision of 1.3 and 1.0% in 2015 and 2016, respectively. The dominant sources of uncertainty are related to residual differences between the measured beam positions and the ones provided by the operational settings of the LHC magnets, the factorizability of the proton bunch spatial density functions in the coordinates transverse to the beam direction, and the modeling of the effect of electromagnetic interactions among protons in the colliding bunches. When applying the van der Meer calibration to the entire run periods, the integrated luminosities when CMS was fully operational are 2.27 and 36.3 fb⁻¹ in 2015 and 2016, with a relative precision of 1.6 and 1.2%, respectively. These are among the most precise luminosity measurements at bunched-beam hadron colliders.
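The van der Meer calibration mentioned above rests on the standard relation between the per-bunch luminosity and the effective beam overlap widths extracted from the separation scans. In its simplified head-on form (symbols as conventionally defined in the literature, not taken from this abstract):

```latex
\mathcal{L}_b = \frac{f_{\mathrm{rev}}\, N_1 N_2}{2\pi\, \Sigma_x \Sigma_y}
```

Here f_rev is the LHC revolution frequency, N1 and N2 are the populations of the two colliding bunches, and Σx, Σy are the effective overlap widths obtained from the horizontal and vertical scan curves. The factorizability uncertainty discussed in the abstract arises because this formula assumes the transverse bunch density factorizes into independent x and y components.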
Abstract
A search for the standard model Higgs boson produced in association with a top quark-antiquark pair (tt̄H) is presented, using data samples corresponding to integrated luminosities of up to 5.1 fb⁻¹ and 19.7 fb⁻¹ collected in pp collisions at center-of-mass energies of 7 TeV and 8 TeV, respectively. The search is based on the following signatures of the Higgs boson decay: H → hadrons, H → photons, and H → leptons. The results are characterized by an observed tt̄H signal strength relative to the standard model cross section, μ = σ/σ_SM, under the assumption that the Higgs boson decays as expected in the standard model. The best fit value is μ = 2.8 ± 1.0 for a Higgs boson mass of 125.6 GeV.