The PADME beam line Monte Carlo simulation
Bossi, F.; Branchini, P.; Buonomo, B. ...
The Journal of High Energy Physics, 09/2022, Volume 2022, Issue 9
Journal Article
Peer-reviewed
Open access
Abstract
The PADME experiment at the DAΦNE Beam-Test Facility (BTF) of the INFN Laboratory of Frascati is designed to search for invisible decays of dark sector particles produced in electron-positron annihilation events with a positron beam and a thin fixed target, by measuring the missing mass of single-photon final states. The presence of backgrounds originating from beam halo particles can significantly reduce the sensitivity of the experiment. To thoroughly understand the origin of the beam background contribution, a detailed Geant4-based Monte Carlo simulation has been developed, containing a full description of the detector together with the beam line and its optical elements. This simulation allows the full interaction history of each particle to be described, both during beam line transport and during detection, a possibility which represents an innovative way to obtain reliable background predictions.
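For reference, the missing-mass variable mentioned above can be written in a standard form (the notation here is illustrative; the paper's own conventions may differ):

```latex
% Squared missing mass of the single-photon final state,
% e+ e- -> gamma + X(invisible), with the target electron at rest:
%   P_{e+}    four-momentum of the beam positron
%   P_{e-}    four-momentum of the target electron
%   P_{gamma} four-momentum of the detected photon
M_{\mathrm{miss}}^{2} = \left( P_{e^{+}} + P_{e^{-}} - P_{\gamma} \right)^{2}
```

A dark-sector particle produced together with the photon and decaying invisibly would appear as a peak in this distribution at its squared mass, which is why beam-halo backgrounds that contaminate the single-photon sample directly limit the sensitivity.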
During the LHC Run-1, Grid resources in ATLAS were managed by the PanDA and DQ2 systems. In order to meet the needs of the LHC Run-2, ProdSys2 and Rucio are used as the new ATLAS Workload and Data Management systems. The data are stored in various formats in ROOT files, and end-user physicists have the choice of using either the ATHENA framework or ROOT directly. Within the ROOT data analysis framework it is possible to analyse huge sets of ROOT files in parallel with PROOF on clusters of computers (usually organised in analysis facilities) or on multi-core machines. In addition, PROOF-on-Demand (PoD) can be used to enable PROOF on top of an existing resource management system. In this work, we present the first performance results obtained enabling PROOF-based analysis at CERN and at some of the Italian ATLAS Tier-2 sites within the new ATLAS workload system. Benchmark tests of data access with the httpd protocol, also using the httpd redirector, will be shown. We also present results of startup latency tests using the new PROOF functionality of dynamic worker addition, which improves the performance of PoD using Grid resources. These new results will be compared with the expected improvements discussed in a previous work.
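As a concrete illustration of the workflow this abstract describes, a PROOF-based analysis in ROOT reduces to a few calls. The following is a minimal sketch assuming a PoD-managed cluster; the tree name, file paths, and selector are hypothetical placeholders, none of them taken from the paper:

```cpp
// proof_sketch.C -- minimal PROOF-on-Demand analysis skeleton (ROOT macro).
// All names below (tree, paths, selector) are illustrative placeholders.
#include "TChain.h"
#include "TProof.h"

void proof_sketch() {
   // Connect to the PROOF master exported by a running PoD server.
   // Assumes a ROOT/PoD version supporting the "pod://" scheme; otherwise
   // pass the connection string returned by `pod-info -c`.
   TProof::Open("pod://");

   // Chain the input ROOT files (hypothetical redirector URL).
   TChain chain("physics");
   chain.Add("root://redirector.example.org//atlas/mc/sample_*.root");

   // Off-load the event loop to the PROOF workers and run a TSelector.
   chain.SetProof();
   chain.Process("MySelector.C+");
}
```

With dynamic worker addition, workers provisioned by PoD on the Grid can join the session after Process() has started, which is the startup-latency improvement the benchmark tests above measure.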
In the ATLAS computing model, Grid resources are managed by PanDA, the system designed for production and distributed analysis, and data are stored in various formats in ROOT files. End-user physicists have the choice of using either the ATHENA framework or ROOT directly; the latter gives users the possibility to use PROOF to exploit the computing power of multi-core machines or to dynamically manage analysis facilities. Since analysis facilities are, in general, not dedicated to PROOF only, PROOF-on-Demand (PoD) is used to enable PROOF on top of an existing resource management system. In a previous work we investigated the use of PoD to enable PROOF-based analysis on Tier-2 facilities using the PoD/gLite plug-in interface. In this paper we present the status of our investigations using the recently developed PoD/PanDA plug-in to enable PROOF, with a real end-user ATLAS physics analysis as payload. For this work, data were accessed using two different protocols, XRootD and the file protocol: the former at the site where the SRM interface is the Disk Pool Manager (DPM), the latter where the SRM interface is StoRM with a GPFS file system. We first describe the results of benchmark tests run at the Italian ATLAS Tier-1 and Tier-2 sites and at CERN. We then compare the results of different types of analysis, relating performance to the different SRM interfaces and to XRootD data access in the LAN and in the WAN through the ATLAS XRootD storage federation infrastructure.
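The protocol comparison above comes down to the URL handed to ROOT's I/O layer. A minimal sketch, with hostnames and paths as hypothetical placeholders (XRootD for the DPM site, a plain POSIX path for StoRM/GPFS):

```cpp
// access_sketch.C -- the two data-access modes compared in the text.
// Hostnames and file paths are illustrative placeholders.
#include "TFile.h"

void access_sketch() {
   // XRootD access, e.g. through a DPM door or a federation redirector.
   TFile *remote =
      TFile::Open("root://xrootd-door.example.infn.it//atlas/data/file.root");

   // Direct file access on a locally mounted StoRM/GPFS filesystem.
   TFile *local = TFile::Open("file:///gpfs/atlas/data/file.root");

   if (remote) remote->Close();
   if (local)  local->Close();
}
```

Since both modes go through TFile::Open, the same analysis code can be benchmarked against either storage back end by changing only the URL, which is what makes the LAN/WAN and SRM-interface comparisons in the paper possible.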
In 2012, 14 Italian institutions participating in the LHC experiments won a grant from the Italian Ministry of Research (MIUR), with the aim of optimising analysis activities and, more generally, the Tier-2/Tier-3 infrastructure. We report on the activities under investigation and on the considerable improvement in the ease of access to resources by physicists, including those with no specific computing interests. We focused on items such as distributed storage federations, access to batch-like facilities, provisioning of user interfaces on demand, and cloud systems. R&D on next-generation databases, distributed analysis interfaces, and new computing architectures was also carried out. The project, ending in the first months of 2016, will produce a white paper with recommendations on best practices for data-analysis support by computing centres.
The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, involving many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the data processing, analysis, simulation and software development activities performed within the cloud, and we discuss the tests of new computing technologies contributing to the evolution of the ATLAS Computing Model.
ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Tier-3s that are grid-enabled are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, reports the operational experience after 10 months of LHC data taking, and discusses site performance.
Large-size Resistive Micromegas have been chosen for the upgrade of the forward muon spectrometer of the ATLAS experiment, the New Small Wheel project. These chambers, together with small-strip Thin Gap Chambers (sTGC), allow reconstruction of high-momentum muon tracks in a high-radiation environment and provide a robust low-threshold single-muon trigger. A collaboration of seven INFN units built 32 SM1-type chambers, corresponding to one fourth of the total number needed for this upgrade. Each SM1 chamber has a surface of approximately 2 m² and four sensitive layers. The production was shared among five INFN construction sites and was completed in fall 2020. The construction methods, as well as the results of the quality tests done on the detector components and on the assembled chambers, are reported in the present paper.