The INFN scientific computing infrastructure is composed of more than 30 sites, ranging from CNAF (Tier-1 for LHC and main data center for nearly 30 other experiments) and nine LHC Tier-2s, to ∼ 20 smaller sites, including LHC Tier-3s and non-LHC experiment farms. A comprehensive review of the installed resources, together with plans for the near future, was collected during the second half of 2017; it provides a general view of the infrastructure, its costs and its potential for expansion, and shows the general trends in the software and hardware solutions adopted in a complex organization such as INFN. As of the end of 2017, the total installed CPU power exceeded 800 kHS06 (∼ 80,000 cores), while the total net storage capacity was over 57 PB on disk and 97 PB on tape; the vast majority of resources (95% of cores and 95% of storage) are concentrated in the 16 largest centers. Future evolution points towards consolidation into large centers; this has required a rethinking of access policies and protocols in order to enable diverse scientific communities, beyond LHC, to fruitfully exploit the INFN resources. Moreover, such an infrastructure will be used beyond INFN experiments and will be part of the Italian national infrastructure, comprising other research institutes, universities and HPC centers.
This article documents the muon reconstruction and identification efficiency obtained by the ATLAS experiment for 139 fb⁻¹ of pp collision data at √s = 13 TeV collected between 2015 and 2018 during Run 2 of the LHC. The increased instantaneous luminosity delivered by the LHC over this period required a reoptimisation of the criteria for the identification of prompt muons. Improved and newly developed algorithms were deployed to preserve high muon identification efficiency with a low misidentification rate and good momentum resolution. The availability of large samples of Z → μμ and J/ψ → μμ decays, and the minimisation of systematic uncertainties, allows the efficiencies of criteria for muon identification, primary vertex association, and isolation to be measured with an accuracy at the per-mille level in the bulk of the phase space, and up to the percent level in complex kinematic configurations. Excellent performance is achieved over a range of transverse momenta from 3 GeV to several hundred GeV, and across the full muon detector acceptance of |η| < 2.7.
In this work we present the architectural and performance studies concerning a prototype of a distributed Tier-2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites within a single framework is the effect of latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance and network occupancy, during the life cycle of a virtual machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are sufficient to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes to the network topology, thus creating a national network of Cloud-based distributed services, in high availability (HA) over WAN.
Jet substructure observables have significantly extended the search program for physics beyond the standard model at the Large Hadron Collider. The state-of-the-art tools have been motivated by theoretical calculations, but there has never been a direct comparison between data and calculations of jet substructure observables that are accurate beyond the leading-logarithm approximation. Such observables are significant not only for probing the collinear regime of QCD that is largely unexplored at a hadron collider, but also for improving the understanding of jet substructure properties that are used in many studies at the Large Hadron Collider. This Letter documents a measurement of the first jet substructure quantity at a hadron collider to be calculated at next-to-next-to-leading-logarithm accuracy. The normalized, differential cross section is measured as a function of log₁₀(ρ²), where ρ is the ratio of the soft-drop mass to the ungroomed jet transverse momentum. This quantity is measured in dijet events from 32.9 fb⁻¹ of √s = 13 TeV proton-proton collisions recorded by the ATLAS detector. The data are unfolded to correct for detector effects and compared to precise QCD calculations and leading-logarithm particle-level Monte Carlo simulations.
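The measured observable is a simple ratio-based quantity: ρ is the soft-drop groomed jet mass divided by the ungroomed jet transverse momentum, and the cross section is binned in log₁₀(ρ²). A minimal sketch of that per-jet computation (the function name and the example jet values are illustrative, not taken from the ATLAS analysis code):

```python
import math

def log10_rho_squared(softdrop_mass_gev: float, ungroomed_pt_gev: float) -> float:
    """Return log10(rho^2), where rho = m_SD / pT(ungroomed).

    The observable is dimensionless, so both inputs only need to share
    the same units (GeV here).
    """
    rho = softdrop_mass_gev / ungroomed_pt_gev
    return math.log10(rho ** 2)

# Illustrative (made-up) jet: soft-drop mass 50 GeV, ungroomed pT 500 GeV,
# so rho = 0.1 and log10(rho^2) is close to -2.
print(log10_rho_squared(50.0, 500.0))
```

Working with log₁₀(ρ²) rather than ρ itself spreads the steeply falling small-mass region over a convenient range for binning and comparison with resummed calculations.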
Jet energy scale and resolution measurements with their associated uncertainties are reported for jets using 36–81 fb⁻¹ of proton–proton collision data with a centre-of-mass energy of √s = 13 TeV collected by the ATLAS detector at the LHC. Jets are reconstructed using two different input types: topo-clusters formed from energy deposits in calorimeter cells, as well as an algorithmic combination of charged-particle tracks with those topo-clusters, referred to as the ATLAS particle-flow reconstruction method. The anti-kt jet algorithm with radius parameter R = 0.4 is the primary jet definition used for both jet types. This result presents new jet energy scale and resolution measurements in the high pile-up conditions of late LHC Run 2 as well as a full calibration of particle-flow jets in ATLAS. Jets are initially calibrated using a sequence of simulation-based corrections. Next, several in situ techniques are employed to correct for differences between data and simulation and to measure the resolution of jets. The systematic uncertainties in the jet energy scale for central jets (|η| < 1.2) vary from 1% for a wide range of high-pT jets (250 < pT < 2000 GeV), to 5% at very low pT (20 GeV) and 3.5% at very high pT (> 2.5 TeV). The relative jet energy resolution is measured and ranges from (24 ± 1.5)% at 20 GeV to (6 ± 0.5)% at 300 GeV.