The INFN scientific computing infrastructure is composed of more than 30 sites, ranging from CNAF (the Tier-1 for LHC and the main data center for nearly 30 other experiments) and nine LHC Tier-2s, to ∼ 20 smaller sites, including LHC Tier-3s and farms of non-LHC experiments. A comprehensive review of the installed resources, together with plans for the near future, was collected during the second half of 2017; it provides a general view of the infrastructure, its costs and its potential for expansion, and it also shows the general trends in the software and hardware solutions adopted in a complex reality such as INFN. As of the end of 2017, the total installed CPU power exceeded 800 kHS06 (∼ 80,000 cores), while the total net storage capacity was over 57 PB on disk and 97 PB on tape; the vast majority of resources (95% of cores and 95% of storage) is concentrated in the 16 largest centers. The foreseen evolution is towards consolidation into large centers; this has required a rethinking of access policies and protocols in order to enable diverse scientific communities, beyond LHC, to fruitfully exploit the INFN resources. Moreover, such an infrastructure will be used beyond INFN experiments and will be part of the Italian national infrastructure, which comprises other research institutes, universities and HPC centers.
In this work we present the architectural and performance studies concerning a prototype of a distributed Tier-2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by modern distributed file systems, a shared storage area with synchronous replication has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed at both sites. The main parameter to be taken into account when managing two remote sites within a single framework is latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, covering data I/O throughput, metadata access performance and network occupancy during the life cycle of a virtual machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are sufficient to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes to the network topology, thus creating a national network of cloud-based distributed services in high availability (HA) over the WAN.
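As an illustration of the kind of metadata-access stress test mentioned above, the following minimal sketch times repeated create/stat/unlink cycles on a path where the shared storage area is assumed to be mounted. The mount point, the number of cycles and the script itself are hypothetical: they only indicate the flavour of the measurement, not the actual toolset used in this work.

#!/usr/bin/env python3
# Minimal sketch of a metadata-latency stress test on a WAN-replicated volume.
# The mount point below is a hypothetical placeholder, not the real deployment path.

import os
import statistics
import tempfile
import time

MOUNT_POINT = "/mnt/shared-storage"   # hypothetical mount point of the replicated volume
N_OPS = 1000                          # number of create/stat/unlink cycles to time

def time_metadata_ops(base_dir: str, n_ops: int) -> list:
    """Time n_ops create+stat+unlink cycles and return per-cycle latencies in seconds."""
    latencies = []
    with tempfile.TemporaryDirectory(dir=base_dir) as work_dir:
        for i in range(n_ops):
            path = os.path.join(work_dir, f"probe_{i}")
            t0 = time.perf_counter()
            with open(path, "w") as f:   # create a small file (metadata + one write)
                f.write("x")
            os.stat(path)                # metadata lookup
            os.unlink(path)              # remove the file
            latencies.append(time.perf_counter() - t0)
    return latencies

if __name__ == "__main__":
    lat = time_metadata_ops(MOUNT_POINT, N_OPS)
    print(f"median  : {statistics.median(lat) * 1e3:.2f} ms")
    print(f"95th pct: {statistics.quantiles(lat, n=20)[18] * 1e3:.2f} ms")

On a synchronously replicated volume each such cycle typically requires at least one acknowledgement from the remote site, so the reported percentiles directly expose the latency contribution of the geographical link.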
This article documents the muon reconstruction and identification efficiency obtained by the ATLAS experiment for 139 fb−1 of pp collision data at √s = 13 TeV collected between 2015 and 2018 during Run 2 of the LHC. The increased instantaneous luminosity delivered by the LHC over this period required a reoptimisation of the criteria for the identification of prompt muons. Improved and newly developed algorithms were deployed to preserve high muon identification efficiency with a low misidentification rate and good momentum resolution. The availability of large samples of Z→μμ and J/ψ→μμ decays, and the minimisation of systematic uncertainties, allows the efficiencies of criteria for muon identification, primary vertex association, and isolation to be measured with an accuracy at the per-mille level in the bulk of the phase space, and up to the percent level in complex kinematic configurations. Excellent performance is achieved over a range of transverse momenta from 3 GeV to several hundred GeV, and across the full muon detector acceptance of |η| < 2.7.
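As a back-of-the-envelope check of the quoted per-mille accuracy (our own illustration, not taken from the article), the statistical uncertainty of an efficiency measured from N probe muons follows the binomial expectation:

\begin{equation}
  \varepsilon = \frac{N_{\mathrm{pass}}}{N}, \qquad
  \delta\varepsilon \approx \sqrt{\frac{\varepsilon\,(1-\varepsilon)}{N}} ,
\end{equation}

so for \varepsilon \simeq 0.99 a per-mille uncertainty (\delta\varepsilon \sim 10^{-3}) is already reached with N \sim 10^{4} probes per bin, which the large Z→μμ samples provide over most of the phase space; hence the emphasis on minimising the systematic uncertainties.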
A search for the dimuon decay of the Standard Model (SM) Higgs boson is performed using data corresponding to an integrated luminosity of 139 fb−1 collected with the ATLAS detector in Run 2 pp collisions at √s = 13 TeV at the Large Hadron Collider. The observed (expected) significance over the background-only hypothesis for a Higgs boson with a mass of 125.09 GeV is 2.0σ (1.7σ). The observed upper limit on the cross section times branching ratio for pp→H→μμ is 2.2 times the SM prediction at 95% confidence level, while the expected limit on a H→μμ signal assuming the absence (presence) of a SM signal is 1.1 (2.0) times the SM prediction. The best-fit value of the signal strength parameter, defined as the ratio of the observed signal yield to the one expected in the SM, is μ = 1.2 ± 0.6.
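Written out explicitly (our notation, consistent with the verbal definition in the abstract), the signal strength is the ratio of the observed to the SM-expected signal yield, which for a fixed acceptance reduces to a ratio of cross section times branching ratio:

\begin{equation}
  \mu = \frac{(\sigma \cdot \mathcal{B})_{\mathrm{obs}}(pp \to H \to \mu\mu)}
             {(\sigma \cdot \mathcal{B})_{\mathrm{SM}}(pp \to H \to \mu\mu)} = 1.2 \pm 0.6 .
\end{equation}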
The observation of Higgs boson production in association with a top quark pair (tt¯H), based on the analysis of proton–proton collision data at a centre-of-mass energy of 13 TeV recorded with the ATLAS detector at the Large Hadron Collider, is presented. Using data corresponding to integrated luminosities of up to 79.8 fb−1, and considering Higgs boson decays into bb¯, WW*, τ+τ−, γγ, and ZZ*, the observed significance is 5.8 standard deviations, compared to an expectation of 4.9 standard deviations. Combined with the tt¯H searches using a dataset corresponding to integrated luminosities of 4.5 fb−1 at 7 TeV and 20.3 fb−1 at 8 TeV, the observed (expected) significance is 6.3 (5.1) standard deviations. Assuming Standard Model branching fractions, the total tt¯H production cross section at 13 TeV is measured to be 670 ± 90 (stat.) +110 −100 (syst.) fb, in agreement with the Standard Model prediction.
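For orientation only (this combination is not quoted in the article), adding the statistical and systematic components of the cross-section uncertainty in quadrature gives roughly:

\begin{equation}
  \Delta\sigma^{+}_{t\bar{t}H} \approx \sqrt{90^{2} + 110^{2}}~\mathrm{fb} \approx 142~\mathrm{fb}, \qquad
  \Delta\sigma^{-}_{t\bar{t}H} \approx \sqrt{90^{2} + 100^{2}}~\mathrm{fb} \approx 135~\mathrm{fb},
\end{equation}

i.e. a total relative uncertainty of about 20% on the measured 670 fb.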