A prototype for a sampling calorimeter made of cerium fluoride crystals interleaved with tungsten plates, and read out by wavelength-shifting fibres, has been exposed to beams of electrons with energies between 20 and 150 GeV, produced by the CERN Super Proton Synchrotron accelerator complex. The performance of the prototype is presented and compared to that of a Geant4 simulation of the apparatus. Particular emphasis is given to the response uniformity across the channel front face, and to the prototype's energy resolution.
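For context, the energy resolution of a sampling calorimeter such as this one is conventionally parametrized as the quadrature sum of a stochastic, a noise, and a constant term; this is the standard form against which test-beam resolutions are usually fitted (the symbols a, b, c below are generic placeholders, not values measured for this prototype):

```latex
\frac{\sigma_E}{E} \;=\; \frac{a}{\sqrt{E}} \;\oplus\; \frac{b}{E} \;\oplus\; c,
\qquad x \oplus y \equiv \sqrt{x^{2} + y^{2}},
```

where a is the stochastic (sampling) term, b the noise term, c the constant term, and E is expressed in GeV.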
In the last two years the CMS experiment has commissioned a full end-to-end data quality monitoring system in tandem with progress in detector commissioning. We present the data quality monitoring and certification systems in place, from online data taking to the delivery of certified data sets for physics analyses, release validation and offline re-reconstruction activities at Tier-1s. We discuss the main results and lessons learnt so far in the commissioning and early detector operation, and outline our practical operations arrangements and the key technical implementation aspects.
A consortium of four LHC computing centres (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The consortium aims to realize an ad-hoc infrastructure to ease analysis activities on the huge data set collected at the LHC. While “Tier2” computing centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well-established concept with years of running experience, sites specialized towards chaotic end-user analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments to allow exhaustive monitoring of user processes at the site, and an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the consortium.
Since 2003 the computing farm hosted by the INFN Tier3 facility in Trieste has supported the activities of many scientific communities. Hundreds of jobs from 45 different VOs, including those of the LHC experiments, are processed simultaneously. Given that the requirements of the different computational communities are normally not synchronized, the probability that at any given time the resources owned by one of the participants are not fully utilized is quite high. A balanced compensation should in principle allocate the free resources to other users, but there are limits to this mechanism: the Trieste site may not hold the amount of data needed to attract enough analysis jobs, and even in that case there could be a lack of bandwidth for their access. The Trieste ALICE and CMS computing groups, in collaboration with other Italian groups, aim to overcome the limitations of existing solutions with two approaches: sharing the data among all the participants by taking full advantage of the GARR-X wide area network (10 Gbit/s), and integrating the resources dedicated to batch analysis with those reserved for dynamic interactive analysis, through modern solutions such as cloud computing.
An Xrootd Italian Federation. Boccali, T; Donvito, G; Diacono, D; ...
Journal of Physics: Conference Series, 01/2014, Volume 513, Issue 4. Journal article, peer-reviewed, open access.
The Italian community in CMS has built a geographically distributed network in which all the data stored in the Italian region are available to all users for their everyday work. This activity involves, at different levels, all the CMS centres: the Tier1 at CNAF, all four Tier2s (Bari, Rome, Legnaro and Pisa), and a few Tier3s (Trieste, Perugia, Torino, Catania, Napoli, ...). The federation uses the new network connections provided by GARR, our NREN (National Research and Education Network), which delivers a minimum of 10 Gbit/s to all the sites via the GARR-X project. The federation is currently based on Xrootd technology and on a redirector aimed to seamlessly connect all the sites, giving the logical view of a single entity. A special configuration has been put in place for the Tier1, CNAF, where ad-hoc Xrootd changes have been implemented in order to protect the tape system from excessive stress, by not allowing WAN connections to access tape-only files, on a file-by-file basis. In order to improve the overall performance while reading files, both in terms of bandwidth and latency, a hierarchy of Xrootd redirectors has been implemented: a dedicated redirector where all the INFN sites are registered, regardless of their status (T1, T2 or T3 sites). An interesting use case we were able to cover via the federation is disk-less Tier3s. The caching solution makes it possible to operate a local storage with minimal human intervention: transfers are done automatically on a single-file basis, and the cache is kept operational by automatic removal of old files.
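As an illustration of the federation mechanics, a data-serving site joins such a federation by pointing its XRootD/cmsd pair at the regional redirector. The fragment below is a minimal sketch: the hostnames and export path are placeholders, not the actual INFN configuration.

```
# xrootd-clustered.cfg -- sketch for a federated data server
# (hostnames and paths are illustrative placeholders)

all.export /store                            # namespace exported to the federation
all.role server                              # this node serves files
all.manager it-redirector.example.org:1213   # subscribe to the (hypothetical) regional redirector
xrd.port 1094                                # standard xrootd data port

# On the regional redirector itself one would instead have:
#   all.role manager
#   all.manager meta global-redirector.example.org:1213   # chain up to a higher-level redirector
```

The `all.manager meta` directive on the manager node is what builds the redirector hierarchy described above: clients ask the top redirector, which forwards the lookup down until a site holding the file answers.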
In 2012, 14 Italian institutions participating in LHC experiments (10 in CMS) won a grant from the Italian Ministry of Research (MIUR) to optimize analysis activities and, in general, the Tier2/Tier3 infrastructure. A large range of activities is actively carried out: they cover data distribution over the WAN, dynamic provisioning for both scheduled and interactive processing, design and development of tools for distributed data analysis, and tests on porting the CMS software stack to new high-performance, low-power architectures.
In 2012, 14 Italian institutions participating in LHC experiments won a grant from the Italian Ministry of Research (MIUR), with the aim of optimising analysis activities and, in general, the Tier2/Tier3 infrastructure. We report on the activities being investigated and on the considerable improvement in the ease of access to resources by physicists, including those with no specific computing interests. We focused on items like distributed storage federations, access to batch-like facilities, and on-demand provisioning of user interfaces and cloud systems. R&D on next-generation databases, distributed analysis interfaces, and new computing architectures was also carried out. The project, ending in the first months of 2016, will produce a white paper with recommendations on best practices for data-analysis support by computing centers.
The High-Luminosity phase of the Large Hadron Collider at CERN (HL-LHC) poses stringent requirements on calorimeter performance in terms of resolution, pileup resilience and radiation hardness. A tungsten-CeF3 sampling calorimeter is a possible option for the upgrade of current detectors. A prototype, read out with different types of wavelength-shifting fibers, has been built and exposed to high-energy electrons, representative of the particle energy spectrum at HL-LHC, at the CERN SPS H4 beam line. This paper presents the performance of the prototype, mainly focussing on energy resolution and uniformity. A detailed simulation has also been developed in order to compare with data and to extrapolate to different configurations to be tested in future beam tests. Additional studies on the calorimeter and the ongoing R&D projects on the various components of the experimental setup are also discussed.
Evidence for the light-by-light scattering process, γγ→γγ, in ultraperipheral PbPb collisions at a centre-of-mass energy per nucleon pair of 5.02 TeV is reported. The analysis is conducted using a data sample corresponding to an integrated luminosity of 390 μb−1 recorded by the CMS experiment at the LHC. Light-by-light scattering processes are selected in events with two photons exclusively produced, each with transverse energy ETγ > 2 GeV, pseudorapidity |ηγ| < 2.4, diphoton invariant mass mγγ > 5 GeV, diphoton transverse momentum pTγγ < 1 GeV, and diphoton acoplanarity below 0.01. After all selection criteria are applied, 14 events are observed, compared to expectations of 9.0 ± 0.9 (theo) events for the signal and 4.0 ± 1.2 (stat) for the background processes. The excess observed in data relative to the background-only expectation corresponds to a significance of 3.7 standard deviations, and has properties consistent with those expected for the light-by-light scattering signal. The measured fiducial light-by-light scattering cross section, σfid(γγ→γγ) = 120 ± 46 (stat) ± 28 (syst) ± 12 (theo) nb, is consistent with the standard model prediction. The mγγ distribution is used to set new exclusion limits on the production of pseudoscalar axion-like particles, via the ▪ process, in the mass range ▪.
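As a rough illustration of the counting-experiment significance quoted above, the asymptotic Poisson formula Z = √(2(n·ln(n/b) − (n − b))) can be evaluated for 14 observed events over an expected background of 4.0. This sketch deliberately neglects the background uncertainty and all systematic effects, so it only approximates the 3.7σ obtained with the full statistical treatment in the paper:

```python
import math

def poisson_significance(n_obs: float, b_exp: float) -> float:
    """Asymptotic significance of observing n_obs events over an expected
    background b_exp, ignoring the uncertainty on the background
    (profile-likelihood formula for a simple counting experiment)."""
    return math.sqrt(2.0 * (n_obs * math.log(n_obs / b_exp) - (n_obs - b_exp)))

# Numbers from the abstract: 14 observed, 4.0 expected background events.
z = poisson_significance(14, 4.0)
print(f"naive significance: {z:.2f} sigma")
# Slightly above the quoted 3.7 sigma, as expected when the
# background uncertainty of +-1.2 events is neglected.
```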