Performance of CMS ECAL with first LHC data Franzoni, G.
Nuclear instruments & methods in physics research. Section A, Accelerators, spectrometers, detectors and associated equipment,
02/2011, Volume: 628, Issue: 1
Journal Article
Peer-reviewed
Open access
In the Compact Muon Solenoid (CMS) experiment at the CERN Large Hadron Collider (LHC), the high-resolution Electromagnetic Calorimeter (ECAL), consisting of 75 848 lead tungstate crystals and a silicon/lead preshower, will play a crucial role in the physics program. In preparation for data taking, a detailed procedure was followed to commission the ECAL readout and trigger, and to pre-calibrate each channel of the calorimeter, with test beam and cosmic ray data, to a precision of 2% or better in the central region. The first LHC collisions have been used to complete the detector commissioning and will provide the first in situ calibration. In this talk the status of the CMS ECAL and its performance with the first collisions delivered by the LHC are reviewed.
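For orientation, the energy resolution of a homogeneous crystal calorimeter such as the CMS ECAL is conventionally parametrized as the quadratic sum of a stochastic, a noise, and a constant term; the channel inter-calibration precision quoted above contributes to the constant term. This is the standard parametrization, quoted here for context rather than taken from the abstract:

\left(\frac{\sigma_E}{E}\right)^{2} = \left(\frac{S}{\sqrt{E}}\right)^{2} + \left(\frac{N}{E}\right)^{2} + C^{2}

with E in GeV, S the stochastic term, N the noise term, and C the constant term, the latter dominated by calibration and stability effects at high energy.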
The CMS detector at the LHC is ready to take physics data. The high-resolution electromagnetic calorimeter, which consists of 75 848 lead tungstate crystals, will play a crucial role in the physics searches undertaken by CMS. The design and status of the calorimeter are presented, and its performance in tests with beams, cosmic rays, and data from the first LHC beams is reviewed.
During the first three years of operation at the Large Hadron Collider, the Compact Muon Solenoid detector collected data under rapidly evolving conditions of center-of-mass energy, instantaneous luminosity, and event pile-up. The CMS collaboration followed this evolution continuously, providing the high-quality, prompt data reconstruction necessary to achieve the excellent physics performance demanded of such a high-energy physics experiment. The scientific success of CMS rested on constant attention to a few key areas: careful preparation and maintenance of the reconstruction algorithms and core software infrastructure; efficient and robust strategies and algorithms for the calibration and alignment of the diverse detector elements; and continuous, meticulous scrutiny of data quality, with validation of any software-infrastructure or detector-calibration change deemed necessary. This contribution covers the major development and operational aspects of the CMS offline workflows during the 2010-2013 data-taking period, underlining their essential role in the main physics achievements and discoveries of the CMS experiment in recent years.
Physics analysis at the Compact Muon Solenoid requires both the production of simulated events and the processing of the data collected by the experiment. Since the end of LHC Run-I in 2012, CMS has produced over 20 billion simulated events, from 75 thousand processing requests organised in one hundred different campaigns. These campaigns emulate different configurations of collision events, the detector, and LHC running conditions. In the same time span, sixteen data-processing campaigns have taken place to reconstruct different portions of the Run-I and Run-II data with ever-improving algorithms and calibrations. The scale and complexity of event simulation and processing, and the requirement that multiple campaigns proceed in parallel, demand comprehensive, frequently updated, and easily accessible monitoring. The monitoring must serve both the analysts, who want to know which datasets will become available and when, and the central production teams in charge of submitting, prioritizing, and running the requests across the distributed computing infrastructure. The Production Monitoring Platform (pMp), a web-based service, was developed in 2015 to address these needs. It aggregates information from the multiple services used to define, organise, and run the processing requests. The information is updated hourly in a dedicated Elasticsearch database, and the monitoring provides multiple configurable views to assess the status of single datasets as well as entire production campaigns. This contribution describes the development of pMp, the evolution of its functionalities, and one and a half years of operational experience.
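A minimal sketch of the kind of aggregation such a monitoring service performs: query an Elasticsearch index of processing requests and summarize a campaign's progress. The index name, field names, and server URL below are illustrative assumptions, not pMp's actual schema.

# Hypothetical sketch: summarize expected vs. produced events per campaign.
# Index and field names are assumptions for illustration only.
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])  # placeholder server URL

def campaign_progress(campaign):
    """Return (expected, produced) event counts for one campaign."""
    result = es.search(
        index="requests",  # hypothetical index of processing requests
        body={
            "size": 0,
            "query": {"term": {"member_of_campaign": campaign}},
            "aggs": {
                "expected": {"sum": {"field": "total_events"}},
                "produced": {"sum": {"field": "completed_events"}},
            },
        },
    )
    aggs = result["aggregations"]
    return aggs["expected"]["value"], aggs["produced"]["value"]

expected, produced = campaign_progress("RunIISummer16")  # hypothetical campaign
print(f"{produced:.0f} / {expected:.0f} events produced")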
A wide range of detector commissioning, calibration, and data analysis tasks is carried out by CMS using dedicated storage resources available at the CMS CERN Tier-2 centre. Building on the functionality of the EOS disk-only storage technology, the optimal exploitation of the CMS user-group resources has required the introduction of policies for data-access management, data protection, cleanup campaigns based on access patterns, and long-term tape archival. Resource management has been organised around the definition of working groups, with the composition of each group delegated to an identified responsible person. In this paper we describe the user-group storage management, and the development and operational experience at the CMS CERN Tier-2 centre in the 2012-2015 period.
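An access-pattern-based cleanup of the kind described above might work as in the sketch below: flag files untouched for a configurable grace period as candidates for deletion. The input format, file names, and the 180-day threshold are assumptions for illustration, not the actual CMS EOS policy.

# Illustrative sketch of an access-pattern-based cleanup campaign.
import time

GRACE_SECONDS = 180 * 24 * 3600  # assumed retention window (180 days)

def stale_files(listing):
    """Yield (path, last_access) pairs older than the grace period.

    `listing` is an iterable of "path<TAB>atime_epoch" lines, such as a
    namespace dump might produce (format assumed for illustration).
    """
    now = time.time()
    for line in listing:
        path, atime = line.rstrip("\n").split("\t")
        if now - float(atime) > GRACE_SECONDS:
            yield path, float(atime)

with open("eos_namespace_dump.tsv") as dump:  # hypothetical input file
    for path, atime in stale_files(dump):
        print(f"candidate for cleanup: {path}")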
Exploiting the full physics potential of the LHC experiments requires fast and efficient processing of the largest possible dataset with the most refined understanding of the detector conditions. To face this challenge, the CMS collaboration has set up an infrastructure for the continuous, unattended computation of alignment and calibration constants, providing refined knowledge of the most time-critical parameters within a few hours of the data being saved to disk. This prompt calibration framework has, since the beginning of LHC Run-I, enabled the analysis and the High Level Trigger of the experiment to consume the most up-to-date conditions, optimizing the performance of the physics objects. In Run-II this setup has been expanded further to include even more complex calibration algorithms that require higher statistics to reach the needed precision. This imposed a new paradigm for the creation of calibration datasets for unattended workflows and opened the door to a further step in performance. The paper reviews the design of these automated calibration workflows, the operational experience in Run-II, and the monitoring infrastructure developed to ensure the reliability of the service.
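The overall shape of such an unattended calibration loop can be sketched as follows: poll for newly available runs, accumulate calibration-stream data until a statistics threshold is reached, derive the constants, and publish them. Every function name and the numeric thresholds below are hypothetical placeholders, not the CMS implementation.

# Minimal sketch of an unattended prompt-calibration daemon (all names assumed).
import time

MIN_EVENTS = 100_000  # assumed statistics threshold for one calibration fit

def poll_new_runs():          # placeholder: query the run registry
    return []

def collect_events(run):      # placeholder: read the calibration data stream
    return []

def derive_constants(events): # placeholder: the actual calibration algorithm
    return {"beamspot": None}

def upload(run, constants):   # placeholder: write to the conditions database
    print(f"run {run}: uploaded {list(constants)}")

buffer = []
while True:  # runs unattended; each cycle processes any newly closed runs
    for run in poll_new_runs():
        buffer.extend(collect_events(run))
        if len(buffer) >= MIN_EVENTS:
            upload(run, derive_constants(buffer))
            buffer.clear()
    time.sleep(600)  # re-poll every ten minutes (interval assumed)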
Deep learning for inferring cause of data anomalies Azzolini, V.; Borisyak, M.; Cerminara, G. ...
Journal of physics. Conference series,
09/2018, Volume: 1085, Issue: 4
Journal Article, Conference Proceeding
Peer-reviewed
Open access
Daily operation of a large-scale experiment is a resource-consuming task, particularly from the perspective of routine data quality monitoring. Typically, data come from different sub-detectors, and the global quality of the data depends on the combined performance of each of them. In this paper, the problem of identifying the channels in which anomalies occurred is considered. We introduce a generic deep learning model and prove that, under reasonable assumptions, the model learns to identify the 'channels' affected by an anomaly. Such a model could be used to cross-check and assist the data quality manager, and to identify good channels in anomalous data samples. The main novelty of the method is that the model does not require ground-truth labels for each channel; only the global flag is used. This effectively distinguishes the model from classical classification methods. Applied to CMS data collected in 2010, this approach proves its ability to decompose anomalies into separate channels.
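A minimal sketch of the weak-supervision idea described above, in PyTorch: a shared network scores every channel, and the per-channel scores are combined into a global anomaly probability trained only against the global quality flag. The noisy-OR aggregation and the architecture are illustrative assumptions, not necessarily the construction used in the cited paper.

# Per-channel anomaly scoring trained from global labels only (sketch).
import torch
import torch.nn as nn

class ChannelAnomalyModel(nn.Module):
    def __init__(self, n_channels, n_features):
        super().__init__()
        # One shared scorer applied independently to every channel's features.
        self.scorer = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1)
        )

    def forward(self, x):
        # x: (batch, n_channels, n_features)
        p_channel = torch.sigmoid(self.scorer(x)).squeeze(-1)  # per-channel scores
        # Noisy-OR aggregation: the sample is anomalous if any channel is.
        p_global = 1.0 - torch.prod(1.0 - p_channel, dim=1)
        return p_global, p_channel

model = ChannelAnomalyModel(n_channels=16, n_features=8)
x = torch.randn(4, 16, 8)                  # toy batch of channel features
y = torch.randint(0, 2, (4,)).float()      # global quality flags only
p_global, p_channel = model(x)
loss = nn.functional.binary_cross_entropy(p_global, y)
loss.backward()  # trains on the global flag, yet p_channel localizes anomalies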
The paper discusses whether bi-oriented PVC, obtained by modifying the structure of the polymer chains to enhance the mechanical properties of unplasticized PVC, could successfully replace metallic materials in industrial applications where radioactive fluids are processed and an intense field of ionizing radiation is present. Tests have been carried out to study the behavior of a commercial bi-oriented PVC exposed to ionizing radiation. A numerical simulation allows the radiation effects expected on piping in nuclear-industry applications to be compared with those resulting from the irradiation tests. Contamination and decontamination tests of bi-oriented PVC in contact with a radioactive solution have also been performed. The results show that bi-oriented PVC can withstand high β and γ radiation doses (up to 100 kGy) without significant degradation of its mechanical properties; the bi-orientation of the polymer chains in the bulk of the material is unaffected even at much higher doses (250 kGy); and the decontamination of the material is satisfactory. The results suggest that the tested commercial bi-oriented PVC could be considered for nuclear-industry applications.
• Bi-oriented PVC specimens have been irradiated with γ rays and β particles.
• Up to 100 kGy, the mechanical properties of bi-oriented PVC are practically unchanged.
• A numerical simulation allows the minimum lifetime of PVC piping to be estimated.
• The decontamination factors achieved for PVC piping are satisfactory.
• The results suggest bi-oriented PVC piping is suitable for nuclear applications.
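A back-of-the-envelope version of the lifetime estimate mentioned in the highlights, assuming a constant dose rate \dot{D} (the dose rate below is an illustrative assumption, not a value from the paper):

t_{\min} = \frac{D_{\text{threshold}}}{\dot{D}}, \qquad \text{e.g.}\quad \frac{100\,\mathrm{kGy}}{10\,\mathrm{Gy/h}} = 10^{4}\,\mathrm{h} \approx 1.1\,\mathrm{yr},

where D_threshold = 100 kGy is the dose up to which the mechanical properties were found to be practically unchanged.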
Monte Carlo Production Management at CMS Boudoul, G; Franzoni, G; Norkus, A ...
Journal of physics. Conference series,
01/2015, Volume: 664, Issue: 7
Journal Article
Peer-reviewed
Open access
The analysis of LHC data at the Compact Muon Solenoid (CMS) experiment requires the production of a large number of simulated events. During Run-I of the LHC (2010-2012), CMS produced over 12 billion simulated events, organized in approximately sixty different campaigns, each emulating specific detector conditions and LHC running conditions (pile-up). In order to aggregate the information needed to configure and prioritize the event production, ensure the book-keeping of all processing requests placed by the physics analysis groups, and interface with the CMS production infrastructure, the web-based service Monte Carlo Management (McM) was developed and put into production in 2013. McM is based on recent server infrastructure technology (CherryPy + AngularJS) and relies on a CouchDB database back-end. This contribution covers one and a half years of operational experience managing samples of simulated events for CMS, the evolution of McM's functionalities, and the extension of its capability to monitor the status and advancement of event production.
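Since CouchDB exposes a plain HTTP/JSON API, a production request can be pictured as a JSON document stored and retrieved over HTTP, as in the sketch below. The document fields, identifiers, and database name are assumptions for illustration, not McM's actual schema.

# Illustrative sketch of storing/fetching a request document in CouchDB.
import json
import urllib.request

COUCHDB = "http://localhost:5984/requests"  # hypothetical database URL

request_doc = {
    "_id": "TOP-RunIIFall15-00042",        # hypothetical request identifier
    "member_of_campaign": "RunIIFall15",   # campaign emulating one detector setup
    "total_events": 5_000_000,
    "status": "submitted",
    "priority": 85000,
}

# Store the document via CouchDB's HTTP API (PUT /db/docid).
req = urllib.request.Request(
    f"{COUCHDB}/{request_doc['_id']}",
    data=json.dumps(request_doc).encode(),
    headers={"Content-Type": "application/json"},
    method="PUT",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 201 on creation

# Read it back (GET /db/docid).
with urllib.request.urlopen(f"{COUCHDB}/{request_doc['_id']}") as resp:
    print(json.load(resp)["status"])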