CMS will require access to more than 125k processor cores at the beginning of Run 2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources and the efficiency gains and simplifications that come from using a common resource provisioning layer, and discuss the solutions found. We will finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
The connection of diverse and sometimes non-Grid enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users on resources beyond the WLCG pledge at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
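A minimal sketch of how the slots of such an HTCondor/glideinWMS pool could be summarized per site, assuming the HTCondor Python bindings; the collector address is hypothetical, and the presence of the GLIDEIN_Site attribute on every slot is an assumption for this example, not a statement about the actual Global Pool configuration.

```python
# Illustrative sketch: count slots in an HTCondor/glideinWMS pool by site label.
# The collector address and the GLIDEIN_Site attribute are assumptions for this example.
import collections
import htcondor

pool = htcondor.Collector("glidein-collector.example.cern.ch")  # hypothetical address
slots = pool.query(
    htcondor.AdTypes.Startd,
    projection=["Name", "State", "Activity", "GLIDEIN_Site"],
)

per_site = collections.Counter(ad.get("GLIDEIN_Site", "unknown") for ad in slots)
for site, n_slots in per_site.most_common():
    print(f"{site:20s} {n_slots:6d} slots")
```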
CMS computing operations during run 1. Adelman, J; Alderweireldt, S; Artieda, J ...
Journal of Physics: Conference Series, 01/2014, Volume 513, Issue 3
Journal Article; Peer-reviewed; Open access
During the first run, CMS collected and processed more than 10 billion data events and simulated more than 15 billion events. Up to 100k processor cores were used simultaneously and 100 PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques followed the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.
The CMS experiment utilizes a distributed computing infrastructure, and its performance heavily depends on the fast and smooth distribution of data between different CMS sites. Data must be transferred from the Tier-0 (CERN) to the Tier-1s for processing, storing and archiving; timeliness and good transfer quality are vital to avoid overflowing the CERN storage buffers. At the same time, processed data have to be distributed from Tier-1 sites to all Tier-2 sites for physics analysis, while Monte Carlo simulations are sent back to Tier-1 sites for further archival. At the core of the transfer machinery is the PhEDEx (Physics Experiment Data Export) data transfer system. It is very important to ensure reliable operation of the system, and the operational tasks comprise monitoring and debugging all transfer issues. Based on transfer quality information, the Site Readiness tool is used to create plans for future resource utilization. We review the operational procedures created to enforce reliable data delivery to CMS distributed sites all over the world. Additionally, we need to keep data and metadata consistent at all sites, both on disk and on tape. In this presentation, we describe the principles and actions taken to keep data consistent between site storage systems and the central CMS Data Replication Database (TMDB/DBS), while ensuring fast and reliable delivery of data samples of hundreds of terabytes to the entire CMS physics community.
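A minimal sketch of the kind of storage-versus-catalogue consistency check described above, assuming simple one-file-per-line text dumps of the site namespace and of the central catalogue; the file names and format are hypothetical and do not correspond to the actual CMS tooling.

```python
# Illustrative consistency check between a site storage dump and a catalogue dump.
# The dump file names and the one-LFN-per-line format are assumptions for this sketch.
def load_lfns(path):
    """Read one logical file name per line, ignoring blank lines."""
    with open(path) as handle:
        return {line.strip() for line in handle if line.strip()}

site_files = load_lfns("site_storage_dump.txt")      # what is actually on disk/tape
catalogue_files = load_lfns("catalogue_dump.txt")    # what the central catalogue believes

orphans = site_files - catalogue_files   # on storage but unknown to the catalogue
missing = catalogue_files - site_files   # registered centrally but lost from storage

print(f"{len(orphans)} orphan files, {len(missing)} missing files")
```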
The CMS experiment has to move petabytes of data among dozens of computing centres with low latency in order to make efficient use of its resources. Transfer operations are well established to achieve the desired level of throughput, but operators lack a system to identify early on the transfers that will need manual intervention to reach completion. File transfer latencies are sensitive to underlying problems in the transfer infrastructure, and their measurement can be used as a prompt trigger for preventive actions. For this reason, PhEDEx, the CMS transfer management system, has recently implemented a monitoring system to measure the transfer latencies at the level of individual files. For the first time, the system can now predict the completion time for the transfer of a data set. The operators can detect abnormal patterns in transfer latencies early, and correct the issues while the transfer is still in progress. Statistics are aggregated for blocks of files, recording a historical log to monitor the long-term evolution of transfer latencies, which are used as cumulative metrics to evaluate the performance of the transfer infrastructure and to plan the global data placement strategy. In this contribution, we present the typical patterns of transfer latencies that may be identified with the latency monitor, and we show how we are able to detect the sources of latency arising from the underlying infrastructure (such as stuck files) which need operator intervention.
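A back-of-the-envelope sketch of how a completion-time estimate for a block of files could be extrapolated from the transfer progress observed so far; the record layout used here is an assumption for illustration, not the PhEDEx schema.

```python
# Illustrative completion-time estimate for a block of files from transfer progress.
# The (bytes, done) record layout is an assumption, not PhEDEx's actual data model.
import time

def estimate_completion(files, started_at):
    """Extrapolate the remaining time from the average rate achieved so far."""
    done_bytes = sum(f["bytes"] for f in files if f["done"])
    total_bytes = sum(f["bytes"] for f in files)
    elapsed = time.time() - started_at
    if done_bytes == 0:
        return None  # no throughput observed yet, nothing to extrapolate from
    rate = done_bytes / elapsed                      # bytes per second so far
    return (total_bytes - done_bytes) / rate         # seconds left at the current rate

files = [
    {"bytes": 4 * 2**30, "done": True},
    {"bytes": 4 * 2**30, "done": True},
    {"bytes": 4 * 2**30, "done": False},
]
eta = estimate_completion(files, started_at=time.time() - 3600)
print(f"estimated time to completion: {eta / 3600:.1f} h")
```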
In the context of the development of radiation hard silicon microstrip detectors for the CMS Tracker, we have investigated the dependence of interstrip and backplane capacitance, as well as depletion and breakdown voltage, on the design parameters and substrate characteristics of the devices. Measurements have been made for strip pitches between 60 and 240 μm and various strip implant and metal widths, using multi-geometry devices fabricated on wafers of either ⟨111⟩ or ⟨100⟩ crystal orientation, of resistivities between 1 and 6 kΩ cm and of thicknesses between 300 and 410 μm. The effect of irradiation on the properties of the devices has been studied with 24 GeV/c protons up to a fluence of 4.3×10¹⁴ cm⁻².
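For orientation, a standard relation from silicon detector physics (not a result quoted from the abstract above) links the full depletion voltage to the wafer thickness d and the effective doping concentration N_eff:

\[
V_{\mathrm{dep}} \simeq \frac{q\,\lvert N_{\mathrm{eff}}\rvert\,d^{2}}{2\,\varepsilon_{0}\varepsilon_{\mathrm{Si}}}
\]

The quadratic dependence on d and the linear dependence on N_eff (set by the substrate resistivity and changed by irradiation) are why both the 300 to 410 μm thickness range and the 1 to 6 kΩ cm resistivity range matter for the measurements described above.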
The CMS experiment will use resistive plate chambers (RPCs) as dedicated muon trigger detectors. This requires good global and local chamber performance. To verify the chamber performance, intensive tests are ongoing using a telescope installed at the Bari Physics Department. The chamber efficiency is obtained by track reconstruction, which also offers the possibility of performing local efficiency studies. A brief description of the test set-up, the reconstruction algorithm and the test results is presented in this paper.
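As a hedged note on the method, the efficiency extracted from telescope tracks for a strip or chamber region is conventionally defined as below; this is the standard binomial treatment, not a formula quoted from the paper:

\[
\varepsilon = \frac{N_{\mathrm{hit}}}{N_{\mathrm{trk}}},
\qquad
\sigma_{\varepsilon} = \sqrt{\frac{\varepsilon\,(1-\varepsilon)}{N_{\mathrm{trk}}}}
\]

where N_trk is the number of reconstructed tracks crossing the region under study and N_hit is the number of those tracks with a matching RPC hit.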
Resistive Plate Chambers have been chosen as dedicated trigger muon detectors for the Compact Muon Solenoid experiment at the Large Hadron Collider at CERN.
The barrel RPC detector consists of 480 chambers of different shapes and sizes, equipped with 75,000 strips and covering an area of about 2400 m².
About one-third of the RPC barrel chambers had been produced by the end of 2003, and these 150 chambers, produced and assembled in Italy, have been extensively tested at the two Italian test stands in Bari and Pavia by the RPC barrel collaboration. Preliminary results of the production and testing of the chambers are described here.
Radiation tests with foxfet biased microstrip detectors. Hammarstrom, R; Kellogg, R; Mannelli, M ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 11/1998, Volume 418, Issue 1
Journal Article; Peer-reviewed
The silicon detectors at the future Large Hadron Collider (LHC) at CERN have to survive large particle fluxes of up to a few 10¹⁴ particles per cm². These high fluxes cause dramatic changes in the behaviour of the silicon detectors, like inversion of n-type silicon to p-type silicon. Here, we report on the high-voltage behaviour of silicon microstrip detectors up to doses of about 10¹⁴ particles/cm², and the changes in the depletion voltage and inter-strip capacitance. The CMS baseline choice for the biasing element of the AC-coupled microstrip detectors is a polysilicon resistor. The silicon detectors tested here are Foxfet biased. We measured the changes in the Foxfet characteristics. Such detectors have been reported to show, after irradiation, a noise which is higher than expected. Using a fast amplifier (PREMUX chip), we also measure a higher noise.
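As a hedged aside on the higher noise seen with a fast amplifier (a generic front-end scaling, not taken from this paper): the series component of the equivalent noise charge grows with the detector capacitance seen by the preamplifier and with decreasing shaping time,

\[
\mathrm{ENC}_{\mathrm{series}} \propto \frac{C_{\mathrm{det}}}{\sqrt{\tau_{\mathrm{sh}}}}
\]

so an irradiation-induced increase of the inter-strip capacitance, read out with the short shaping time of a fast amplifier such as the PREMUX chip, is expected to raise the noise.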