The CMS all-silicon Tracker, comprising 16588 modules covering an area of more than 200 m², needs to be precisely calibrated and aligned in order to correctly interpret and reconstruct the events recorded by the detector, ensuring that its performance fully meets the requirements of the CMS physics research program. The performance has been studied carefully since the start of data taking: the detector noise, the data integrity, the signal-to-noise ratio, and the hit resolution and efficiency have all been monitored over time. In 2010 the Tracker was successfully aligned using tracks from cosmic rays and pp collisions, following the time-dependent movements of its innermost pixel layers. The ultimate local precision is now achieved by determining the sensor curvatures, challenging the algorithms to determine about 200,000 parameters. The remaining alignment uncertainties are dominated by systematic effects, which are controlled by adding further information, such as constraints from resonance decays.
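The core idea of track-based alignment, determining module positions by minimizing track-hit residuals over many tracks, can be illustrated with a deliberately tiny toy. This is a sketch only: a one-dimensional telescope of a handful of modules, iterating between straight-line track fits and per-module residual averages. It is not the actual CMS algorithm (which solves the full ~200,000-parameter problem simultaneously), and all numbers are invented. Note that the overall shift and shear of the telescope are "weak modes" the tracks cannot constrain, so only the component of the misalignment orthogonal to those modes is recoverable.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modules, n_tracks = 6, 500
true_shift = rng.normal(0, 0.1, n_modules)   # unknown module misalignments (cm)
y = np.arange(n_modules, dtype=float)        # module positions along the beam axis

# Simulate straight tracks x = a + b*y, hit on every module with measurement noise
a = rng.normal(0, 1, n_tracks)
b = rng.normal(0, 0.1, n_tracks)
hits = (a[:, None] + b[:, None] * y[None, :]
        + true_shift + rng.normal(0, 0.02, (n_tracks, n_modules)))

A = np.vstack([np.ones_like(y), y]).T        # straight-line track model (a, b)
est = np.zeros(n_modules)                    # current alignment estimate
for _ in range(10):                          # iterate: refit tracks, update shifts
    corr = hits - est                        # hits corrected by current alignment
    coef, *_ = np.linalg.lstsq(A, corr.T, rcond=None)   # per-track (a, b) fits
    res = hits - (A @ coef).T                # residuals w.r.t. fitted tracks
    est = res.mean(axis=0)                   # mean residual per module
est -= est.mean()                            # remove the unconstrained global shift
```

After a couple of iterations `est` reproduces the misalignments up to the unconstrained shift and shear modes, which is exactly why the abstract's extra information (resonance-decay constraints) is needed in the real detector to control such systematic effects.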
The CMS software framework (CMSSW) is a complex project evolving very rapidly as the first LHC colliding beams approach. The computing requirements constrain performance in terms of CPU time, memory footprint, and event size on disk, to allow for planning and managing the computing infrastructure necessary to handle the needs of the experiment. A performance suite of tools has been developed to track all aspects of code performance through the software release cycles, allowing for regression testing and guiding code development for optimization. In this talk, we describe the CMSSW performance suite tools and present some sample performance results from the release integration process for the CMS software.
The demanding computing needs of the CMS experiment require thoughtful planning and management of its computing infrastructure. A key factor in this process is the use of realistic benchmarks when assessing the computing power of the different architectures available. In recent years a discrepancy has been observed between the CPU performance estimates given by the reference benchmark for HEP computing (SPECint) and the actual performance of HEP code. Making use of the CPU performance tools from the CMSSW performance suite, comparative CPU performance studies have been carried out on several architectures. A benchmarking suite has been developed and integrated in the CMSSW framework, allowing computing centers and interested third parties to benchmark architectures directly with CMSSW. The CMSSW benchmarking suite can be used out of the box to test and compare several machines in terms of CPU performance, and to report the different benchmarking scores and results at the desired level of detail (e.g. by processing step). In this talk we briefly describe the CMSSW software performance suite, and in detail the client/server design of the CMSSW benchmarking suite, the performance data analysis, and the available CMSSW benchmark scores. The experience of using HEP code for benchmarking is discussed and CMSSW benchmark results are presented.
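A per-processing-step benchmark score of the kind described above amounts to converting measured CPU time per event into a throughput figure. The sketch below shows that reduction; the step names echo common CMSSW workflow stages, but every number is invented for illustration and this is not the actual benchmarking-suite report format.

```python
from statistics import mean

# Hypothetical per-step CPU times in seconds/event, as a benchmarking client
# might collect them over a few events (all values illustrative).
timings = {
    "GEN-SIM": [41.2, 39.8, 40.5, 42.1],
    "DIGI":    [5.1, 4.9, 5.0, 5.2],
    "RECO":    [12.3, 11.9, 12.1, 12.6],
}

def benchmark_scores(timings):
    """Throughput (events/s) per processing step, plus a combined score
    assuming the steps run sequentially on a single core."""
    per_step = {step: 1.0 / mean(t) for step, t in timings.items()}
    combined = 1.0 / sum(mean(t) for t in timings.values())
    return per_step, combined

per_step, combined = benchmark_scores(timings)
```

Reporting "by processing step" matters because the steps stress hardware differently (simulation is floating-point heavy, reconstruction more branchy), so two machines can rank differently depending on which step dominates a site's workload.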
The CMS software framework (CMSSW) is a modular object-oriented data analysis framework enabling the CMS collaboration to process and analyze the fast-growing LHC collision data set. A software performance suite of tools has been developed and integrated in CMSSW itself to keep track of CPU time, memory footprint, and event size on disk. These three metrics are key constraints on software development, needed to meet the requirements considered in the planning and management of the CMS computing infrastructure. The performance suite allows the measurement and tracking of performance across the framework, storing the results in a dedicated database. A web application is deployed to publish the results, making them easily accessible to software release managers and allowing for automatic integration in the quality assurance of the CMSSW release cycle. The performance suite is also available to individual developers for dedicated code optimization, and the web application allows historical regression tracking and comparisons across releases. This paper describes the performance suite tools and the performance of the CMSSW framework during the first years of LHC collisions.
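The regression checks such a suite performs between releases reduce to comparing the tracked metrics against a baseline and flagging increases beyond a tolerance. The following is a minimal sketch of that comparison; the metric names, values, and 5% threshold are assumptions for illustration, not the actual CMSSW performance-suite schema.

```python
# Metrics for a baseline release and a candidate release (values invented).
baseline  = {"cpu_time_s": 12.0, "peak_rss_mb": 1850.0, "event_size_kb": 210.0}
candidate = {"cpu_time_s": 13.5, "peak_rss_mb": 1840.0, "event_size_kb": 213.0}

def regressions(baseline, candidate, threshold=0.05):
    """Return the metrics whose relative increase exceeds the threshold,
    mapped to their fractional change (lower is better for all three)."""
    flagged = {}
    for name, ref in baseline.items():
        delta = (candidate[name] - ref) / ref
        if delta > threshold:
            flagged[name] = round(delta, 3)
    return flagged
```

Here only the CPU time (up 12.5%) would be flagged; memory improved slightly and the event-size growth stays within tolerance, so a release manager could trace the regression to a specific release-to-release change.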
The SPEC CINT benchmark has been used as a performance reference for computing in the HEP community for the past 20 years. The SPECint_base2000 (SI2K) unit of performance has been used by the major HEP experiments both in the Computing Technical Design Reports for the LHC experiments and in the evaluation of the computing centres. At recent HEPiX meetings, several HEP sites have reported disagreements between actual machine performance and the scores reported by SPEC. Our group performed a detailed comparison of the simulation and reconstruction code performance of the four LHC experiments in order to find a successor to the SI2K benchmark. We analyzed the new benchmarks from the SPEC CPU2006 suite, both integer and floating point, in order to find the best agreement with the behaviour of HEP code, with particular attention paid to reproducing the actual environment of a HEP farm (i.e., each job running independently on each core) and to matching the compiler, optimization level, share of integer and floating-point operations, and ease of use.
The growing role of data science (DS) and machine learning (ML) in high-energy physics (HEP) is well established and pertinent given the complex detectors, large data sets, and sophisticated analyses at the heart of HEP research. Moreover, exploiting symmetries inherent in physics data has inspired physics-informed ML as a vibrant sub-field of computer science research. HEP researchers benefit greatly from widely available materials for use in education, training, and workforce development. They are also contributing to these materials and providing software to DS/ML-related fields. Increasingly, physics departments are offering courses at the intersection of DS, ML, and physics, often using curricula developed by HEP researchers and involving open software and data used in HEP. In this white paper, we explore synergies between HEP research and DS/ML education, discuss opportunities and challenges at this intersection, and propose community activities that will be mutually beneficial.
A search for stable and long-lived massive particles of electric charge |Q/e| = 1 or fractional charges of 2/3, 4/3, and 5/3, produced in electron-positron collisions, is reported in this thesis. The search is performed on data collected by the OPAL detector at LEP at center-of-mass energies from 130 to 209 GeV. The existence of such particles is predicted by several supersymmetric scenarios beyond the Standard Model of particle physics. Because of their charge and high mass, such particles would yield an anomalous ionization energy loss, dE/dx. The data analysis presented is a topological search based on the very accurate dE/dx measurement of the OPAL detector. The massive charged particles are assumed to be pair-produced in electron-positron collisions and not to interact strongly. No evidence for the production of these particles is observed, and model-independent upper limits on the production cross-section are therefore derived. The results are also interpreted within the framework of the Constrained Minimal Supersymmetric Standard Model (CMSSM), yielding lower mass limits for scalar muons and scalar taus. All mass and cross-section limits are derived at the 95% confidence level and are valid for particles with lifetimes longer than 10⁻⁶ s.
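When no candidate events are observed, the model-independent 95% CL cross-section upper limit mentioned above follows from classical Poisson statistics: the limit on the expected signal count is N₉₅ = −ln(0.05) ≈ 3.0, which is then divided by the selection efficiency times the integrated luminosity. The sketch below computes this; the efficiency and luminosity values are illustrative placeholders, not the ones used in the thesis, and the calculation ignores backgrounds and systematic uncertainties.

```python
import math

def poisson_upper_limit(n_obs=0, cl=0.95, step=1e-3):
    """Smallest signal mean s with P(N <= n_obs | s) <= 1 - cl
    (classical Poisson upper limit, no background)."""
    s = 0.0
    while sum(math.exp(-s) * s**k / math.factorial(k)
              for k in range(n_obs + 1)) > 1 - cl:
        s += step
    return s

n95 = poisson_upper_limit(0)          # for zero observed events: -ln(0.05) ~ 3.0
eff, lumi_pb = 0.6, 600.0             # illustrative efficiency, luminosity (pb^-1)
sigma95_pb = n95 / (eff * lumi_pb)    # cross-section upper limit in pb
```

The per-mass-point limits in such a search come from repeating this with the mass-dependent efficiency, which is why the quoted limits are valid only for lifetimes long enough that the efficiency assumptions hold.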
AIP Conf. Proc. 903 (2007) no. 1, 209-212

In gauge-mediated supersymmetry (SUSY) breaking (GMSB) models the lightest supersymmetric particle (LSP) is the gravitino, and the phenomenology is driven by the nature of the next-to-lightest SUSY particle (NLSP), which is either the lightest neutralino, the stau, or mass-degenerate sleptons. Since the NLSP decay length is effectively unconstrained, searches for all possible lifetimes and NLSP topologies predicted by GMSB models in e+e- collisions are performed on the data sample collected by OPAL at centre-of-mass energies up to 209 GeV at LEP. Results independent of the NLSP lifetime are presented for all relevant final states, including direct NLSP pair-production and, for the first time, also NLSP production via cascade decays of heavier SUSY particles. None of the searches shows evidence for SUSY particle production. Cross-section limits are presented at the 95% confidence level both for direct NLSP production and for cascade decays, providing the most general, almost model-independent results. These results are then interpreted in the framework of the minimal GMSB (mGMSB) model, where large areas of the accessible parameter space are excluded. In the mGMSB model, the NLSP masses are constrained to be larger than 53.5 GeV/c^2, 87.4 GeV/c^2, and 91.9 GeV/c^2 in the neutralino, stau, and slepton co-NLSP scenarios, respectively. A complete scan over the parameters of the mGMSB model is performed, constraining the universal SUSY mass scale Lambda from the direct SUSY particle searches: Lambda > 40, 27, 21, 17, 15 TeV/c^2 for messenger indices N = 1, 2, 3, 4, 5, respectively, for all NLSP lifetimes.