IGUANA: a high-performance 2D and 3D visualisation system
Alverson, G.; Eulisse, G.; Muzaffar, S. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 11/2004, Volume 534, Issue 1
Journal Article
Peer reviewed
The IGUANA project has developed visualisation tools for multiple high-energy physics experiments. At the core of IGUANA is a generic, high-performance visualisation system based on OpenInventor and OpenGL. This paper describes the back-end and a feature-rich 3D visualisation system built on it, as well as a new 2D visualisation system that can automatically generate 2D views from 3D data, for example to produce R/Z or X/Y detector displays from an existing 3D display with little effort.
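As an illustration of the geometry behind such auto-generated views (a minimal sketch, not IGUANA's actual implementation), an R/Z view maps each 3D point to (z, r) with r the transverse radius, while an X/Y view simply drops the z coordinate:

    #include <cmath>

    struct Point3D { double x, y, z; };
    struct Point2D { double u, v; };

    // R/Z view: horizontal axis along the beam line (z),
    // vertical axis the transverse radius r = sqrt(x^2 + y^2).
    Point2D toRZ(const Point3D &p)
    {
      return { p.z, std::sqrt(p.x * p.x + p.y * p.y) };
    }

    // X/Y view: project onto the plane transverse to the beam.
    Point2D toXY(const Point3D &p)
    {
      return { p.x, p.y };
    }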
IGUANA has collaborated with the open-source gl2ps project to create high-quality vector PostScript output: true vector graphics from any OpenGL 2D or 3D display, complete with surface shading and culling of invisible surfaces. We describe how it works.
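For orientation, the usual way an application drives gl2ps is to re-issue its normal OpenGL drawing calls between gl2psBeginPage and gl2psEndPage, enlarging the feedback buffer until the page fits. The sketch below assumes a hypothetical drawScene() function and is not IGUANA's own code:

    #include <cstdio>
    #include <GL/gl.h>
    #include <gl2ps.h>

    // drawScene() stands in for the application's normal OpenGL rendering;
    // gl2ps captures the primitives it emits into the output file.
    extern void drawScene();

    void writeVectorEPS(const char *filename)
    {
      FILE *fp = std::fopen(filename, "wb");
      GLint viewport[4];
      glGetIntegerv(GL_VIEWPORT, viewport);

      GLint state = GL2PS_OVERFLOW, bufferSize = 0;
      while (state == GL2PS_OVERFLOW) {       // grow the buffer until the page fits
        bufferSize += 1024 * 1024;
        gl2psBeginPage("scene", "sketch", viewport,
                       GL2PS_EPS, GL2PS_BSP_SORT,
                       GL2PS_DRAW_BACKGROUND | GL2PS_OCCLUSION_CULL,
                       GL_RGBA, 0, NULL, 0, 0, 0,
                       bufferSize, fp, filename);
        drawScene();                          // same calls as for on-screen rendering
        state = gl2psEndPage();
      }
      std::fclose(fp);
    }

The BSP sorting and occlusion-culling options correspond to the hidden-surface removal mentioned above; sorting primitives in eye space is what makes the output true vector graphics rather than a rasterised snapshot.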
We also describe how one can measure the memory and performance costs of various OpenInventor constructs and how to test scene graphs. We present good patterns to follow and bad patterns to avoid. We have added more advanced tools to OpenInventor, such as per-object clipping, slicing, lighting and animation, as well as multiple linked views, and describe them in this paper. We give details on how to edit object appearance efficiently and easily, even dynamically as a function of object properties, with instant visual feedback to the user.
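By way of illustration only (a minimal OpenInventor/Coin3D sketch under assumed names, not IGUANA code), per-object clipping and property-driven appearance can be expressed by scoping an SoClipPlane and an SoMaterial inside the object's own SoSeparator, so the rest of the scene graph is unaffected:

    #include <Inventor/SbColor.h>
    #include <Inventor/SbLinear.h>
    #include <Inventor/nodes/SoSeparator.h>
    #include <Inventor/nodes/SoClipPlane.h>
    #include <Inventor/nodes/SoMaterial.h>
    #include <Inventor/nodes/SoCube.h>

    // Build one object whose clip plane and colour are private to itself.
    // 'energy' is a hypothetical object property driving the appearance.
    SoSeparator *makeClippedObject(float energy)
    {
      SoSeparator *sep = new SoSeparator;

      SoClipPlane *clip = new SoClipPlane;                    // per-object clipping
      clip->plane.setValue(SbPlane(SbVec3f(1, 0, 0), 0.0f));  // keep the x > 0 half-space
      sep->addChild(clip);

      SoMaterial *mat = new SoMaterial;                       // editable appearance node
      mat->diffuseColor.setValue(energy > 10.0f ? SbColor(1, 0, 0)    // high-energy: red
                                                : SbColor(0, 0, 1));  // otherwise: blue
      sep->addChild(mat);

      sep->addChild(new SoCube);                              // stand-in for a detector volume
      return sep;
    }

Because appearance lives in a field of the material node, changing mat->diffuseColor later triggers Inventor's notification mechanism and the attached viewers redraw immediately, which is the kind of instant visual feedback described above.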
A coherent environment of software improvement tools for CMS
Eulisse, G.; Muzaffar, S.; Osborne, I. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 11/2004, Volume 534, Issue 1
Journal Article
Peer reviewed
CMS has developed approximately one million lines of C++ code and uses many more from HEP, Grid and public domain projects. We describe a suite of tools which help to manage this complexity by measuring software dependencies, quality metrics, and CPU and memory performance. This coherent environment integrates and extends existing open-source tools where possible and provides new in-house components where a suitable solution does not already exist.
This is a freely available environment with a graphical user interface which can be run on any software without the need to recompile or instrument it. We have developed ignominy, which performs software dependency analysis of source code, binary products and external software. CPU profiling is based on oprofile, with added features such as profile snapshots, distributed profiling and aggregate profiles for farm systems, including server-side tools for collecting profile data. Finally, we have developed a low-overhead performance and memory profiling tool, MemProf, which can perform (gprof-style) hierarchical performance profiling in a way that works with multiple threads and dynamically loaded libraries (unlike gprof). It also gathers exact memory allocation profiles, including which code allocates most, in what sizes of chunks, for how long, where the memory is freed and where it is leaked.
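As a rough illustration of the underlying technique only (not MemProf's actual implementation), exact allocation counts can be gathered in C++ by replacing the global allocation operators; a real profiler would additionally record call stacks, chunk sizes and lifetimes:

    #include <atomic>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    // Global counters updated on every C++ allocation.
    static std::atomic<unsigned long long> nAllocations{0};
    static std::atomic<unsigned long long> nBytes{0};

    void *operator new(std::size_t size)
    {
      nAllocations.fetch_add(1, std::memory_order_relaxed);
      nBytes.fetch_add(size, std::memory_order_relaxed);
      if (void *p = std::malloc(size))
        return p;
      throw std::bad_alloc();
    }

    void operator delete(void *p) noexcept
    {
      std::free(p);
    }

    int main()
    {
      int *buffer = new int[1000];   // routed through the replaced operator new
      delete [] buffer;
      std::printf("allocations: %llu, bytes: %llu\n",
                  nAllocations.load(), nBytes.load());
      return 0;
    }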
We describe this tool suite and how it has been used to enhance the quality of CMS software.
Monte Carlo production in CMS has received a major boost in performance and scale since the CHEP06 conference. The production system has been re-engineered in order to incorporate the experience gained in running the previous system and to integrate production with the new CMS event data model, data management system and data processing framework. The system is interfaced to the two major computing Grids used by CMS, the LHC Computing Grid (LCG) and the Open Science Grid (OSG). Operational experience and integration aspects of the new CMS Monte Carlo production system are presented, together with an analysis of production statistics. The new system automatically handles job submission, resource monitoring, job queuing, job distribution according to the available resources, data merging, and registration of data into the data bookkeeping, data location, data transfer and placement systems. Compared to the previous production system, automation, reliability and performance have been considerably improved. A more efficient use of computing resources and a better handling of the inherent Grid unreliability have resulted in an increase of production scale by about an order of magnitude, running on the order of ten thousand jobs in parallel and yielding more than two million events per day.
The CMS experiment is currently developing a computing system capable of serving, processing and archiving the large number of events that will be generated when the CMS detector starts taking data. During 2004 CMS undertook a large-scale data challenge to demonstrate the ability of the CMS computing system to cope with a sustained data-taking rate equivalent to 25% of the startup rate. Its goals were: to run CMS event reconstruction at CERN for a sustained period at a 25 Hz input rate; to distribute the data to several regional centers; and to enable data access at those centers for analysis. Grid middleware was utilized to help complete all aspects of the challenge. To continue to provide scalable access to the data from anywhere in the world, CMS is developing a layer of software that uses Grid tools to gain access to data and resources, and that aims to provide physicists with a user-friendly interface for submitting their analysis jobs. This paper describes the data challenge experience with Grid infrastructure and the current development of the CMS analysis system.
The event display and data quality monitoring visualisation systems are especially crucial for commissioning CMS in the imminent CMS physics run at the LHC, and they have already proved invaluable for the CMS magnet test and cosmic challenge. We describe how these systems are used to navigate and filter the immense amounts of complex event data from the CMS detector and to present clear and flexible views of the salient features to the shift crews and offline users. These allow shift staff and experts to navigate from a top-level general view to very specific monitoring elements in real time, helping them validate data quality and ascertain the causes of problems. We describe how events may be accessed in the high-level trigger filter farm, at the CERN Tier-0 centre, and in offsite centres to help ensure good data quality at all points in the data processing workflow. Emphasis has been placed on deployment issues in order to ensure that experts and general users may use the visualisation systems at CERN, in remote operations and monitoring centres offsite, and from their own desktops.
The CMS production system has undergone a major architectural upgrade from its predecessor, with the goal of reducing the operational manpower needed and preparing for the large-scale production required by the CMS physics plan. The new production system is a tiered architecture that facilitates robust and distributed production request processing and takes advantage of the multiple Grid and farm resources available to the CMS experiment.
Monte Carlo production for the CMS experiment is carried out in a distributed computing environment; the goal of producing 30M simulated events per month in the first half of 2007 has been reached. A brief overview of the production operations and statistics is presented.
The results of experimental dosimetry carried out with beryllium oxide thermoluminescent material (BeO TLD) are presented. In particular, this material shows a good linearity of response to UV radiation at 365 nm, up to 200 mJ/cm², and a spectral sensitivity peaking at 340 nm. The advantages and disadvantages of BeO TLD in comparison with solid-state detectors are discussed, and its use is suggested for personal and environmental dosimetry of UVA radiation in photochemotherapy.