The Cherenkov Telescope Array (CTA) - an array of several tens of Cherenkov telescopes - is the next-generation ground-based instrument in the field of very high energy gamma-ray astronomy. The CTA observatory is expected to produce a main data stream for permanent storage of the order of 1 to 5 GB/s for about 1000 hours of observation per year, thus producing a total data volume of the order of several PB per year. The CPU time needed to calibrate and process one hour of data taking will be of the order of a few thousand CPU hours with current technology. The high data rate of CTA, together with the large computing power requirements for Monte Carlo (MC) simulations, demands dedicated computing resources. Massive MC simulations are needed to study the physics of cosmic-ray atmospheric showers as well as the telescope response and performance for different detectors and layout configurations. Given these large storage and computing requirements, the Grid approach is well suited, and a vast number of MC simulations are already running on the European Grid Infrastructure (EGI). In order to optimize resource usage and to handle all production and future analysis activities in a coherent way, a high-level framework with advanced functionalities is desirable. For this purpose we have preliminarily evaluated the DIRAC framework for distributed computing and tested it against the CTA workload and data management systems. In this paper we present a possible implementation of a Distributed Computing Infrastructure (DCI) Computing Model for CTA, as well as benchmark test results for DIRAC.
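The storage figures quoted above can be checked with simple unit arithmetic. The following sketch takes only the abstract's numbers (1-5 GB/s sustained over about 1000 observing hours per year); everything else is plain unit conversion, not a statement about the actual CTA data model.

```python
# Back-of-the-envelope check of the yearly CTA raw-data volume quoted
# above. Assumes the abstract's figures: a sustained 1-5 GB/s data
# stream during ~1000 h of observation per year (decimal units, 1 PB = 1e6 GB).

SECONDS_PER_HOUR = 3600
HOURS_PER_YEAR = 1000          # observing hours per year (from the abstract)

def yearly_volume_pb(rate_gb_per_s: float) -> float:
    """Total yearly data volume in petabytes for a given sustained rate."""
    gigabytes = rate_gb_per_s * HOURS_PER_YEAR * SECONDS_PER_HOUR
    return gigabytes / 1e6

low = yearly_volume_pb(1.0)    # 3.6 PB/yr at 1 GB/s
high = yearly_volume_pb(5.0)   # 18.0 PB/yr at 5 GB/s
print(f"{low:.1f} PB/yr to {high:.1f} PB/yr")
```

Even the lower rate already reproduces the "several PB per year" figure of the abstract.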
The Cherenkov Telescope Array (CTA) project is an initiative to build the next-generation ground-based very high energy (VHE) gamma-ray instrument. Compared to current imaging atmospheric Cherenkov telescope experiments, CTA will extend the energy range and improve the angular resolution while increasing the sensitivity by up to a factor of 10. With about 100 separate telescopes it will be operated as an observatory open to a wide astrophysics and particle physics community, providing a deep insight into the non-thermal high-energy universe. The CTA Array Control system (ACTL) is responsible for several essential control tasks supporting the evaluation and selection of proposals, as well as the preparation, scheduling, and finally the execution of observations with the array. A possible basic distributed software framework for ACTL being considered is the ALMA Common Software (ACS). The ACS framework follows a container-component model and contains a high-level abstraction layer to integrate different types of devices. To achieve a low-level consolidation of the connected control hardware, OPC UA (OPC Unified Architecture) client functionality is integrated directly into ACS, thus allowing interaction with other OPC UA-capable hardware. The CTA Data Acquisition System comprises the data readout of all cameras and the transfer of the data to a camera server farm, using standard hardware and software technologies. CTA array control also covers concepts for a possible array trigger system and the corresponding clock distribution. The design of the CTA observation scheduler introduces new algorithmic techniques to achieve the required flexibility.
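The container-component model mentioned above can be illustrated with a minimal sketch. This is a toy analogue only: the real ACS framework is CORBA-based and far richer, and every class and method name below is invented for illustration.

```python
# Minimal sketch of a container/component lifecycle pattern in the
# spirit of ACS. All names here are hypothetical; this is not the ACS API.

class Component:
    """A device-control component with a managed lifecycle."""
    def __init__(self, name: str):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True

    def deactivate(self):
        self.active = False


class Container:
    """Hosts components and manages their lifecycle on behalf of clients."""
    def __init__(self):
        self._components = {}

    def get_component(self, name: str) -> Component:
        # Activate on first request, then hand out the shared instance.
        if name not in self._components:
            comp = Component(name)
            comp.activate()
            self._components[name] = comp
        return self._components[name]

    def shutdown(self):
        for comp in self._components.values():
            comp.deactivate()


container = Container()
drive = container.get_component("telescope_drive")
assert drive.active
assert container.get_component("telescope_drive") is drive  # shared instance
```

The point of the pattern is that clients never construct or destroy components themselves; the container owns the lifecycle, which is what allows a heterogeneous set of devices to be managed uniformly.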
The construction process of detectors for the Large Hadron Collider (LHC) experiments is large in scale, heavily constrained by resource availability, and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. Faced with similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs, and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles), in use in CMS, which successfully integrates PDM and WfMS techniques in managing large-scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.
Workflow management in the assembly of CMS ECAL
Baker, N.; Bazan, A.; Estrella, F. ...
Computer Physics Communications, 05/1998, Volume 110, Issue 1-3
Journal Article, Conference Proceeding
Peer-reviewed
As with all experiments in the LHC era, the Compact Muon Solenoid (CMS) detectors will consist of a very large number of constituent parts. Typically, each major detector may be constructed out of over a million precision parts and will be produced and assembled during the next decade by specialised centres distributed world-wide. Each constituent part of each detector must be accurately measured and tested locally prior to its ultimate assembly and integration in the experimental area at CERN. Much of the information collected during this phase will be needed not only to construct the detector, but also for its calibration, to facilitate accurate simulation of its performance, and to assist in its lifetime maintenance. The CRISTAL system is a prototype being developed to monitor and control the production and assembly process of the CMS Electromagnetic Calorimeter (ECAL). The software will be generic in design and hence reusable by other CMS detector groups. This paper discusses the distributed computing problems and design issues posed by this project. The overall software design architecture is described together with the main technology aspects of linking distributed object-oriented databases via CORBA with WWW/Java-based query processing. The paper then concentrates on the design of the workflow management system of CRISTAL.
The Cherenkov Telescope Array (CTA) is the next-generation atmospheric Cherenkov gamma-ray observatory. CTA will consist of two installations, one in the northern and the other in the southern hemisphere, containing tens of telescopes of different sizes. The CTA performance requirements and the inherent complexity associated with the operation, control and monitoring of such a large distributed multi-telescope array lead to new challenges in the field of gamma-ray astronomy. The ACTL (array control and data acquisition) system will consist of the hardware and software necessary to control and monitor the CTA arrays, as well as to time-stamp, read out, filter and store (at aggregated rates of a few GB/s) the scientific data. The ACTL system must be flexible enough to permit the simultaneous automatic operation of multiple sub-arrays of telescopes with minimal personnel effort on site. One of the challenges of the system is to provide a reliable integration of the control of a large and heterogeneous set of devices. Moreover, the system is required to be able to adapt the observation schedule, on timescales of a few tens of seconds, to account for changing environmental conditions or to prioritize incoming scientific alerts from time-critical transient phenomena such as gamma-ray bursts. This contribution provides a summary of the main design choices and plans for building the ACTL system.
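The requirement to re-plan observations within tens of seconds on an incoming alert can be sketched as a priority queue in which a time-critical transient preempts the regular schedule. This is a toy model only; the real ACTL scheduler uses far more elaborate algorithms, and all names and priority values below are illustrative.

```python
import heapq
import itertools

# Toy model of alert-driven rescheduling: pending observations sit in a
# priority queue, and a time-critical alert (e.g. a gamma-ray burst)
# enters with a priority that beats everything already queued.
# Names and priority values are illustrative, not taken from ACTL.

class Scheduler:
    """Toy priority scheduler: lower priority number is observed first."""
    def __init__(self):
        self._queue = []
        self._counter = itertools.count()  # tie-breaker preserving order

    def submit(self, target: str, priority: int):
        heapq.heappush(self._queue, (priority, next(self._counter), target))

    def alert(self, target: str):
        # A transient alert jumps ahead of all regularly scheduled targets.
        self.submit(target, priority=-1)

    def next_observation(self) -> str:
        return heapq.heappop(self._queue)[2]


sched = Scheduler()
sched.submit("Crab Nebula", priority=5)
sched.submit("survey field", priority=10)
sched.alert("GRB alert")            # incoming science alert
print(sched.next_observation())     # the alert is observed first
```

In practice the hard part is not the queue but bounding the latency of the whole re-planning loop (alert reception, schedule recomputation, telescope repointing) to the few-tens-of-seconds scale stated above.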
At a time when many companies are embracing business process re-engineering and are under pressure to reduce "time-to-market", the management of product information from creative design through to manufacture has become increasingly important. Traditionally, design engineers have employed product data management systems to coordinate and control access to documented versions of product designs. However, these systems provide control only at the collaborative design level and are seldom used beyond design. Workflow management systems, on the other hand, are employed to coordinate and support the more complex and repeatable work processes of the production environment. Most commercial workflow products cannot support the highly dynamic activities found both in the design stages of product development and in rapidly evolving workflow definitions. The integration of product data management with workflow management could provide support for product development from initial CAD/CAM collaborative design through to the support and optimisation of production workflow activities. This paper investigates such an integration and proposes a philosophy for the support of product data throughout the full development and production lifecycle.
The Cherenkov Telescope Array (CTA) \cite{CTA:2010} will be the successor to current Imaging Atmospheric Cherenkov Telescopes (IACTs) like H.E.S.S., MAGIC and VERITAS. CTA will improve in sensitivity by about an order of magnitude compared to the current generation of IACTs. The energy range will extend from well below 100 GeV to above 100 TeV. To accomplish these goals, CTA will consist of two arrays, one in each hemisphere, each comprising 50-80 telescopes of three different types with different mirror sizes. It will be the first open observatory for very high energy \(\gamma\)-ray astronomy. The Array Control working group of CTA is currently evaluating which existing technologies are best suited for a project like CTA. The solutions under consideration comprise the ALMA Common Software (ACS), the OPC Unified Architecture (OPC UA) and the Data Distribution Service (DDS) for bulk data transfer. The first applications, such as an automatic observation scheduler and the control software for some prototype instrumentation, have already been developed.
In July 2012, as the four ground-based gamma-ray telescopes of the H.E.S.S. (High Energy Stereoscopic System) array reached their tenth year of operation in the Khomas Highlands, Namibia, a fifth telescope took its first data as part of the system. This new Cherenkov detector, comprising a 614.5 m^2 reflector with a highly pixelized camera in its focal plane, improves the sensitivity of the current array by a factor of two and extends its energy domain down to a few tens of GeV. The present Part I of the paper gives a detailed description of the fifth H.E.S.S. telescope's camera, presenting the details of both the hardware and the software and emphasizing the main improvements as compared to previous H.E.S.S. camera technology.