Progress on development of the new FDIRC PID detector Va'vra, J.; Arnaud, N.; Barnyakov, A.Yu ...
Nuclear instruments & methods in physics research. Section A, Accelerators, spectrometers, detectors and associated equipment,
08/2013, Volume: 718
Journal Article
Peer reviewed
Open access
We present the progress status of a new PID detector concept called FDIRC, intended for use at the SuperB experiment, which requires π/K separation up to a few GeV/c. The new photon camera is made of solid fused-silica optics with a volume 25× smaller and a speed increased by a factor of 10 compared to the BaBar DIRC, and will therefore be much less sensitive to electromagnetic and neutron backgrounds.
Study of H-8500 MaPMT for the FDIRC detector at SuperB Gargano, F.; Arnaud, N.; Barnyakov, A.Yu ...
Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment,
08/2013, Volume: 718
Journal Article, Conference Proceeding
Peer reviewed
An overview of ongoing studies on the Hamamatsu H-8500 Multi-Anode Photomultiplier (MaPMT) is presented. This device will be used for the FDIRC Particle Identification Detector (PID) of the SuperB experiment. The H-8500 MaPMT has been chosen for its excellent single-photon timing capabilities and its highly pixelated design. Results on timing studies, gain uniformity, single-photoelectron detection efficiency uniformity and cross-talk are presented.
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate data analysis performance. The requirements are beyond the capabilities of a single computing farm, so a distributed production model capable of exploiting the existing HEP worldwide distributed computing infrastructure is needed. In this paper we describe the set of tools that have been developed to manage the production of the required simulated events. The production of events follows three main phases: distribution of input data files to the remote-site Storage Elements (SE); job submission, via the SuperB GANGA interface, to all available remote sites; and transfer of output files to the CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping. A replication mechanism allows storing the job output on the local-site SE. Results from the 2010 official productions are reported.
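The three-phase production flow described in the abstract above can be sketched in a few lines. This is a minimal illustration only: the site names, file names and helper functions are hypothetical stand-ins, not the actual SuperB tooling, and real Grid transfers and submissions are replaced by in-memory bookkeeping.

```python
# Hypothetical sketch of the three production phases: input distribution,
# job submission, and output collection. Names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Site:
    name: str
    storage_element: str
    inputs: list = field(default_factory=list)

def distribute_inputs(sites, input_files):
    """Phase 1: copy the input data files to each remote site's SE."""
    for site in sites:
        site.inputs = list(input_files)  # stand-in for a Grid file transfer
    return sites

def submit_jobs(sites):
    """Phase 2: submit one job per input file at every available site."""
    return [(site.name, f) for site in sites for f in site.inputs]

def collect_outputs(jobs, repository):
    """Phase 3: transfer job outputs to the central repository."""
    repository.extend(f"{site}:{name}.out" for site, name in jobs)
    return repository

sites = [Site("site-A", "se-A"), Site("site-B", "se-B")]
jobs = submit_jobs(distribute_inputs(sites, ["evts_001.in"]))
repo = collect_outputs(jobs, [])
print(repo)  # ['site-A:evts_001.in.out', 'site-B:evts_001.in.out']
```

In the real system each phase would also run the consistency-checking, monitoring and bookkeeping procedures the abstract mentions.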
The SuperB asymmetric-energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. The SuperB distributed computing group performed a detailed evaluation of the DIRAC Distributed Infrastructure for use in the SuperB experiment, based on two use cases: End User Analysis and Monte Carlo Production. The tests aim to evaluate DIRAC's ability to manage both gLite and OSG sites, its File Catalog management, and its job and data management features in realistic SuperB use cases.
The SuperB asymmetric-energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. This luminosity translates into the requirement of storing more than 50 PByte of additional data each year, making SuperB an interesting challenge for the data management infrastructure, both at the site level and at the Wide Area Network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is a relevant topic whose development affects how storage infrastructure is configured and set up, both in a local computing cluster and in a distributed paradigm. In this work we report tests on software for data distribution and data replication, focusing on the experience gained with Hadoop and GlusterFS.
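The replication idea behind a Tier1 distributed over several sites can be illustrated with a toy placement policy. This is not the Hadoop or GlusterFS logic tested in the work above, just a hedged sketch with generic site names showing what "each file replicated on distinct sites" means.

```python
# Toy replica-placement sketch: assign n_replicas distinct sites to each
# file, round-robin. Purely illustrative; real systems (Hadoop, GlusterFS)
# use their own placement and consistency policies.
def place_replicas(files, sites, n_replicas=2):
    if n_replicas > len(sites):
        raise ValueError("need at least as many sites as replicas")
    placement = {}
    for i, name in enumerate(files):
        # Rotate the starting site so load spreads across the Tier1 sites.
        placement[name] = [sites[(i + k) % len(sites)] for k in range(n_replicas)]
    return placement

p = place_replicas(["run1.dat", "run2.dat"], ["site-1", "site-2", "site-3"])
print(p)  # {'run1.dat': ['site-1', 'site-2'], 'run2.dat': ['site-2', 'site-3']}
```

Losing any single site then leaves at least one replica of every file reachable, which is the property the distributed Tier1 design relies on.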
SuperB Simulation Production System Tomassetti, L; Bianchi, F; Ciaschini, V ...
Journal of physics. Conference series,
01/2012, Volume: 396, Issue: 2
Journal Article
Peer reviewed
Open access
The SuperB asymmetric e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a peak luminosity of 10³⁶ cm⁻² s⁻¹. The SuperB Computing group is working on a simulation production framework capable of satisfying the experiment's needs. It provides access to distributed resources in order to support both the definition of the detector design and its performance evaluation studies. During the last year the framework has evolved in terms of job workflow, Grid service interfaces and technology adoption. A complete code refactoring and sub-component language porting now permit the framework to sustain distributed production involving resources from two continents and multiple Grid flavors. In this paper we give a complete description of the current state of the production system, its evolution and its integration with Grid services; in particular, we focus on the use of new Grid component features, as in LB and WMS version 3. Results from the last official SuperB production cycle are reported.
The SuperB asymmetric-energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. These parameters require a substantial growth in computing requirements and performance. The SuperB collaboration is thus investigating the advantages of new CPU architectures (multi- and many-core) and how to exploit their task-parallelization capabilities in the framework for simulation and analysis software. In this work we present the underlying architecture which we intend to use and some preliminary performance results from the first framework prototype.
The SuperB asymmetric-energy e+e− collider and detector to be built at the newly founded Nicola Cabibbo Lab will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy quark and heavy lepton sectors requires a data sample of 75 ab⁻¹ and a luminosity target of 10³⁶ cm⁻² s⁻¹. Increasing network performance, also in the Wide Area Network environment, and the capability to read data remotely with good efficiency are providing new possibilities and opening new scenarios in the data access field. Subjects like data access and data availability in a distributed environment are key points in the definition of the computing model for an HEP experiment like SuperB. R&D efforts in this field were carried out during the last year in order to release the Computing Technical Design Report by 2013. WAN direct access to data has been identified as one of the more interesting viable options; robust and reliable protocols such as HTTP/WebDAV and xrootd are the subject of a specific R&D line in a mid-term scenario. In this work we present the R&D results obtained in the study of new data access technologies for typical HEP use cases, focusing on specific protocols such as HTTP and WebDAV in Wide Area Network scenarios. Efficiency, performance and reliability tests performed in a data analysis context are described. Future R&D plans include comparison tests of the HTTP and xrootd protocols in terms of performance, efficiency, security and available features.
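The core of WAN direct access over HTTP, as studied above, is reading only the byte range an analysis job actually needs instead of copying the whole file locally. The sketch below shows the standard HTTP Range mechanism with Python's standard library; the URL is a placeholder and no claim is made about the actual servers or tools used in the SuperB tests.

```python
# Minimal sketch of remote partial reads via HTTP Range requests,
# the mechanism underlying WAN direct data access with HTTP/WebDAV.
import urllib.request

def range_header(start, length):
    """Build the HTTP Range header for bytes [start, start + length)."""
    return {"Range": f"bytes={start}-{start + length - 1}"}

def read_remote_range(url, start, length):
    """Fetch only the requested byte range of a remote file.

    Servers that support ranges reply 206 Partial Content with just
    those bytes, so an analysis job avoids transferring the full file.
    """
    req = urllib.request.Request(url, headers=range_header(start, length))
    with urllib.request.urlopen(req) as resp:
        return resp.read()

print(range_header(4096, 512))  # {'Range': 'bytes=4096-4607'}
```

xrootd offers a comparable partial-read capability through its own protocol, which is why the two are natural subjects for the comparison tests mentioned above.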
The SuperB experiment needs large samples of Monte Carlo simulated events in order to finalize the detector design and to estimate data analysis performance. This work describes the system we developed to manage the production of the required simulated events in a fully distributed environment. The distributed infrastructure includes several sites in Europe and North America and is based on Grid services. The production of simulated events consists of: distribution of input data files to the remote-site Storage Elements (SE), job submission to all available remote sites, and output data transfer to the INFN-CNAF repository. The job workflow includes procedures for consistency checking, monitoring, data handling and bookkeeping metadata communication. A data bookkeeping system has been implemented in order to maintain the information associated with data files and to keep track of the relations between executed jobs, their parameters and their outputs. The distributed production system has been operational since February 2010. Results from the first production cycles (Spring 2010 and Summer 2010) are reported.
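The bookkeeping relation described above, between executed jobs, their parameters and their outputs, amounts to a provenance lookup. The sketch below is a hypothetical minimal model of that idea; the class, method names and example parameters are invented for illustration and are not the SuperB implementation.

```python
# Hypothetical sketch of the bookkeeping idea: record each job's
# parameters and outputs so any output file can be traced back to
# the job (and settings) that produced it.
class Bookkeeping:
    def __init__(self):
        self.jobs = {}      # job_id -> parameter dict
        self.outputs = {}   # output filename -> job_id

    def register_job(self, job_id, params):
        self.jobs[job_id] = dict(params)

    def register_output(self, job_id, filename):
        self.outputs[filename] = job_id

    def provenance(self, filename):
        """Return the parameters of the job that produced `filename`."""
        return self.jobs[self.outputs[filename]]

bk = Bookkeeping()
bk.register_job("job-42", {"generator": "sim", "events": 10000})
bk.register_output("job-42", "sim_42.out")
print(bk.provenance("sim_42.out"))  # {'generator': 'sim', 'events': 10000}
```

A production-scale version would back these mappings with a database, but the lookup direction (output → job → parameters) is the same.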
This work focuses on the architectural, methodological and technological aspects of handling huge amounts of data. In this summary we focus our attention in particular on the description of a special system built to support large-scale data access. The work stems from the need to develop a special-purpose skimming control system; this system has been designed as a collaboration between the Stanford Linear Accelerator Center (SLAC, USA) and the Istituto Nazionale di Fisica Nucleare (INFN, National Institute of Nuclear Physics, Padua, Italy). The goal was to provide handling of more than 10⁷ files, representing physics data collected by the BaBar experiment.