The INFN CNAF Tier-1 has been the Italian national data center for INFN computing activities since 2005. As one of the reference sites for data storage and computing in the High Energy Physics (HEP) community, it offers resources to all four LHC experiments and to many other HEP and non-HEP collaborations. The CDF experiment used the INFN Tier-1 resources for many years and, after the end of data taking in 2011, faced the challenge of both preserving the large amount of scientific data produced and retaining the ability to access and reuse the whole dataset in the future using its specific computing model. For this reason, starting from the end of 2012, the CDF Italian collaboration, together with INFN CNAF and Fermilab (FNAL), launched a Long Term Data Preservation (LTDP) project with the purpose of preserving and sharing all the CDF data together with the related analysis framework and knowledge. This is particularly challenging, since part of the software releases is no longer supported and the amount of data to be preserved is rather large. The first objective of the collaboration was the copy of all the CDF Run-2 raw data and user-level ntuples (about 4 PB) from FNAL to the INFN CNAF tape library backend over a dedicated network link. This task was successfully accomplished over the past years and, in addition, a system performing regular integrity checks of the data has been developed. This system ensures that all the data remain fully accessible, and it can automatically retrieve an identical copy of any problematic or corrupted file from the original dataset at FNAL. A dedicated software framework, which allows users to access and analyse the data with the complete CDF analysis chain, was also set up, together with detailed documentation for users and system administrators aimed at the long-term future.
Furthermore, a second and more ambitious objective emerged during 2016 with a feasibility study for reading the first CDF Run-1 dataset, now stored as a unique copy on a large number (about 4,000) of old Exabyte tape cartridges. With the installation of compatible refurbished tape-drive autoloaders, an initial test bed was completed and the first phase of the Exabyte tape reading activity started. In the present article we illustrate the state of the art of the LTDP project, with particular attention to the technical solutions adopted to store and maintain the CDF data and the analysis framework, and to overcome the issues that arose during the recent activities. The CDF model could also prove useful for designing new data preservation projects for other experiments or use cases.
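The integrity-check system described in the abstract above can be sketched as a checksum sweep over a file catalog, re-fetching any file whose checksum no longer matches. This is a minimal illustration only: the catalog layout, hash choice, and `fetch_from_fnal` callback are assumptions, not the actual CNAF implementation.

```python
import hashlib

def sha1_of(path, chunk_size=1 << 20):
    """Compute the SHA-1 checksum of a file, reading it in 1 MiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(catalog, fetch_from_fnal):
    """Compare each file's checksum against the catalog; re-fetch mismatches.

    `catalog` maps file path -> expected checksum; `fetch_from_fnal` is a
    hypothetical callable that restores an identical copy of a corrupted
    file from the original dataset.  Returns the list of corrupted paths.
    """
    corrupted = []
    for path, expected in catalog.items():
        if sha1_of(path) != expected:
            corrupted.append(path)
            fetch_from_fnal(path)  # restore an identical copy
    return corrupted
```

Run periodically (e.g. from a scheduler), such a sweep gives the guarantee stated in the abstract: every archived file is either verified readable or automatically re-copied from the source.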
In view of Run 3 (2020), the LHCb experiment is planning a major upgrade to fully read out events at the 40 MHz collision rate, in order to greatly increase the statistics of the collected samples and go beyond Run 2 in precision. An unprecedented amount of data will be produced and fully reconstructed in real time to perform fast selection and categorization of interesting events. The collaboration has decided to adopt a fully software trigger, which will have a total time budget of 13 ms to take a decision. This calls for faster hardware and software. In this talk we present our efforts on the application of new technologies, such as GPU cards, to the future LHCb trigger system. During Run 2, a node equipped with a GPU was inserted in the LHCb online monitoring system; during normal data taking, a subset of real events is sent to the node and processed in parallel by GPU-based and CPU-based track reconstruction algorithms. This gives us the unique opportunity to test the new hardware and the new algorithms in a realistic environment. We present the setup of the testbed and the algorithms developed for parallel architectures, and discuss their performance compared to the current LHCb track reconstruction algorithms.
The CDF experiment at Fermilab ended its Run-II phase in September 2011 after 11 years of operations and 10 fb−1 of collected data. The CDF computing model is based on a Central Analysis Farm (CAF) consisting of local computing and storage resources, supported by OSG and LCG resources accessed through dedicated portals. At the beginning of 2011 a new portal, Eurogrid, was developed to effectively exploit computing and disk resources in Europe: a dedicated farm and storage area at the Tier-1 CNAF computing center in Italy, and additional LCG computing resources at different Tier-2 sites in Italy, Spain, Germany and France, are accessed through a common interface. The goal of this project is a portal that is easy to integrate in the existing CDF computing model, completely transparent to the user, and requiring a minimum amount of maintenance support from the CDF collaboration. In this paper we review the implementation of this new portal and its performance in the first months of usage. Eurogrid is based on glideinWMS, a glidein-based Workload Management System (WMS) that works on top of Condor. As the CDF CAF is based on Condor, the choice of glideinWMS was natural and the implementation seamless. Thanks to the pilot jobs, user-specific requirements and site resources are matched very efficiently, completely transparently to the users. Official since June 2011, Eurogrid effectively complements and supports the CDF computing resources, offering an optimal solution for the future in terms of the manpower required for administration, support and development.
Interest in parallel architectures applied to real-time selections is growing in High Energy Physics (HEP) experiments. In this paper we describe performance measurements of Graphics Processing Units (GPUs) and the Intel Many Integrated Core (MIC) architecture when applied to a typical HEP online task: the selection of events based on the trajectories of charged particles. As a benchmark we use a scaled-up version of the algorithm used at the CDF experiment at the Tevatron for online track reconstruction, the SVT algorithm, as a realistic test case for low-latency trigger systems using new computing architectures at LHC experiments. We examine the complexity/performance trade-off in porting existing serial algorithms to many-core devices. Measurements of both data processing and data transfer latency are shown, considering different I/O strategies to and from the parallel devices.
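The I/O trade-off mentioned in the abstract above can be illustrated with a toy latency model: either the host-to-device transfer, the kernel, and the device-to-host transfer run back to back, or the three stages are pipelined over a stream of events so that the slowest stage dominates. The `max()` pipelining model and the numbers below are illustrative assumptions, not measurements from the paper.

```python
def total_latency(t_in, t_compute, t_out, overlap=False):
    """Toy per-event latency model for a many-core co-processor.

    Sequential strategy: copy in, compute, copy out, one after the other.
    Overlapped strategy: transfers are pipelined with the kernel across a
    stream of events, so throughput is limited by the slowest stage
    (an idealized model; real devices add scheduling overheads).
    """
    if overlap:
        return max(t_in, t_compute, t_out)
    return t_in + t_compute + t_out
```

With illustrative stage times of 10, 25 and 5 microseconds, the sequential strategy costs 40 µs per event while the pipelined one approaches 25 µs, showing why transfer strategy matters as much as kernel speed for low-latency triggers.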
Long-term preservation of experimental data (both raw and derived formats) is one of the emerging requirements coming from scientific collaborations. Within the High Energy Physics community, this effort is coordinated by the Data Preservation in High Energy Physics (DPHEP) group. CNAF is not only one of the Tier-1s for the LHC experiments; it is also a computing center providing computing and storage resources to many other HEP and non-HEP scientific collaborations, including the CDF experiment. After the end of data taking in 2011, CDF is now facing the challenge of both preserving the large amount of data produced during several years of data taking and retaining the ability to access and reuse it in the future. CNAF is heavily involved in the CDF Data Preservation activities, in collaboration with the Fermi National Accelerator Laboratory (FNAL) computing sector. At the moment, about 4 PB of data (raw data and analysis-level ntuples) are starting to be copied from FNAL to the CNAF tape library, and the framework to subsequently access the data is being set up. In parallel to the data access system, a data analysis framework is being developed which makes it possible to run the complete CDF analysis chain in the long-term future, from raw data reprocessing to analysis-level ntuple production. In this contribution we illustrate the technical solutions we put in place to address the issues encountered as we proceeded in this activity.
A pixel-segmented ionization chamber has been designed and built by Torino University and INFN. The detector features a 24×24 cm² active area divided into 1024 independent cylindrical ionization chambers and can be read out in 500 μs without introducing dead time; the digital charge quantum can be adjusted between 100 fC and 800 fC. The sensitive volume of each single ionization chamber is 0.07 cm³.
The purpose of the detector is to ease the two-dimensional (2D) verification of fields with complex shapes and large gradients. The detector was characterized in a PMMA phantom using ⁶⁰Co and 6 MV x-ray photon beams. It has shown good signal linearity with respect to dose and dose rate to water. The average sensitivity of a single ionization chamber was 2.1 nC/Gy, constant within 0.5% over one month of daily measurements. Charge collection efficiency was 0.985 at the operating polarization voltage of 400 V and a 3.5 Gy/min dose rate. Tissue maximum ratio and output factor have been compared with a Farmer ionization chamber and were found in good agreement. The dose profiles have been compared with those obtained with an ionization chamber in a water phantom for the field sizes supplied by a 3D-Line dynamic multileaf collimator. These results show that this detector can be used for 2D dosimetry of x-ray photon beams, providing good spatial resolution and considerably reducing the time spent on dosimetric verification of complex radiation fields.
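As a worked example of the figures quoted above, the dose delivered to one chamber pixel can be estimated from the collected charge using the 2.1 nC/Gy average sensitivity, with a correction for the 0.985 collection efficiency. The simple division-based correction below is an illustrative assumption, not the calibration procedure used by the authors.

```python
def dose_from_charge(charge_nC, sensitivity_nC_per_Gy=2.1,
                     collection_efficiency=0.985):
    """Estimate dose to water (Gy) from the charge collected by one pixel.

    Sensitivity (2.1 nC/Gy) and efficiency (0.985 at 400 V, 3.5 Gy/min)
    are the values quoted in the abstract; the correction model is an
    illustrative assumption.
    """
    # Divide out the sensitivity, then undo the charge-collection loss.
    return charge_nC / (sensitivity_nC_per_Gy * collection_efficiency)
```

For instance, a pixel that collects 2.1 nC at perfect collection efficiency would correspond to 1 Gy; with the 0.985 efficiency correction the estimate rises by about 1.5%.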
Purpose
Our aim was to allocate a digital mammography unit to the screening programme on the basis of the ALARA (as low as reasonably achievable) radiation protection principle.
Materials and methods
Two Hologic Selenia mammography units were studied: one with a molybdenum anode and the other with a tungsten anode. After optimisation of the image production chain, we evaluated doses in a phantom under standard conditions. In this phase, we exposed a polymethyl-methacrylate (PMMA) phantom on the two mammography units and recorded the exposure parameters used by each. The phantom was subsequently replaced by a dedicated Radcal ionisation chamber on which preliminary dose assessments were conducted. Image quality of the two systems was compared by exposing a phantom containing geometrical inserts and setting the exposure parameters used for the dose assessments on each mammography unit. Dosimetric assessments of exposure data were recorded from the mammographic examinations of approximately 400 women (1,600 exposures).
Results and Conclusions
The unit with the tungsten anode achieved a lower patient dose. As a result, the Selenia-W device was allocated to the breast screening programme.
GigaFitter: Performance at CDF and perspective for future applications. Amerio, S.; Annovi, A.; Basile, M. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 11/2010, Volume 623, Number 1
Journal Article
Peer reviewed
The Silicon Vertex Trigger (SVT) at CDF is made of two pipelined processors: the Associative Memory, which finds low-precision tracks, and the Track Fitter, which refines the track quality with high-precision fits. We describe the performance of a next-generation track fitter, the GigaFitter, which performs more than one fit per nanosecond. It is going to be inserted parasitically in the SVT to study its capability to improve data taking during the high-luminosity CDF runs. The device is based on modern FPGA technology, rich in powerful DSP arrays, to reduce the track parameter reconstruction to a few clock cycles and perform many fits in parallel. The goal of the design was to significantly reduce the processing time required for fitting and thus allow more time for the subsequent high-resolution track fitting. Preliminary results on the algorithm latency are presented. A future, more powerful version of the GigaFitter intended for LHC experiments is also discussed.
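The kind of fit that the GigaFitter evaluates in a few clock cycles is a linearized one: each track parameter is approximated as a fixed linear combination of the hit coordinates, so a fit reduces to multiply-accumulate operations that map naturally onto FPGA DSP arrays. A minimal sketch is given below; the coefficient and intercept arrays would come from a pre-computed training on the detector geometry, and the names and values here are illustrative only.

```python
def linear_track_fit(hits, coeffs, intercepts):
    """Linearized track fit in the style of SVT-class track fitters.

    Each track parameter is a fixed linear combination of the hit
    coordinates: p_j = sum_i coeffs[j][i] * hits[i] + intercepts[j].
    In hardware, each row is one bank of multiply-accumulate units,
    which is why many fits can run in parallel per clock cycle.
    """
    return [sum(c * x for c, x in zip(row, hits)) + q
            for row, q in zip(coeffs, intercepts)]
```

Because the arithmetic is a fixed dot product per parameter, latency is independent of the event and set only by the pipeline depth, which is the property the abstract's "more than one fit per nanosecond" figure relies on.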
Triggering on B-Jets at CDF II. Amerio, S.; Casarsa, M.; Cortiana, G. ...
IEEE Transactions on Nuclear Science, 06/2009, Volume 56, Number 3
Journal Article
Peer reviewed
Open access
In this paper we present a trigger algorithm able to select online events enriched in b-jets. This feature is of central interest in order to extend the physics reach for standard model and minimal supersymmetric standard model Higgs bosons decaying into a pair of b-quarks. The algorithm fully exploits the recently upgraded CDF II tracking system and Level-2 calorimeter cluster finder. These upgrades are necessary to cope with the increasing Tevatron luminosity, and they provide new and refined trigger primitives that are the key elements of our algorithm, together with the already existing Silicon Vertex Trigger. A b-hadron can travel some millimeters before decaying, and the trigger algorithm exploits this characteristic by searching for tracks displaced with respect to the primary vertex and matched to energetic jets of particles. We discuss the study and optimization of the algorithm, its technical implementation, and its performance. The new trigger provides an efficient selection for Higgs bosons decaying into a pair of b-quarks and runs up to high luminosity with an acceptable occupancy of the available bandwidth.
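The displaced-track matching idea described above can be sketched as follows: keep a jet if at least one nearby track has a transverse impact parameter d0 inside a signal window (large enough to reject prompt tracks, small enough to reject fakes and long-lived light hadrons). The impact-parameter window, pT cut, and ΔR matching cone below are illustrative placeholders, not the actual CDF trigger values.

```python
import math

def select_b_jet_candidates(tracks, jets, d0_min=0.01, d0_max=0.1,
                            pt_min=2.0, dr_max=0.4):
    """Toy displaced-track b-jet tag.

    Keep jets matched (within a cone of radius dr_max in eta-phi) to at
    least one track with pT >= pt_min whose |d0| (cm) falls inside the
    [d0_min, d0_max] signal window.  Thresholds are illustrative only.
    """
    tagged = []
    for jet in jets:
        for trk in tracks:
            # Wrap the azimuthal difference into [0, pi].
            dphi = abs(trk["phi"] - jet["phi"])
            if dphi > math.pi:
                dphi = 2 * math.pi - dphi
            dr = math.hypot(trk["eta"] - jet["eta"], dphi)
            if (trk["pt"] >= pt_min
                    and d0_min <= abs(trk["d0"]) <= d0_max
                    and dr <= dr_max):
                tagged.append(jet)
                break  # one displaced track is enough to tag this jet
    return tagged
```

The lower edge of the d0 window is what rejects prompt tracks from the primary vertex, which is the mechanism the abstract describes for keeping the trigger rate within the available bandwidth.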