In this work we present the architectural and performance studies concerning a prototype of a distributed Tier2 infrastructure for HEP, instantiated between the two Italian sites of INFN-Roma1 and INFN-Napoli. The network infrastructure is based on a Layer-2 geographical link, provided by the Italian NREN (GARR), directly connecting the two remote LANs of the named sites. By exploiting the possibilities offered by new distributed file systems, a shared storage area with synchronous copy has been set up. The computing infrastructure, based on an OpenStack facility, uses a set of distributed hypervisors installed in both sites. The main parameter to be taken into account when managing two remote sites within a single framework is the effect of latency, due to the distance and the end-to-end service overhead. In order to understand the capabilities and limits of our setup, the impact of latency has been investigated by means of a set of stress tests, including data I/O throughput, metadata access performance evaluation and network occupancy, during the life cycle of a Virtual Machine. A set of resilience tests has also been performed, in order to verify the stability of the system in the event of hardware or software faults. The results of this work show that the reliability and robustness of the chosen architecture are effective enough to build a production system and to provide common services. This prototype can also be extended to multiple sites with small changes to the network topology, thus creating a national network of Cloud-based distributed services, in high availability (HA) over WAN.
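As a minimal, hypothetical sketch of the kind of stress test described above (the mount point, block size, and iteration counts are assumptions for illustration, not the parameters actually used in the study), a latency-sensitive probe of a synchronously replicated storage area could look like this:

```python
#!/usr/bin/env python3
"""Sketch of a latency-sensitivity probe for a shared storage area.

Hypothetical illustration only: MOUNT_POINT, block size, and iteration
counts are assumed values, not those used in the paper's stress tests.
"""
import os
import time

MOUNT_POINT = "/shared"            # assumed mount point of the replicated FS
BLOCK = b"x" * (4 * 1024 * 1024)   # 4 MiB write blocks
N_BLOCKS = 64                      # 256 MiB per run

def write_throughput(path: str) -> float:
    """Write N_BLOCKS and fsync; return throughput in MiB/s."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(N_BLOCKS):
            f.write(BLOCK)
        f.flush()
        os.fsync(f.fileno())  # force the synchronous remote copy to complete
    elapsed = time.perf_counter() - start
    return (len(BLOCK) * N_BLOCKS) / (1024 * 1024) / elapsed

def metadata_latency(directory: str, n_files: int = 1000) -> float:
    """Create, stat, and remove n_files files; return mean latency in ms."""
    start = time.perf_counter()
    for i in range(n_files):
        p = os.path.join(directory, f"meta_{i}")
        open(p, "w").close()
        os.stat(p)
        os.remove(p)
    return (time.perf_counter() - start) / n_files * 1000.0

if __name__ == "__main__":
    probe = os.path.join(MOUNT_POINT, "probe.bin")
    print(f"write throughput: {write_throughput(probe):.1f} MiB/s")
    print(f"metadata op latency: {metadata_latency(MOUNT_POINT):.2f} ms")
```

Running such a probe from each site against the shared mount would expose how the WAN round-trip time inflates metadata operations, which are typically far more latency-sensitive than bulk writes.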
In 2012, 14 Italian institutions participating in the LHC experiments won a grant from the Italian Ministry of Research (MIUR), with the aim of optimising analysis activities and, in general, the Tier2/Tier3 infrastructure. We report on the research activities carried out and on the considerable improvement in the ease of access to resources by physicists, including those with no specific computing interests. We focused on items like distributed storage federations, access to batch-like facilities, provisioning of user interfaces on demand, and cloud systems. R&D on next-generation databases, distributed analysis interfaces, and new computing architectures was also carried out. The project, ending in the first months of 2016, will produce a white paper with recommendations on best practices for data-analysis support by computing centers.
In modern Data Grid infrastructures, we increasingly face the problem of providing the running applications with fast and reliable access to large data volumes, often geographically distributed across the network. As a direct consequence, the concept of replication has been adopted by the grid community to increase data availability and maximize job throughput. To be really effective, such a process has to be driven by specific optimization strategies that define when and where replicas should be created or deleted on a per-site basis, and which replicas a job should use. These strategies have to take into account the available network bandwidth as a primary resource, prior to any consideration about storage or processing power. We present a novel replica management service, integrated within the GlueDomains active network monitoring architecture, designed and implemented within the centralized collective middleware framework of the SCoPE project to provide network-aware transfer services for data-intensive Grid applications.
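To make the idea concrete, the sketch below illustrates bandwidth-driven replica selection in its simplest form. It is a hypothetical example: the site names, bandwidth figures, and function names are assumptions, not the actual SCoPE or GlueDomains interfaces, which a real service would query for live measurements.

```python
"""Minimal sketch of network-aware replica selection.

All names and numbers below are illustrative assumptions; a real service
would obtain bandwidth estimates from active network monitoring probes.
"""

# Assumed monitoring snapshot: available bandwidth in Mb/s from the job's
# site to each site holding a replica.
BANDWIDTH_TO = {
    "site-a": 940.0,
    "site-b": 310.0,
    "site-c": 85.0,
}

# Assumed replica catalogue: logical file name -> sites holding a copy.
REPLICAS = {
    "lfn:/grid/scope/dataset-001": ["site-b", "site-c"],
    "lfn:/grid/scope/dataset-002": ["site-a", "site-c"],
}

def best_replica(lfn: str) -> str:
    """Pick the replica reachable over the widest measured pipe."""
    return max(REPLICAS[lfn], key=lambda s: BANDWIDTH_TO.get(s, 0.0))

def transfer_estimate_s(lfn: str, size_gb: float) -> float:
    """Rough transfer-time estimate (seconds) for the chosen replica."""
    mbps = BANDWIDTH_TO[best_replica(lfn)]
    return size_gb * 8 * 1024 / mbps  # GiB -> Mib, divided by Mb/s

if __name__ == "__main__":
    for lfn in REPLICAS:
        print(lfn, "->", best_replica(lfn),
              f"(~{transfer_estimate_s(lfn, 10.0):.0f} s for 10 GB)")
```

The same bandwidth map would also drive the creation and deletion decisions mentioned above: a site repeatedly chosen over a narrow pipe is a natural candidate for a new local replica.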
The large amount of data produced by the ATLAS experiment needs new computing paradigms for data processing and analysis, which involve many computing centres spread around the world. The computing workload is managed by regional federations, called “clouds”. The Italian cloud consists of a main (Tier-1) centre, located in Bologna, four secondary (Tier-2) centres, and a few smaller (Tier-3) sites. In this contribution we describe the Italian cloud facilities and the activities of data processing, analysis, simulation and software development performed within the cloud, and we discuss the tests of the new computing technologies contributing to the evolution of the ATLAS Computing Model.
ATLAS data are distributed centrally to Tier-1 and Tier-2 sites. The first stages of data selection and analysis take place mainly at Tier-2 centres, with the final, iterative and interactive, stages taking place mostly at Tier-3 clusters. The Italian ATLAS cloud consists of a Tier-1, four Tier-2s, and Tier-3 sites at each institute. Grid-enabled Tier-3s are used to test code that will then be run on a larger scale at Tier-2s. All Tier-3s offer interactive data access to their users and the possibility to run PROOF. This paper describes the hardware and software infrastructure choices taken, reports on the operational experience after 10 months of LHC data taking, and discusses site performance.
Muon identification (MUID) and accurate high-momentum measurement are crucial to fully exploit the physics potential accessible with the ATLAS experiment at the LHC. The muon energy of physics interest spans a large interval, from a few GeV, where b-physics studies dominate the physics program, up to the highest values that could indicate the presence of new physics. The muon detection system of the ATLAS detector is characterized by two high-precision tracking systems, namely the inner detector (ID) and the muon spectrometer (MS), plus a thick calorimeter that ensures safe hadron absorption, yielding high-purity muons with energy above 3 GeV. In order to combine the muon tracks reconstructed in the ID and the MS, an object-oriented software package, MUID, has been developed. The purpose of the MUID procedure is to associate tracks found in the MS with the corresponding ID track and calorimeter information, in order to identify muons at their production vertex with optimum parameter resolution. The performance of these two combined systems has been evaluated with Monte Carlo studies using single muons of fixed transverse momentum and with full physics events.
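The gain from combining two independent track measurements can be illustrated with the textbook inverse-variance weighting below; this is a generic formula for combining independent estimates, not necessarily the exact MUID fit, which also accounts for calorimeter energy loss and measurement correlations:

```latex
% Generic inverse-variance combination of the ID and MS measurements of q/p_T.
% Textbook illustration only, not the actual MUID track refit.
\[
\left(\frac{q}{p_T}\right)_{\mathrm{comb}}
  = \frac{w_{\mathrm{ID}}\left(\frac{q}{p_T}\right)_{\mathrm{ID}}
        + w_{\mathrm{MS}}\left(\frac{q}{p_T}\right)_{\mathrm{MS}}}
         {w_{\mathrm{ID}} + w_{\mathrm{MS}}},
\qquad
w_i = \frac{1}{\sigma_i^2},
\qquad
\frac{1}{\sigma_{\mathrm{comb}}^2}
  = \frac{1}{\sigma_{\mathrm{ID}}^2} + \frac{1}{\sigma_{\mathrm{MS}}^2}.
\]
```

Here each measurement is weighted by the inverse of its variance, so the combined resolution is always at least as good as the better of the two inputs; this is why the ID dominates the combination at low momentum and the MS at high momentum.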
Background: Environmental ionizing radiation has been associated with increased cancer risk by several studies. The Brazilian city of Poços de Caldas, MG, sits on a huge deposit of uranium, which was until recently mined. We performed a retrospective analysis of 310 cases of patients with breast cancer, who were exposed for at least ten years to different levels of ionizing radiation around their homes, to verify whether a correlation existed between disease incidence, prevalence, and exposure. Materials and Methods: Gamma radiation was measured on the roads and the urban street grid. We retrieved the clinical files of 310 patients from the Population-Based Cancer Registry of Poços de Caldas city, MG, Brazil and compared the local prevalence and incidence of breast cancer per city district to the local effective doses. Results: Effective doses of radiation around patients' homes varied from 0.72 to 1.30 mSv/year, with 70% of the homes exposed to doses > 1.0 mSv/year. When considering the number of cases in the study relative to the adult female population of the city, the incidence of female breast cancer was 25.9% higher than the national average incidence for the same period, 2003-2011 (68.32/100,000 versus 50.61/100,000, respectively). Conclusion: The higher incidence of breast cancer among the adult female population of Poços de Caldas may be associated with chronic exposure for ten or more years to effective doses equal to or slightly above the international reference dose of 1.0 mSv/year. Other known risk factors for breast cancer in our patients were not different from those found nationwide.
Merola, L., “The COTS software obsolescence threat,” Fifth International Conference on Commercial-off-the-Shelf (COTS)-Based Software Systems (ICCBSS'05), conference proceeding, 2006.
Software is the primary focus of integration efforts for the development of open-architected, scalable, adaptable solutions in today's defense systems of systems. Unfortunately, successful software vendors obsolete their own product versions to keep pace with the market, without regard for the military need for continued support or expandability. Although recognized by many professionals as being of equal gravity to the hardware obsolescence issue, software obsolescence has to date not enjoyed the same level of visibility. This paper reveals the obsolescence problem in development, integration, test, production, and program management environments; a different perspective compared to the typical focus on obsolescence risk management and mitigation in the end-user, operational environment. Despite the portfolio of methods implemented for the effective management of COTS hardware obsolescence on a growing number of military programs, the software obsolescence problem is not being managed or mitigated. Could software obsolescence become more overwhelming than the hardware obsolescence dilemma?