Distributed e-Infrastructure is a key component of modern big science. Service discovery in e-Science environments, such as the Worldwide LHC Computing Grid (WLCG), is a crucial functionality that relies on a service registry. In this paper we re-formulate the requirements for a service endpoint registry based on more than ten years of experience with many systems designed or used within the WLCG e-Infrastructure. To satisfy those requirements, the paper proposes a novel idea: using the existing, well-established Domain Name System (DNS) infrastructure, together with a suitable data model, as a service endpoint registry. The presented ARC Hierarchical Endpoints Registry (ARCHERY) system consists of a minimalistic data model representing services and their endpoints within e-Infrastructures, a rendering of the data model embedded into DNS records, and a lightweight software layer for DNS-record management and client-side data discovery. Our approach required minimal software development and inherits all the benefits of one of the most reliable distributed information-discovery sources on the internet: the DNS infrastructure. In particular, deployment, management, and operation of ARCHERY rely fully on DNS. Results of ARCHERY deployment use cases are provided together with a performance analysis.
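The abstract's core idea, embedding a service data model in DNS records that clients resolve and parse, can be sketched in a few lines. The key=value record format below (`u=` for a URL or pointer, `t=` for the endpoint type) is an assumption for illustration only; the actual ARCHERY rendering is defined in its documentation, and a real client would fetch the strings via a DNS TXT query rather than from a list:

```python
# Minimal sketch of client-side ARCHERY-style discovery: the registry
# stores service entries as short key=value strings in DNS TXT records.
# Record keys (u=, t=) and the example hostnames are illustrative
# assumptions, not the official ARCHERY specification.

def parse_archery_record(txt: str) -> dict:
    """Parse one TXT-record string such as
    'u=https://ce.example.org t=org.nordugrid.arcrest'."""
    entry = {}
    for token in txt.split():
        key, _, value = token.partition("=")
        entry[key] = value
    return entry

# In a real client these strings would come from a DNS TXT lookup.
records = [
    "u=https://ce1.example.org:443/arex t=org.nordugrid.arcrest",
    "u=dns://child.example.org t=archery.group",  # pointer to a sub-tree
]
endpoints = [parse_archery_record(r) for r in records]
print(endpoints[0]["u"])  # https://ce1.example.org:443/arex
```

Because pointer entries can name further DNS zones, a client can walk the hierarchy with nothing but ordinary DNS queries, which is what lets deployment and operation lean entirely on existing DNS infrastructure.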
The Worldwide LHC Computing Grid (WLCG) today comprises a range of different resource types, such as cloud centers, large and small HPC centers, and volunteer computing, as well as traditional grid resources. The Nordic Tier 1 (NT1) is a WLCG computing infrastructure distributed over the Nordic countries. The NT1 deploys the NorduGrid ARC-CE, a non-intrusive and lightweight compute element originally developed to cater for HPC centers where no middleware could be installed on the worker nodes. The NT1 runs ARC in the native NorduGrid mode, which, contrary to the pilot mode, leaves job data transfers to ARC. ARC's data transfer capabilities, together with the ARC Cache, are its most important features.
In this article we describe the data staging and cache functionality of the ARC-CE set up as an edge service to an HPC or cloud resource, and show the gain in efficiency this model provides compared to a traditional pilot model, especially for sites with remote storage.
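The caching idea behind ARC data staging can be illustrated with a short sketch: cached input files are keyed by a hash of their source URL, so a second job requesting the same URL reuses the local copy instead of triggering another remote transfer. The directory layout, hash choice, and `fetch` callback below are illustrative placeholders, not ARC's actual API:

```python
# Sketch of the ARC Cache idea (illustrative only, not ARC's API):
# a file is transferred from remote storage at most once; later jobs
# that request the same source URL are served from the local cache.

import hashlib
import os
import shutil
import tempfile

# Placeholder cache location; ARC configures this per compute element.
CACHE_DIR = tempfile.mkdtemp(prefix="arc-cache-demo-")

def cached_fetch(url: str, dest: str, fetch) -> bool:
    """Deliver `url` to `dest`, downloading only on a cache miss.
    `fetch(url, path)` is a placeholder for the remote transfer.
    Returns True when the file was served from the cache."""
    key = hashlib.sha1(url.encode()).hexdigest()  # cache key from source URL
    cache_path = os.path.join(CACHE_DIR, key)
    hit = os.path.exists(cache_path)
    if not hit:
        fetch(url, cache_path)       # remote transfer happens once
    shutil.copy(cache_path, dest)    # each job gets its own copy (or a link)
    return hit
```

For a site with remote storage this is exactly where the efficiency gain comes from: popular input datasets cross the wide-area network once per cache lifetime rather than once per job, and jobs only start once their inputs are already local.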
Evolution of the WLCG Information Infrastructure. Andreeva, Julia; Anisenkov, Alexey; Di Girolamo, Alessandro, et al.
EPJ Web of Conferences, 2020, Volume 245
Journal Article, Conference Proceeding
Peer-reviewed
Open access
The WLCG project aimed to develop, build, and maintain a global computing facility for storage and analysis of the LHC data. While currently most of the LHC computing resources are provided by classical grid sites, over recent years the LHC experiments have made increasing use of public clouds and HPCs, and this trend will certainly continue. The heterogeneity of the LHC computing resources is not limited to the procurement mode; it also implies a variety of storage solutions and computer architectures, which represent new challenges for the topology and configuration description of the LHC computing resources. The WLCG information infrastructure has to evolve in order to meet these challenges and to be flexible enough to follow technology innovation. It should provide a complete and reliable description of all types of storage and computing resources to ensure their effective use. This implies changes at all levels, starting from the primary information providers, through data publishing and the transport mechanism, to the central aggregators. This paper describes the proposed changes in the WLCG information infrastructure, their implementation, and deployment.
Building a Distributed Computing System for LDMX. Bryngemark, Lene Kristian; Cameron, David; Dutta, Valentina, et al.
EPJ Web of Conferences, 01/2021, Volume 251
Conference Proceeding, Journal Article
Peer-reviewed
Open access
Particle physics experiments rely extensively on computing and data services, making e-infrastructure an integral part of the research collaboration. Constructing and operating distributed computing can, however, be challenging for a smaller-scale collaboration. The Light Dark Matter eXperiment (LDMX) is a planned small-scale accelerator-based experiment to search for dark matter in the sub-GeV mass region. Finalizing the design of the detector relies on Monte Carlo simulation of the expected physics processes. A distributed computing pilot project was proposed to better utilize available resources at the collaborating institutes and to improve scalability and reproducibility. This paper outlines the chosen lightweight distributed solution, presenting the requirements, the component integration steps, and the experience of using a pilot system for tests with large-scale simulations. The system leverages existing technologies wherever possible, minimizing the need for software development, and deploys only non-intrusive components at the participating sites. The pilot proved that integrating existing components can dramatically reduce the effort needed to build and operate a distributed e-infrastructure, making it attainable even for smaller research collaborations.
The next-generation ARC middleware. Appleton, O.; Cameron, D.; Cernak, J., et al.
Annales des Télécommunications, 2010, Volume 65, Issue 11-12
Journal Article
Peer-reviewed
Open access
The Advanced Resource Connector (ARC) is a lightweight, non-intrusive, simple yet powerful Grid middleware capable of connecting highly heterogeneous computing and storage resources. ARC aims at providing general-purpose, flexible, collaborative computing environments suitable for a range of uses, both in science and business. The server side offers the fundamental job execution management, information, and data capabilities required for a Grid. Users are provided with an easy-to-install client that offers a basic toolbox for job and data management. The KnowARC project developed the next-generation ARC middleware, implemented as Web Services with the aim of standards-compliant interoperability.
As computational Grids move away from the prototyping stage, reliability, performance, and ease of use and maintenance become focus areas for their adoption. In this paper, we describe the ARC (Advanced Resource Connector) Grid middleware, in which these issues have been given special consideration.
We present an in-depth view of the existing components of ARC, and discuss some of the new components, functionalities, and enhancements currently under development. This paper also describes the architectural and technical choices that have been made to ensure scalability, stability, and high performance. The core components of ARC have already been thoroughly tested in demanding production environments, where ARC has been in use since 2002. The main goal of this paper is to provide a first comprehensive description of ARC.
A search for heavy resonances decaying into a pair of Z bosons leading to ℓ⁺ℓ⁻ℓ′⁺ℓ′⁻ and ℓ⁺ℓ⁻νν̄ final states, where ℓ stands for either an electron or a muon, is presented. The search uses proton–proton collision data at a centre-of-mass energy of 13 TeV, collected from 2015 to 2018 and corresponding to an integrated luminosity of 139 fb⁻¹ recorded by the ATLAS detector during Run 2 of the Large Hadron Collider. Different mass ranges spanning 200 GeV to 2000 GeV for the hypothetical resonances are considered, depending on the final state and model. In the absence of a significant observed excess, the results are interpreted as upper limits on the production cross section of a spin-0 or spin-2 resonance. The upper limits for the spin-0 resonance are translated into exclusion contours in the context of Type-I and Type-II two-Higgs-doublet models, and the limits for the spin-2 resonance are used to constrain the Randall–Sundrum model with an extra dimension giving rise to spin-2 graviton excitations.