The CMS (Compact Muon Solenoid) experiment is one of two large general-purpose particle physics detectors at the LHC (Large Hadron Collider). An international collaboration of nearly 3500 people operates this complex detector, whose main goal is to answer the most fundamental questions about our universe. The size and globally distributed nature of the collaboration, together with the petabytes of data collected each year, make it a significant challenge to bring users up to speed to contribute to physics analysis. CMS User Support performs this task by helping users quickly learn about CMS computing and the physics analysis tools they need. In this presentation we give an overview of its goals, organization, and use of collaborative tools to maintain the software and computing documentation and to conduct year-round tutorials on several physics tools that are prerequisites for physics analysis. We also discuss the user feedback used to evaluate this work.
The increase in LHC luminosity in 2011 brought a corresponding increase in the computing requirements for data processing. This paper describes the data processing operations during 2011 prompt reconstruction as well as the end-of-year re-processing of the full data sample. It further gives an outlook on the next evolutionary steps in the LHCb computing model for 2012 data processing and beyond.
A low-profile antenna capable of generating two coaxially propagating orbital angular momentum (OAM) modes is presented. It consists of two series-fed traveling-wave uniform circular arrays of sequentially rotated radiating slots and operates at 10.19 GHz. The smaller- and larger-radius arrays, with 4 and 8 radiating elements, produce OAM modes with indices l_OAM = 0 and +1, respectively. They are designed using a simulation-based step-by-step technique and implemented on a single layer of substrate-integrated waveguide, resulting in a simple structure. The antenna's performance is evaluated through simulations and measurements, which show close agreement. Both modes operate in left-hand circular polarization; the first mode has measured gain and axial-ratio bandwidths of about 5.9% and 3.9%, respectively. In addition, the measured impedance bandwidths for the first and second modes are around 8.2% and 7.9%, respectively. The orthogonality bandwidth of two identical antennas placed face-to-face 28 cm apart is roughly 4.5%. The antenna's far-field and near-field radiation patterns are also investigated.
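The abstract's design technique is simulation-based and not detailed here, but the underlying principle of generating an OAM mode of index l with an N-element uniform circular array (UCA) can be sketched numerically: element n is excited with phase l·2πn/N, and excitations for distinct mode indices on the same aperture are mutually orthogonal. The snippet below is a minimal illustration under idealized assumptions (isotropic elements, both modes placed on the same 8-element ring for comparison), not a model of the actual slot-array antenna.

```python
import numpy as np

def uca_phases(n_elements, l_oam):
    """Excitation phases (radians) for a uniform circular array
    generating OAM mode index l_oam: element n gets phase l*2*pi*n/N."""
    n = np.arange(n_elements)
    return 2 * np.pi * l_oam * n / n_elements

# Hypothetical comparison: both OAM modes (l = 0 and l = +1, as in the
# abstract) realized on the same idealized 8-element ring.
v0 = np.exp(1j * uca_phases(8, 0))
v1 = np.exp(1j * uca_phases(8, 1))

# Inner product of different-mode excitation vectors vanishes,
# which is the basis of OAM mode orthogonality.
overlap = abs(np.vdot(v0, v1)) / 8
print(round(overlap, 6))  # → 0.0
```

The vanishing overlap is exact for ideal sampling, since the excitation vectors are discrete Fourier harmonics of the element index; in practice (finite aperture, mutual coupling, alignment), the measured orthogonality bandwidth quoted in the abstract captures how well this holds.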
The Worldwide LHC Computing Grid (WLCG) is an innovative distributed environment, deployed using grid computing technologies, that provides computing and storage resources to the LHC experiments for data processing and physics analysis. Following the increasing demands of LHC computing toward the high-luminosity era, the experiments are engaged in an ambitious program to extend the capability of the WLCG distributed environment, for instance by including opportunistically used resources such as High-Performance Computers (HPCs), cloud platforms, and volunteer computing. In order to be used effectively by the LHC experiments, all these diverse distributed resources must be described in detail. This requires easy service discovery of shared physical resources, detailed descriptions of service configurations, and experiment-specific data structures. In this contribution, we present a high-level information component of a distributed computing environment, the Computing Resource Information Catalogue (CRIC), which aims to facilitate distributed computing operations for the LHC experiments and consolidate WLCG topology information. In addition, CRIC performs data validation and provides a coherent view and topology description to the LHC VOs for service discovery and configuration. CRIC represents the evolution of the ATLAS Grid Information System (AGIS) into a common, experiment-independent, high-level information framework. CRIC's mission is to serve not just the ATLAS Collaboration's needs for the description of the distributed environment, but also those of any other virtual organization relying on large-scale distributed infrastructure, as well as the WLCG at the global scope. The contribution describes the CRIC architecture and the implementation of its data model, collectors, user interfaces, and advanced authentication and access control components.
Since the beginning of the WLCG Project, the Spanish ATLAS computing centers have contributed reliable and stable resources, as well as personnel, to the ATLAS Collaboration. Our contribution to the ATLAS Tier-2 and Tier-1 computing resources (disk and CPU) over the last 10 years has been around 4-5%. In 2016 an international advisory committee recommended revising our contribution in line with our participation in the ATLAS experiment. In this scenario, we are optimizing the federation of three sites located in Barcelona, Madrid, and Valencia, considering that the ATLAS Collaboration has developed workflows and tools to use all of its available resources flexibly, so that the tiered structure is gradually vanishing. In this contribution, we show the evolution and technical updates of the ATLAS Spanish Federated Tier-2 and Tier-1. Developments we are involved in, such as the Event Index project, as well as the use of opportunistic resources, will help us reach this goal. We discuss the foreseen and proposed scenario toward a sustainable computing environment for the Spanish ATLAS community in the HL-LHC period.
The ATLAS Spanish Tier-1 and Tier-2 sites have more than 15 years of experience in the deployment, development, and successful operation of LHC computing components. The sites are already actively participating in, and even coordinating, emerging R&D computing activities and developing the new computing models needed for the Run 3 and High-Luminosity LHC periods. In this contribution, we present details on the integration of new components, such as High-Performance Computing resources used to execute ATLAS simulation workflows. We show the development of new techniques to improve efficiency in a cost-effective way, such as storage and CPU federations. We also explain improvements in data organization, management, and access through storage consolidation ("data lakes"), the use of data caches, and improved experiment data catalogs such as the Event Index. The design and deployment of new analysis facilities using GPUs together with CPUs, and techniques such as Machine Learning, are also presented. The Tier-1 and Tier-2 sites are, and will continue to be, contributing significant R&D in computing, evaluating different models for improving the performance of computing and data storage capacity in the High-Luminosity LHC era.