Review of CERN Data Centre Infrastructure. Andrade, P; Bell, T; van Eldik, J, et al.
Journal of Physics: Conference Series, 01/2012, Volume 396, Issue 4
Journal Article · Peer-reviewed · Open access
The CERN Data Centre is reviewing strategies for optimizing the use of the existing infrastructure and expanding to a new data centre by studying how other large sites are operated. Over the past six months, CERN has been investigating modern, widely used tools and procedures for virtualisation, clouds and fabric management in order to reduce operational effort, increase agility and support unattended remote data centres. This paper details the project's motivations, current status and areas for future investigation.
Future Approach to Tier-0 Extension. Jones, B; McCance, G; Cordeiro, C, et al.
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 8
Journal Article · Peer-reviewed · Open access
The current tier-0 processing at CERN is done on two managed sites, the CERN computer centre and the Wigner computer centre. With the proliferation of public cloud resources at increasingly competitive prices, we have been investigating how to transparently increase our compute capacity to include these providers. The approach taken has been to integrate these resources using our existing deployment and computer-management tools, and to expose them to users as part of the same site. The paper describes the architecture, the toolset and the current production experience of this model.
CERN Computing in Commercial Clouds. Cordeiro, C; Field, L; Garrido Bear, B, et al.
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 8
Journal Article · Peer-reviewed · Open access
By the end of 2016, more than 10 million core-hours of computing resources had been delivered by several commercial cloud providers to the four LHC experiments to run their production workloads, from simulation to full-chain processing. In this paper we describe the experience gained at CERN in procuring and exploiting commercial cloud resources for the computing needs of the LHC experiments. The mechanisms used for provisioning, monitoring, accounting, alarming and benchmarking are discussed, as well as the involvement of the LHC collaborations in managing the experiments' workflows within a multicloud environment.
Scaling Agile Infrastructure to People. Jones, B; McCance, G; Traylen, S, et al.
Journal of Physics: Conference Series, 01/2015, Volume 664, Issue 2
Journal Article · Peer-reviewed · Open access
When CERN migrated its infrastructure away from homegrown fabric-management tools to emerging industry-standard open-source solutions, the immediate technical challenges and motivation were clear. The move to a multi-site cloud computing model meant that the tool chains growing around this ecosystem were a good choice; the challenge was how to leverage them. The use of open-source tools brings challenges beyond mere deployment. Homegrown software, for all the deficiencies identified at the outset of the project, has the benefit of growing with the organization. This paper examines the challenges of adapting open-source tools to the needs of the organization, particularly in the areas of multi-group development and security. Additionally, the increase in the scale of the infrastructure required changes to how change management was organized. Continuous integration techniques are used to manage the rate of change across multiple groups, and the tools and workflow for this are examined.
The Grid community uses two well-established registration services, which allow users to be authenticated under the auspices of Virtual Organizations (VOs). The Virtual Organization Membership Service (VOMS), developed in the context of the Enabling Grids for E-sciencE (EGEE) project, is an Attribute Authority service that issues attributes expressing a subject's membership information within a VO. VOMS allows users to be partitioned into groups and assigned roles and free-form attributes, which are then used to drive authorization decisions. The VOMS administrative application, VOMS-Admin, manages and populates the VOMS database with membership information. The Virtual Organization Management Registration Service (VOMRS), developed at Fermilab, extends the basic registration and management functionality present in VOMS-Admin. It implements a registration workflow that requires VO usage-policy acceptance and membership approval by administrators. VOMRS supports management of multiple grid certificates, handling of users' requests for group and role assignments, and membership status. VOMRS is capable of interfacing to local systems holding personnel information (e.g. the CERN Human Resources Database) and of pulling relevant member information from them. VOMRS synchronizes the relevant subset of this information with VOMS. The recent development of new features in VOMS-Admin raises the possibility of rationalizing support and converging on a single solution by continuing and extending existing collaborations between EGEE and OSG. Such a strategy is supported by WLCG, OSG, US CMS, US ATLAS and other stakeholders worldwide. In this paper, we analyze the features in use by major experiments and the registration use cases addressed by a mature single solution.
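The group and role attributes described in this abstract are conventionally encoded by VOMS as Fully Qualified Attribute Names (FQANs) of the form `/vo/group[/Role=role]`. A minimal sketch of how such attributes might drive an authorization decision follows; the specific FQAN strings, function names and matching policy are illustrative assumptions, not taken from any real VO or VOMS implementation:

```python
# Illustrative sketch: mapping VOMS-style FQANs to an authorization
# decision. FQAN format assumed: /<vo>[/<subgroup>...][/Role=<role>].

def parse_fqan(fqan: str):
    """Split a FQAN into its VO/group path and an optional role."""
    path, _, role = fqan.partition("/Role=")
    return path, (role or None)

def is_authorized(user_fqans, required_group, required_role=None):
    """Grant access if any issued attribute matches the required group
    (or a subgroup of it) and, when given, the required role."""
    for fqan in user_fqans:
        path, role = parse_fqan(fqan)
        in_group = path == required_group or path.startswith(required_group + "/")
        role_ok = required_role is None or role == required_role
        if in_group and role_ok:
            return True
    return False

# Hypothetical attributes a VOMS server might have issued:
attrs = ["/cms/production/Role=operator", "/cms/users"]

print(is_authorized(attrs, "/cms"))                         # True
print(is_authorized(attrs, "/cms/production", "operator"))  # True
print(is_authorized(attrs, "/cms", "admin"))                # False
```

The prefix-matching rule (membership in a group implies membership in its parent VO) is one common convention; a real deployment would take its policy from the service configuration rather than hard-coding it.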
High-energy physics experiments, such as the Compact Muon Solenoid (CMS) at the Large Hadron Collider (LHC), have large-scale data-processing computing requirements, and the grid has been chosen as the solution. One important challenge when using the grid for large-scale data processing is monitoring the large numbers of jobs executing simultaneously at multiple remote sites. The Relational Grid Monitoring Architecture (R-GMA) is a monitoring and information-management service for distributed resources based on the GMA of the Global Grid Forum. We report on the first measurements of R-GMA as part of a monitoring architecture to be used for batch submission of multiple Monte Carlo simulation jobs running on a CMS-specific LHC Computing Grid test bed. Monitoring information was transferred in real time from remote execution nodes back to the submitting host and stored in a database. In scalability tests, the job-submission rates supported by successive releases of R-GMA improved significantly, approaching those expected in full-scale production.
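The GMA on which R-GMA is based is essentially a producer/consumer pattern: execution nodes publish monitoring tuples, and the submitting host consumes them into a queryable store. The toy sketch below shows only that pattern in miniature; the in-process queue, field names and job identifiers are illustrative stand-ins for R-GMA's distributed registry and SQL-like streaming interface:

```python
# Toy producer/consumer sketch of GMA-style job monitoring.
# Worker nodes publish job-status tuples; the submit host drains
# them into a local store. All names here are illustrative.
import queue

bus = queue.Queue()  # stands in for the monitoring transport

def publish(job_id, site, status):
    """Producer side: a worker node reports its job status."""
    bus.put({"job_id": job_id, "site": site, "status": status})

def consume_all(store):
    """Consumer side: drain pending tuples into the submit host's store,
    keeping the latest status per job."""
    while not bus.empty():
        rec = bus.get()
        store[rec["job_id"]] = rec

publish("job-001", "cern", "RUNNING")
publish("job-002", "ral", "DONE")

db = {}
consume_all(db)
print(sorted(db))               # ['job-001', 'job-002']
print(db["job-002"]["status"])  # DONE
```

The scalability measurements in the paper correspond, in these terms, to how fast tuples can move from `publish` to the consumer's store as the number of concurrent jobs grows.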
To better understand the influence of Rivera plate kinematics on the geodynamic evolution of western Mexico, we use more than 1400 crossings of seafloor-spreading magnetic lineations along the Pacific–Rivera rise and northern Mathematician ridge to solve for rotations of the Rivera plate relative to the underlying mantle and the Pacific and North American plates at 14 times since 9.9 Ma. Our comparison of magnetic-anomaly crossings from the undeformed Pacific plate to their counterparts on the Rivera plate indicates that significant areas of the Rivera plate have deformed since 9.9 Ma. Dextral shear along the southern edge of the plate from 3.3–2.2 Ma, during a regional plate-boundary reorganization, deformed the Rivera plate farther into its interior than previously recognized. In addition, seafloor located north of two rupture zones within the Rivera plate sutured to North America after 1.5 Ma. Anomaly crossings from these two deformed regions thus cannot be used to reconstruct motion of the Rivera plate. Finite rotations that best reconstruct Pacific plate anomaly crossings onto their undeformed counterparts on the Rivera plate yield stage spreading rates that decrease gradually by 10% between 10 and 3.6 Ma, decrease rapidly by 20% after ∼3.6 Ma, and recover after 1 Ma. The slowdown in Pacific–Rivera seafloor spreading at 3.6 Ma coincided with the onset of dextral shear across the then-incipient southern boundary of the Rivera plate with the Pacific plate. The available evidence indicates that the Rivera plate has been an independent microplate since at least 10 Ma, contrary to published assertions that it fragmented from the Cocos plate at ∼5 Ma. Motion of the Rivera plate relative to North America has changed significantly since 10 Ma, in concert with significant changes in Pacific–Rivera motion. A significant and robust feature of Rivera–North America motion not previously recognized is the cessation of margin-normal convergence, and thus subduction, from 2.6 to 1.0 Ma along the entire plate boundary, followed by a resumption of trench-normal subduction along the southern half of the Rivera–North America plate boundary after 1.0 Ma. Motion of the Rivera plate relative to the underlying mantle since 10 Ma has oscillated between periods of landward motion and seaward motion. The evidence suggests that the torque exerted by slab pull on this young, hot oceanic plate is either minimal or is effectively counterbalanced by forces that resist its motion.
Four years have passed since the first prototype tools and tests started to monitor the Worldwide LHC Computing Grid (WLCG) services. One of these tools is the Service Availability Monitoring (SAM) framework, which superseded the SFT tool and has become a keystone of the monthly WLCG availability and reliability computations. During this time, the grid has evolved into a robust, production-level infrastructure, in no small part thanks to the extensive monitoring infrastructure, which includes testing, visualization and reporting. Experience gained with monitoring has led to emerging grid-monitoring standards and provided valuable input for the Operations Automation Strategy aimed at the regionalization of monitoring services. This change in scope, together with an ever-increasing number of services and infrastructures, makes enhancements to the architecture of the existing monitoring tools a necessity. This paper describes the present architecture of SAM, an enhanced and distributed model for monitoring WLCG services, and the changes required in SAM to adopt this new model within the EGEE-III project.
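The monthly availability and reliability figures that SAM test results feed into are, in simplified form, ratios over a site's tested time: availability is the fraction of total time a service was up, while reliability excludes scheduled downtime from the denominator. The sketch below illustrates that arithmetic; the status labels and the equal-interval sampling model are assumptions made for the example, not the exact WLCG algorithm:

```python
# Illustrative availability/reliability arithmetic (simplified):
#   availability = time_up / total_time
#   reliability  = time_up / (total_time - scheduled_downtime)

def metrics(samples):
    """samples: equal-length intervals labelled 'UP', 'DOWN',
    or 'SCHED_DOWN' (scheduled downtime)."""
    total = len(samples)
    up = samples.count("UP")
    sched = samples.count("SCHED_DOWN")
    availability = up / total
    known = total - sched
    reliability = up / known if known else 1.0
    return availability, reliability

# A hypothetical month of 30 day-slots: 27 up, 1 down, 2 in scheduled downtime.
month = ["UP"] * 27 + ["DOWN"] * 1 + ["SCHED_DOWN"] * 2
a, r = metrics(month)
print(f"availability={a:.3f} reliability={r:.3f}")
# availability=0.900 reliability=0.964
```

Reliability is always at least as high as availability, since scheduled downtime is not held against the site; the two coincide when no downtime was scheduled.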