Computing plays a significant role in all areas of high energy physics. The Snowmass 2021 CompF4 topical group's scope is facilities R&D, where we consider "facilities" as the computing hardware and software infrastructure inside the data centers plus the networking between data centers, irrespective of who owns them and what policies are applied for using them. In other words, it includes commercial clouds, federally funded High Performance Computing (HPC) systems for all of science, and systems funded explicitly for a given experimental or theoretical program. This topical group report summarizes the findings and recommendations for the storage, processing, networking and associated software service infrastructures for future high energy physics research, based on the discussions organized through the Snowmass 2021 community study.
Using Xrootd to Federate Regional Storage
Bauerdick, L.; Benjamin, D.; Bloom, K.; et al.
Journal of Physics: Conference Series, 01/2012, Volume 396, Issue 4. Journal article, peer-reviewed, open access.
While the LHC data movement systems have demonstrated the ability to move data at the necessary throughput, we have identified two weaknesses: the latency for physicists to access data and the complexity of the tools involved. To address these, both ATLAS and CMS have begun to federate regional storage systems using Xrootd. Xrootd, referring to a protocol and implementation, allows us to provide data access to all disk-resident data from a single virtual endpoint. This “redirector” discovers the actual location of the data and redirects the client to the appropriate site. The approach is particularly advantageous since typically the redirection requires much less than 500 milliseconds and the Xrootd client is conveniently built into LHC physicists’ analysis tools. Currently, there are three regional storage federations - a US ATLAS region, a European CMS region, and a US CMS region. The US ATLAS and US CMS regions include their respective Tier 1, Tier 2 and some Tier 3 facilities; a large percentage of experimental data is available via the federation. Additionally, US ATLAS has begun studying low-latency regional federations of close-by sites. From the base idea of federating storage behind an endpoint, the implementations and use cases diverge. The CMS software framework is capable of efficiently processing data over high-latency links, so using the remote site directly is comparable to accessing local data. The ATLAS processing model allows a broad spectrum of user applications with varying degrees of performance with regard to latency; a particular focus has been optimizing n-tuple analysis. Both VOs use GSI security. ATLAS has developed a mapping of VOMS roles to specific file system authorizations, while CMS has developed callouts to the site's mapping service. Each federation presents a global namespace to users. For ATLAS, the global-to-local mapping is based on a heuristic-based lookup from the site's local file catalog, while CMS does the mapping based on translations given in a configuration file. We will also cover the latest usage statistics and interesting use cases that have developed over the previous 18 months.
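As a concrete illustration of the access pattern described above, the sketch below opens a file through a federation redirector from a ROOT macro. It is a minimal example, not code from the paper; the redirector hostname, file path, and tree name are placeholders.

// xrootd_read.C -- minimal sketch of reading a file via an Xrootd federation.
// Assumptions: "redirector.example.org", the /store path, and the tree name
// "Events" are placeholders; any federation endpoint with a disk-resident
// copy of the file would behave the same way.
#include "TFile.h"
#include "TTree.h"
#include <iostream>

void xrootd_read()
{
   // Opening a root:// URL makes the Xrootd client contact the redirector,
   // which discovers the site actually holding the file and redirects the
   // client there; from then on the file behaves like any local TFile.
   TFile *f = TFile::Open("root://redirector.example.org//store/data/sample.root");
   if (!f || f->IsZombie()) {
      std::cerr << "Could not open file via the federation redirector\n";
      return;
   }

   TTree *t = nullptr;
   f->GetObject("Events", t);   // "Events" is an assumed tree name
   if (t) std::cout << "Entries: " << t->GetEntries() << std::endl;

   f->Close();
}

The analysis code never needs to know which site serves the bytes; changing the URL from a local path to the redirector is the only modification required.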
Solid State Drives (SSDs) are a promising storage technology for High Energy Physics parallel analysis farms. Their combination of low random access time and relatively high read speed is very well suited for situations where multiple jobs concurrently access data located on the same drive. SSDs also have lower energy consumption and higher vibration tolerance than Hard Disk Drives (HDDs), which makes them an attractive choice in many applications ranging from personal laptops to large analysis farms. The Parallel ROOT Facility - PROOF is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF is especially efficient together with distributed local storage systems like Xrootd, when data are distributed over computing nodes. In such an architecture the local disk subsystem I/O performance becomes a critical factor, especially when computing nodes use multi-core CPUs. We will discuss our experience with SSDs in the PROOF environment. We will compare the performance of HDDs with SSDs in I/O-intensive analysis scenarios. In particular we will discuss how PROOF system performance scales with the number of simultaneously running analysis jobs.
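The concurrency-scaling question raised above can be probed with a very simple standalone test. The sketch below is not the authors' benchmark: it starts one reader thread per input file and reports the aggregate sequential-read throughput, so it can be run against file sets placed on an HDD and on an SSD. It does not reproduce the random-access pattern of real analysis jobs, and the operating system page cache should be dropped between runs for the numbers to reflect the device.

// io_scaling.cpp -- hypothetical concurrent-read throughput test (not from the paper).
// Build with: g++ -O2 -pthread io_scaling.cpp -o io_scaling
#include <atomic>
#include <chrono>
#include <fstream>
#include <iostream>
#include <string>
#include <thread>
#include <vector>

int main(int argc, char *argv[])
{
   if (argc < 2) {
      std::cerr << "usage: " << argv[0] << " file1 [file2 ...]\n";
      return 1;
   }

   std::atomic<long long> totalBytes{0};

   // Each "job" reads its file sequentially in 1 MB blocks.
   auto worker = [&totalBytes](const std::string &path) {
      std::ifstream in(path, std::ios::binary);
      std::vector<char> buf(1 << 20);
      while (in.read(buf.data(), buf.size()) || in.gcount() > 0)
         totalBytes += in.gcount();
   };

   auto start = std::chrono::steady_clock::now();
   std::vector<std::thread> jobs;
   for (int i = 1; i < argc; ++i)        // one reader thread per file
      jobs.emplace_back(worker, std::string(argv[i]));
   for (auto &j : jobs) j.join();
   auto stop = std::chrono::steady_clock::now();

   double sec = std::chrono::duration<double>(stop - start).count();
   std::cout << jobs.size() << " concurrent readers, "
             << totalBytes / 1e6 << " MB in " << sec << " s ("
             << totalBytes / 1e6 / sec << " MB/s aggregate)\n";
   return 0;
}

Repeating the run with an increasing number of files mirrors the scaling study described in the abstract: on an HDD the aggregate rate typically degrades as seeks multiply, while an SSD sustains it much longer.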
The Parallel ROOT Facility – PROOF is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems – like Xrootd, when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier 3 type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We will discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular we will discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We will also describe PROOF integration with the ATLAS distributed data management system and prospects of running PROOF on geographically distributed analysis farms.
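A minimal interactive PROOF session of the kind described above looks roughly like the following ROOT macro. This is a generic sketch rather than the ATLAS production setup; the tree name, file URLs, and selector name are assumptions.

// proof_session.C -- minimal sketch of an interactive PROOF session.
// Assumptions: "MySelector" is a user-written TSelector and the dataset
// URLs below are placeholders.
#include "TChain.h"
#include "TProof.h"

void proof_session()
{
   // Start a PROOF session; "lite://" runs workers on the local multi-core
   // machine, while a "user@master" URL would connect to a farm.
   TProof::Open("lite://");

   // Build the chain of input files; on a farm these would typically be
   // Xrootd URLs so each worker reads its assigned files directly.
   TChain chain("Events");                                      // assumed tree name
   chain.Add("root://redirector.example.org//store/data/sample_*.root");

   // Hand the chain to PROOF and run the selector on the workers; progress
   // and intermediate results are reported back at the ROOT prompt.
   chain.SetProof();
   chain.Process("MySelector.C+");
}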
The anomalous magnetic moment of the negative muon has been measured to a precision of 0.7 parts per million (ppm) at the Brookhaven Alternating Gradient Synchrotron. This result is based on data collected in 2001, and is over an order of magnitude more precise than the previous measurement for the negative muon. The result, a(μ⁻) = 11 659 214(8)(3) × 10⁻¹⁰ (0.7 ppm), where the first uncertainty is statistical and the second systematic, is consistent with previous measurements of the anomaly for the positive and the negative muon. The average of the measurements of the muon anomaly is a(μ)(exp) = 11 659 208(6) × 10⁻¹⁰ (0.5 ppm).
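The abstract quotes the average of the positive- and negative-muon measurements without spelling out the combination; the standard procedure for independent measurements a_i ± σ_i is the inverse-variance weighted mean sketched below (an assumption here, since the experiment's own combination may treat correlated systematics differently):

\bar{a}_\mu = \frac{\sum_i a_i/\sigma_i^2}{\sum_i 1/\sigma_i^2},
\qquad
\sigma_{\bar{a}_\mu} = \left(\sum_i 1/\sigma_i^2\right)^{-1/2}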
Electromagnetic calorimeters for the BNL muon (g−2) experiment
Sedykh, S.A.; Blackburn, J.R.; Bunker, B.D.; et al.
Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 12/2000, Volume 455, Issue 2. Journal article, peer-reviewed.
A set of 24 lead/scintillating fiber electromagnetic calorimeters has been constructed for the new muon (g−2) experiment at the Brookhaven AGS. These calorimeters were designed to provide very good energy resolution for electrons up to 3 GeV while also yielding excellent timing information. Special requirements in the experiment related to the uniformity of response, the short-term gain and timing stability, and the neutron background led to several unusual design features. The calorimeters were tested and calibrated with electrons in the energy range 0.5–4.0 GeV and have been installed and used in the muon storage ring. The design criteria, construction, and performance of the system are described.
Improved limit on the muon electric dipole moment
Bennett, G. W.; Bousquet, B.; Brown, H. N.; et al.
Physical Review D: Particles, Fields, Gravitation, and Cosmology, 09/2009, Volume 80, Issue 5. Journal article.