ATLAS fast physics monitoring: TADA Sabato, G; Elsing, M; Gumpert, C ...
Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 9
Journal Article
Peer reviewed
Open access
The ATLAS experiment at the LHC has been recording data from proton-proton collisions at a 13 TeV center-of-mass energy since spring 2015. The collaboration uses a fast physics monitoring framework (TADA) to automatically perform a broad range of fast searches for early signs of new physics and to monitor data quality across the year, with the full analysis-level calibrations applied to the rapidly growing data set. TADA is designed to provide fast feedback directly after the collected data have been fully calibrated and processed at the Tier-0. The system can monitor a large range of physics channels, offline data quality and physics performance quantities. TADA output is available on a website accessible to the whole collaboration, updated twice a day with data from newly processed runs. Hints of potentially interesting physics signals or performance issues identified in this way are reported and followed up by the physics or combined performance groups. The note also reports on the technical aspects of TADA: the software structure used to obtain the input TAG files, the framework workflow and structure, and the webpage and its implementation.
During the 2013-2014 shutdown of the Large Hadron Collider, ATLAS switched to a new event data model for analysis, called the xAOD. A key feature of this model is the separation of the object data from the objects themselves (the 'auxiliary store'). Rather than being stored as member variables of the analysis classes, all object data are stored separately, as vectors of simple values. Thus, the data are stored in a 'structure of arrays' format, while the user can still access them as an 'array of structures'. This organization allows for on-demand partial reading of objects, the selective removal of object properties, and the addition of arbitrary user-defined properties in a uniform manner. It also improves performance by increasing the locality of memory references in typical analysis code. The resulting data structures can be written to ROOT files, with data properties represented as simple ROOT tree branches. This paper focuses on the design and implementation of the auxiliary store and its interaction with ROOT.
These proceedings give a summary of the many software upgrade projects undertaken to prepare ATLAS for the challenges of Run-2 of the LHC. These projects include a significant reduction, compared to 2012, of the CPU time required for the reconstruction of real data with high average pile-up event rates. This is required to meet the challenges of the expected increase in pile-up and the higher data-taking rate of up to 1 kHz. By far the most ambitious project is the implementation of a completely new analysis model, based on a new ROOT-readable reconstruction format, the xAOD. The new model also includes a reduction framework, based on a train model, to centrally produce skimmed data samples, as well as an analysis framework. These proceedings close with a brief overview of future software projects and plans leading up to the coming Long Shutdown 2, the next major ATLAS software upgrade phase.
The new ATLAS track reconstruction (NEWT) Cornelissen, T; Elsing, M; Gavrilenko, I ...
Journal of Physics: Conference Series, 07/2008, Volume 119, Issue 3
Journal Article
Peer reviewed
Open access
The track reconstruction of modern high-energy physics experiments is a very complex task that puts stringent requirements on the software realisation. The ATLAS track reconstruction software was in the past dominated by a collection of individual packages, each incorporating a different intrinsic event data model, different data-flow sequences and calibration data. Recently, the ATLAS track reconstruction has undergone a major design revolution to ensure maintainability over the long lifetime of the ATLAS experiment and the flexibility needed for the startup phase. The entire software chain has been re-organised into modular components and a common event data model has been deployed. A completely new track reconstruction has been established, concentrating on common tools designed to be used by both ATLAS tracking devices, the Inner Detector and the Muon System. It has already been used in many large-scale tests with data from Monte Carlo simulation and from detector commissioning projects such as the 2004 combined test beam and cosmic-ray events. This document concentrates on the technical and conceptual details of the newly developed track reconstruction.
FCC Physics Opportunities Altmannshofer, W.; Arsenyev, S. A.; Aune, S. ...
The European Physical Journal C, Particles and Fields, 2019, Volume 79, Issue 6
Journal Article, Publication
Peer reviewed
Open access
We review the physics opportunities of the Future Circular Collider, covering its e+e-, pp, ep and heavy-ion programmes. We describe the measurement capabilities of each FCC component, addressing the study of electroweak, Higgs and strong interactions, the top quark and flavour, as well as phenomena beyond the Standard Model. We highlight the synergy and complementarity of the different colliders, which will contribute to a uniquely coherent and ambitious research programme, providing an unmatchable combination of precision and sensitivity to new physics.
The DELPHI Silicon Tracker in the global pattern recognition Elsing, M.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 06/2000, Volume 447, Issue 1
Journal Article
Peer reviewed
Open access
ALEPH and DELPHI were the first experiments to operate a silicon vertex detector at LEP. During the past 10 years of data taking the DELPHI Silicon Tracker was upgraded three times, both to follow the different tracking requirements of LEP 1 and LEP 2 and to improve the tracking performance. Several steps in the development of the pattern recognition software were taken in order to understand and fully exploit the silicon tracker information. This article gives an overview of the final algorithms and concepts of track reconstruction using the Silicon Tracker in DELPHI.
The ATLAS experiment uses a complex trigger strategy to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: the physics stream, the express stream, the calibration stream, and the diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream, selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and potentially gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework on dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and the performance issues.
Within the ATLAS hierarchical, multi-tier computing infrastructure, the Tier-0 centre at CERN is mainly responsible for the prompt processing of the raw data coming from the online DAQ system, for archiving the raw and derived data on tape, for registering the data with the relevant catalogues, and for distributing them to the associated Tier-1 centres. The Tier-0 is already fully functional. It has been successfully participating in all cosmic and commissioning data taking since May 2007, and was ramped up to its foreseen full size, performance and throughput for the cosmic (and short single-beam) run periods between July and October 2008. Data and work flows for collision data taking were exercised in several "Full Dress Rehearsals" (FDRs) in the course of 2008. The transition from an expert-based to a shifter-based system was successfully established in July 2008. This article gives an overview of the Tier-0 system, its data and work flows, and its operations model. It reviews the operational experience gained in cosmic, commissioning and FDR exercises during the past year, and gives an outlook on planned developments and the evolution of the system towards first collision data taking, expected now in late autumn 2009.
Measurements of the forward-backward production asymmetry of heavy quarks in Z decays provide a precise determination of . The asymmetries are sensitive to QCD effects, in particular hard gluon radiation. In this paper QCD corrections for and are discussed. The interplay between the experimental techniques used to measure the asymmetries and the QCD effects is investigated using simulated events. A procedure to estimate the correction needed for experimental measurements is proposed, and some specific examples are given.
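As background to this abstract (the definition below is standard textbook material, not taken from the paper itself), the forward-backward asymmetry for a quark flavour q counts events with the quark emitted into the forward versus backward hemisphere relative to the electron beam direction:

```latex
% Standard definition of the forward-backward asymmetry for quark flavour q:
% N_F (N_B) is the number of events with the quark in the forward (backward)
% hemisphere with respect to the incoming electron direction.
A_{\mathrm{FB}}^{q} = \frac{N_F - N_B}{N_F + N_B}
```

Hard gluon radiation smears the quark direction relative to the underlying fermion axis, which is why the QCD corrections discussed in the paper depend on how the experimental analysis reconstructs that axis.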