The Fast TracKer (FTK) is an ATLAS trigger upgrade built for full-event, low-latency, high-rate tracking. The FTK core, made of 9U VME boards, performs the most demanding computational task. The associative memory board (AMB) serial link processor and the auxiliary card (AUX), plugged into the front and back sides of the same VME slot, constitute the processing unit (PU), which finds tracks using hits from eight layers of the inner detector. The PU works in a pipeline with the second stage board (SSB), which finds 12-layer tracks by adding extra hits to the identified tracks. In the designed configuration, 16 PUs and four SSBs are installed in a VME crate. The high power consumption of the AMB, AUX, and SSB (about 250, 70, and 160 W per board, respectively) required the development of a custom cooling system. Even though the expected power consumption of each VME crate in the FTK system is high compared with a common VME setup, the 8 FTK core crates will use ≈60 kW, which is just a fraction of the power and space needed for a CPU farm performing the same task. We report on the integration of 32 PUs and eight SSBs inside the FTK system, on the infrastructure needed to run and cool them, and on the tests performed to verify the system processing rate and the stability of temperatures at safe values.
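As a rough cross-check of the quoted figures, the board power per crate can be estimated directly from the numbers above (a back-of-the-envelope sketch; the gap to the quoted ≈60 kW for 8 crates would be crate overhead beyond the boards themselves):

```python
# Per-board power consumption quoted in the abstract (watts).
AMB_W, AUX_W, SSB_W = 250, 70, 160

# Designed crate configuration: 16 processing units (AMB + AUX) and 4 SSBs.
PUS_PER_CRATE, SSBS_PER_CRATE = 16, 4

# Total board power per crate and for the 8 FTK core crates.
crate_w = PUS_PER_CRATE * (AMB_W + AUX_W) + SSBS_PER_CRATE * SSB_W
total_kw = 8 * crate_w / 1e3

print(crate_w)   # 5760 W of board power per crate
print(total_kw)  # ~46 kW for the 8 core crates, consistent with the ≈60 kW quoted
```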
The development of a single-photon detector based on a vacuum tube, a transmission photocathode, a microchannel plate, and a CMOS pixelated read-out anode is presented. This imager will be capable of detecting up to 1 billion photons per second over an area of 7 cm², with simultaneous measurement of position and time with resolutions of about 5 microns and a few tens of picoseconds, respectively. The detector has embedded pulse-processing electronics with a data-driven architecture, based on the Timepix4 ASIC, producing up to 160 Gb/s of data that will be handled by high-throughput FPGA-based external electronics and a data acquisition system. This performance will enable significant advances in particle physics, life sciences, quantum optics, and other emerging fields where the detection of single photons with excellent timing and position resolution is required simultaneously.
A high-performance "pattern matching" implementation based on the Associative Memory (AM) system is presented. It is designed to solve the real-time hit-to-track association problem for particles produced in high-energy physics experiments at hadron colliders. The processing time of pattern recognition in CPU-based algorithms increases rapidly with detector occupancy, due to the limited computing power and input–output capacity of commercially available hardware. The AM system presented here solves the problem: it can process even the most complex hadron-collider events, produced at a rate of 100 kHz, with an average latency below 10 μs. The board built for this goal executes ~12 petabyte comparisons per second, with a peak power consumption below 250 W distributed uniformly over the large area of the board.
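To illustrate the hit-to-track association problem the AM system solves in silicon, here is a minimal software sketch. All names, the 8-layer geometry (taken from the FTK context above), and the majority-logic threshold are illustrative assumptions; the real AM chip compares all stored patterns in parallel, not in a loop:

```python
# Hedged sketch of associative-memory-style pattern matching in software.
# A "pattern" is one coarse detector-element ID ("superstrip") per layer;
# a pattern fires when enough of its layers contain a matching hit.

def match_patterns(pattern_bank, hits_per_layer, min_layers=7):
    """Return indices of patterns whose stored superstrip IDs were hit
    in at least `min_layers` of the silicon layers."""
    matched = []
    for idx, pattern in enumerate(pattern_bank):
        n_hit = sum(
            1 for layer, superstrip in enumerate(pattern)
            if superstrip in hits_per_layer[layer]
        )
        if n_hit >= min_layers:
            matched.append(idx)
    return matched
```

For example, with an 8-layer pattern and hits matching 7 of its 8 layers, the pattern still fires under a 7-of-8 majority threshold; this tolerance to a missing layer is what makes such pattern recognition robust against detector inefficiency.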
The associative memory (AM) system of the Fast TracKer (FTK) processor has been designed for the tracking-trigger upgrade of the ATLAS detector at the CERN Large Hadron Collider. The system performs pattern matching (PM) using the detector hits of particles in the ATLAS silicon tracker. The AM system is the main processing element of FTK and is based on application-specific integrated circuits (AM chips) designed to execute PM with a high degree of parallelism. It finds track candidates at low resolution which become seeds for full-resolution track fitting. The AM system is implemented as a collection of large 9U Versa Module Europa (VME) boards, named "serial link processors" (AMBSLPs), on which heavy data traffic is carried by a network of 900 2-Gb/s serial links. The complete AM-based processor consumes much less power (~50 kW) than its CPU equivalent, and its size is much smaller. Each AMBSLP has a power consumption of ~250 W, and there will be 16 of them in a crate. This results in unusually large power consumption for a VME crate and requires complex custom infrastructure to provide sufficient cooling. This paper reports on the design and testing of the infrastructure needed to run and cool a system of 16 AMBSLPs in the same crate, the integration of the AMBSLP inside a first FTK slice, the performance of the produced prototypes (both hardware and firmware), and their tests in the global FTK integration, an important milestone to be met before FTK production.
Hog (HDL on Git): An easy system to handle HDL on a git-based repository
Biesuz, N.V.; Cieri, D.; Gonnella, F.; et al.
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 1049, April 2023.
Journal article, peer-reviewed, open access.
Coordinating firmware development among many international collaborators is becoming a widespread problem in high-energy physics. Guaranteeing the reproducibility of firmware synthesis and assuring the traceability of binary files are paramount.
We devised Hog - HDL on git (cern.ch/hog), a set of Tcl and Shell scripts that tackles these issues and is deeply integrated with HDL IDEs, such as Xilinx Vivado Design Suite and ISE PlanAhead or Intel Quartus Prime, and all major simulation tools, like Siemens ModelSim or Aldec Riviera Pro.
Git is a very powerful tool and has been chosen as a standard by several research institutions, including CERN. Hog integrates seamlessly with git to ensure full control of HDL source files, constraint files, and IDE and simulation settings. It guarantees traceability by automatically embedding the git commit SHA and a numeric version into the binary file, which is also renamed automatically.
Hog does not rely on any external tool apart from the HDL IDE and git, so it is extremely portable and requires no installation. Developers can get up to speed quickly: clone the repository, run the Hog script, and work normally with the IDE.
The learning curve for users is minimal. Once the HDL project is created, developers can work on it either through the IDE graphical interface or with the provided shell scripts to run the workflow.
Hog works on Windows and Linux, supports IPbus and Sigasi, and provides pre-made YAML files to set up working Continuous Integration on GitLab (Hog-CI) with no additional effort; Hog-CI runs the HDL implementation for the desired projects and also automatically creates tags and GitLab releases with timing and utilisation reports.
Currently, Hog is successfully used by several firmware projects within the High-Energy Physics community, e.g. in the ATLAS and CMS Phase-II upgrades.
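The traceability scheme described in the abstract above can be sketched as follows. The helper names and the `.bit` naming convention are illustrative assumptions, not Hog's actual API; the point is only how a commit SHA and numeric version can stamp a binary:

```python
import subprocess

def git_commit_sha():
    """Short SHA of HEAD, as a build system could embed into a binary.
    Raises CalledProcessError outside a git repository."""
    return subprocess.check_output(
        ["git", "rev-parse", "--short=8", "HEAD"], text=True).strip()

def stamped_file_name(project, version, sha):
    """Binary file name carrying the numeric version and commit SHA,
    so the bitfile can always be traced back to its exact sources."""
    return f"{project}-v{version}-{sha}.bit"
```

For example, `stamped_file_name("top", "1.2.3", "ab12cd34")` yields `top-v1.2.3-ab12cd34.bit`, so the exact source commit is recoverable from the file name alone.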
This article documents the muon reconstruction and identification efficiency obtained by the ATLAS experiment for 139 fb⁻¹ of pp collision data at √s = 13 TeV collected between 2015 and 2018 during Run 2 of the LHC. The increased instantaneous luminosity delivered by the LHC over this period required a reoptimisation of the criteria for the identification of prompt muons. Improved and newly developed algorithms were deployed to preserve high muon identification efficiency with a low misidentification rate and good momentum resolution. The availability of large samples of Z → μμ and J/ψ → μμ decays, and the minimisation of systematic uncertainties, allows the efficiencies of criteria for muon identification, primary vertex association, and isolation to be measured with an accuracy at the per-mille level in the bulk of the phase space, and up to the percent level in complex kinematic configurations. Excellent performance is achieved over a range of transverse momenta from 3 GeV to several hundred GeV, and across the full muon detector acceptance of |η| < 2.7.
Jet energy scale and resolution measurements with their associated uncertainties are reported for jets using 36–81 fb⁻¹ of proton–proton collision data with a centre-of-mass energy of √s = 13 TeV collected by the ATLAS detector at the LHC. Jets are reconstructed using two different input types: topo-clusters formed from energy deposits in calorimeter cells, as well as an algorithmic combination of charged-particle tracks with those topo-clusters, referred to as the ATLAS particle-flow reconstruction method. The anti-k_t jet algorithm with radius parameter R = 0.4 is the primary jet definition used for both jet types. This result presents new jet energy scale and resolution measurements in the high pile-up conditions of late LHC Run 2 as well as a full calibration of particle-flow jets in ATLAS. Jets are initially calibrated using a sequence of simulation-based corrections. Next, several in situ techniques are employed to correct for differences between data and simulation and to measure the resolution of jets. The systematic uncertainties in the jet energy scale for central jets (|η| < 1.2) vary from 1% for a wide range of high-p_T jets (250 < p_T < 2000 GeV), to 5% at very low p_T (20 GeV) and 3.5% at very high p_T (> 2.5 TeV). The relative jet energy resolution is measured and ranges from (24 ± 1.5)% at 20 GeV to (6 ± 0.5)% at 300 GeV.
A search for new-physics resonances decaying into a lepton and a jet performed by the ATLAS experiment is presented. Scalar leptoquarks pair-produced in pp collisions at √s = 13 TeV at the Large Hadron Collider are considered using an integrated luminosity of 139 fb⁻¹, corresponding to the full Run 2 dataset. They are searched for in events with two electrons or two muons and two or more jets, including jets identified as arising from the fragmentation of c- or b-quarks. The observed yield in each channel is consistent with the Standard Model background expectation. Leptoquarks with masses below 1.8 TeV and 1.7 TeV are excluded in the electron and muon channels, respectively, assuming a branching ratio into a charged lepton and a quark of 100%, with minimal dependence on the quark flavour. Upper limits on the aforementioned branching ratio are also given as a function of the leptoquark mass.