ALFA: The new ALICE-FAIR software framework
Al-Turany, M.; Buncic, P.; Hristov, P.; ...
Journal of Physics: Conference Series, 12/2015, Volume 664, Issue 7
Journal Article, Peer-reviewed, Open Access
The commonalities between the ALICE and FAIR experiments and their computing requirements led to the development of large parts of a common software framework in an experiment-independent way. The FairRoot project has already shown the feasibility of such an approach for the FAIR experiments and of extending it beyond FAIR to experiments at other facilities [1, 2]. The ALFA framework is a joint development between the ALICE Online-Offline (O2) and FairRoot teams. ALFA is designed as a flexible, elastic system that balances reliability and ease of development with performance, using multi-processing and multi-threading. A message-based approach has been adopted; such an approach supports the use of the software on different hardware platforms, including heterogeneous systems. Each process in ALFA assumes limited communication with, and reliance on, other processes. This design adds horizontal scaling (multiple processes) to the vertical scaling provided by multiple threads, to meet computing and throughput demands. ALFA does not dictate any application protocols; potentially, any content-based processor or any source can change the application protocol. The framework supports different serialization standards for data exchange between different hardware platforms and programming languages.
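The message-based, process-per-device design described above can be illustrated with a minimal sketch. The topology (source, processor, sink), message layout, and function names are illustrative assumptions, not the actual ALFA/FairMQ API; threads stand in here for the separate processes a real deployment would use, and JSON stands in for whichever serialization standard an experiment picks.

```python
# Minimal sketch of a message-based processing topology in the spirit of
# ALFA: independent devices exchanging serialized byte messages, so the
# payload format and the number of workers can change without touching
# the transport. All names here are illustrative, not the ALFA API.
import json
import queue
import threading

def source(out_q, n_events):
    # Push serialized events downstream; the framework does not dictate
    # an application protocol (here: JSON-encoded bytes).
    for i in range(n_events):
        out_q.put(json.dumps({"event": i, "adc": [i, i + 1]}).encode())
    out_q.put(None)  # end-of-stream marker

def processor(in_q, out_q):
    # Stateless worker: deserialize, transform, re-serialize. Because it
    # relies only on its input channel, more copies could be added for
    # horizontal scaling.
    while (msg := in_q.get()) is not None:
        ev = json.loads(msg)
        ev["sum"] = sum(ev["adc"])
        out_q.put(json.dumps(ev).encode())
    out_q.put(None)

def sink(in_q, results):
    # Collect the processed messages.
    while (msg := in_q.get()) is not None:
        results.append(json.loads(msg)["sum"])

def run_pipeline(n_events=4):
    q1, q2, results = queue.Queue(), queue.Queue(), []
    workers = [threading.Thread(target=source, args=(q1, n_events)),
               threading.Thread(target=processor, args=(q1, q2)),
               threading.Thread(target=sink, args=(q2, results))]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(results)

print(run_pipeline(4))  # (0+1)+(1+2)+(2+3)+(3+4) = 16
```

Because the stages only agree on "a stream of byte messages", swapping the JSON payload for another serialization format, or moving a stage to another machine, would not change the topology.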
The CBM experiment at the upcoming FAIR accelerator aims to create the highest baryon densities in nucleus-nucleus collisions and to explore the properties of super-dense nuclear matter. Event rates of 10 MHz are needed for high-statistics measurements of rare probes, while event selection requires complex global triggers such as a secondary vertex search. To meet these demands, the CBM experiment uses self-triggered detector front-ends and a data-push readout architecture. The First-level Event Selector (FLES) is the central physics selection system in CBM. It receives all hits and performs online event selection on the 1 TByte/s input data stream. The event selection process requires high-throughput event building and full event reconstruction using fast, vectorized track reconstruction algorithms. The current FLES architecture foresees a scalable high-performance computer. To achieve the required throughput and computational efficiency, all available computing devices will have to be used, in particular FPGAs at the first stages of the system and heterogeneous many-core architectures such as CPUs for efficient track reconstruction. A high-throughput network infrastructure and flow control in the system are other key aspects. In this paper, we present the foreseen architecture of the First-level Event Selector.
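In a self-triggered, data-push readout there is no hardware trigger defining events, so event building must group timestamped hits in software. The sketch below shows the basic idea with a simple time-gap criterion; the hit format and the gap threshold are illustrative assumptions, not the actual CBM FLES timeslice scheme.

```python
# Illustrative sketch of event building for a free-streaming readout:
# hits arrive with timestamps instead of a trigger signal, and the event
# builder groups them by time proximity. The 50 ns gap threshold and the
# (timestamp, channel) hit format are assumptions for illustration only.

def build_events(hits, max_gap_ns=50):
    """Group (timestamp_ns, channel) hits into event candidates.

    A new candidate starts whenever the gap to the previous hit exceeds
    max_gap_ns; a real system uses far more elaborate timeslice building
    and reconstruction-based selection on top of this.
    """
    events, current = [], []
    last_t = None
    for t, ch in sorted(hits):
        if last_t is not None and t - last_t > max_gap_ns:
            events.append(current)
            current = []
        current.append((t, ch))
        last_t = t
    if current:
        events.append(current)
    return events

hits = [(100, 1), (110, 2), (115, 3),   # first interaction
        (400, 1), (405, 4),             # second interaction
        (900, 2)]                       # third interaction
print(len(build_events(hits)))  # 3 event candidates
```

Online selection (e.g. a secondary vertex search) would then run full reconstruction on each candidate and keep only the interesting ones.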
ALICE (A Large Ion Collider Experiment) is one of the four major detectors located at the LHC at CERN, focusing on the study of heavy-ion collisions. The ALICE High Level Trigger (HLT) is a compute cluster which reconstructs the events and compresses the data in real time. The data compression performed by the HLT is a vital part of data taking, especially during the heavy-ion runs, in order to be able to store the data; this means the reliability of the whole cluster is an important matter. To guarantee a consistent state among all compute nodes of the HLT cluster, we have automated operations as much as possible. For automatic deployment of the nodes we use Foreman with locally mirrored repositories, and for configuration management of the nodes we use Puppet. Important parameters of the nodes, such as temperatures, network traffic, and CPU load, are monitored with Zabbix. During periods without beam, the HLT cluster is used for tests and as one of the WLCG Grid sites to compute offline jobs, in order to maximize the usage of our cluster. To prevent interference with normal HLT operations, we separate the virtual machines running the Grid jobs from normal HLT operation via virtual networks (VLANs). In this paper we give an overview of ALICE HLT operations in 2016.
Particle physics experiments stretch processing requirements to the limit, requiring the selection of extremely rare events out of data arriving at tens of terabytes per second. A network of 283,392 mixed-signal MIMD processors operating in parallel at 17 Tbytes/s helps physicists interpret data from the world's largest particle accelerator. Particle physics and heavy-ion experiments demand greater integration and fast on-detector signal processing. We believe the TRD (Transition Radiation Detector) is the first system to implement complete signal digitizing, filtering, intelligent trigger processing, and readout in a single on-detector chip while avoiding system noise. The TRD uses multiport memories as register file inputs, multiported GRFs, and a global multiport data memory. These components support high-end multiprocessing requirements under tight latency conditions. Multiported memories, in particular, make it possible to couple independent data streams very efficiently. So far, the requirements for this detector have remained largely stable; the main change was the relatively late addition of digital filters to the design. Of course, the first weeks of measuring actual collisions at unprecedented LHC energies are the true test of our design.
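The digital filters mentioned above run per channel on the raw samples before any trigger decision. As a hedged illustration of the kind of processing involved, the sketch below chains a pedestal (baseline) subtraction with a one-pole IIR stage that cancels an exponential signal tail; the coefficients and the filter structure are illustrative assumptions, not the actual TRD filter configuration.

```python
# Sketch of per-channel on-detector filtering: pedestal subtraction
# followed by a one-pole IIR tail canceller. For an input whose tail
# decays by the factor `alpha` per sample, the canceller removes the
# tail exactly. Pedestal value and alpha are illustrative assumptions.

def filter_channel(samples, pedestal=10, alpha=0.5):
    """Subtract a fixed pedestal, then cancel an exponential tail.

    `state` tracks the expected tail contribution of everything seen so
    far; each output sample is the input minus that estimate.
    """
    out, state = [], 0.0
    for s in samples:
        x = s - pedestal             # pedestal (baseline) correction
        y = x - state                # remove accumulated tail estimate
        state = alpha * (state + y)  # tail estimate decays by alpha/sample
        out.append(y)
    return out

# A pulse of 20 above pedestal with an exact alpha=0.5 tail (20, 10, 5)
# is reduced to a single clean sample:
print(filter_channel([10, 30, 20, 15]))  # [0.0, 20.0, 0.0, 0.0]
```

Running such a stage directly in the front-end chip shortens the pulses before zero suppression, which is what makes high-rate readout with modest bandwidth possible.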
Spatially selective deposition of electrically charged microparticles onto integrated circuits that generate electrical fields in programmable patterns, using electrodes on their surface, was previously limited to a pixel pitch of 100 μm. Now, we demonstrate spatially selective deposition onto pixels of 45 μm pitch in experiments on a test chip that allows arbitrary patterns but is of limited size and fixed characteristics, complemented by COMSOL simulations. Experiments on a prototype high-voltage CMOS chip demonstrate the feasibility of miniaturisation in the first place, motivate simulations of interest that cannot be tested experimentally and, conversely, complement the simplified simulation models with reality checks. Using COMSOL for the optimisation of the setup parameters, particles of decreasing average diameter are simulated in a number of aerosol and electrical field geometries, with particular attention to minimising contamination (deposition of particles at undesirable locations). Combining these results, the average particle diameter is decreased from 10 μm to less than 3 μm and the deposition voltage is reduced from 100 V to 30 V when using pixels with a pitch of 45 μm. Optimising these parameters allows for more than quadrupling the spot density compared to the previous chip, on which combinatorial particle deposition with minimal contamination was achieved. Peptide arrays, previously shown to be a major application of this method, benefit in particular, as the increase in density from 10,000 pixels/cm² to approximately 50,000 pixels/cm² promises a significant decrease in cost per peptide and in the amount of test specimens required.
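The quoted density gain follows directly from the pitch reduction; a quick check of the numbers from the abstract (square pixel grid assumed):

```python
# Pixel density as a function of pitch: a square grid with pitch p (μm)
# yields (10_000 / p)**2 pixels per cm², since 1 cm = 10_000 μm.
def pixels_per_cm2(pitch_um):
    return (10_000 / pitch_um) ** 2

old = pixels_per_cm2(100)  # previous chip: 100 μm pitch
new = pixels_per_cm2(45)   # new chip: 45 μm pitch

print(round(old))   # 10000 pixels/cm²
print(round(new))   # 49383 pixels/cm², i.e. approximately 50,000
print(new / old)    # ≈ 4.94, hence "more than quadrupling"
```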
We built high-voltage complementary metal oxide semiconductor (CMOS) chips that generate electrical fields on their surface, such that electrically charged microparticles (diameter 10–20 μm on average) can be addressed onto distinct pixel electrodes according to arbitrary field patterns. Each pixel contains a memory cell in canonical low-voltage CMOS technology controlling a high-voltage (30–100 V) potential area on the top metal layer. Particle transfer with minimal contamination in less than 10 s for a complete chip was observed for pixels from 100 μm × 100 μm down to 65 μm × 65 μm. This allows a new way to create surface modifications on top of CMOS chips without the need for additional masks or stamps. Using suitable particles, a chemically modified chip surface, and compatible chemistry, this method can be utilized for self-aligned high-density biopolymer arrays, e.g., peptide arrays. Transfer of microparticles loaded with amino acids for combinatorial peptide synthesis is demonstrated. Successful synthesis of different peptides (octamers) was proven by immunostaining. Based on results obtained with a chip containing pixel areas of different characteristics, a chip for biological applications with 16,384 pixels (10,000 pixels/cm²) was built. Good homogeneity of peptide synthesis over the chip area was verified by immunostaining.
The large and increasing channel count of modern detectors requires the use of microelectronics. The data-rate and signal-integrity requirements drive complex electronics to be mounted close to, or directly on, the detectors, possibly even integrating the complete first-level trigger stage. The latest silicon roadmaps indicate that the integration density of microelectronics will continue to increase during the next decade. However, several constraints have to be taken into account that have ramifications for on-detector electronics: for instance, the core voltage will be reduced to below 500 mV, the clock rates will exceed 1 GHz, and the power density will increase further. This article outlines two examples of trigger and readout systems, the ALICE TPC and TRD, which are completely integrated in microchips, and expands on the expected impact future silicon processes may have on on-detector integrated signal processing.
Image processing and pattern analysis can evaluate the deposition quality of triboelectrically charged microparticles on charged surfaces. The image processing method presented in this paper aims at controlling the quality of peptide arrays generated by particle-based solid-phase (Merrifield) combinatorial peptide synthesis. Incorrectly deposited particles are detected before the amino acids they carry are coupled to the growing peptide. The calibration of the image acquisition is performed in a supervised training step, in which all parameters of the quality-analysis algorithm are learnt from one representative image. The correct deposition pattern is then determined by a linear support vector machine. Knowing the pattern, contaminated areas can be detected by comparing it with the actual deposition. Taking into account the resolution of the image acquisition system and its magnification factor, the number and size of contaminating particles can be calculated from the number of connected foreground pixels.
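The last step, counting and sizing contaminants from connected foreground pixels, can be sketched as follows. The binary error mask, the 4-connectivity choice, and the μm-per-pixel scale are illustrative assumptions, not the authors' calibrated pipeline (which learns its parameters from a representative image).

```python
# Sketch: count contaminating particles and estimate their physical size
# from connected foreground pixels in a binary deposition-error mask.
# Mask values, 4-connectivity, and the 2 μm/pixel scale are assumptions.

def contaminants(mask, um_per_pixel=2.0):
    """Label 4-connected foreground components; return (count, areas_um2)."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # flood-fill one connected component
                stack, npix = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    npix += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # pixel count -> physical area via the image resolution
                areas.append(npix * um_per_pixel ** 2)
    return len(areas), areas

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
print(contaminants(mask))  # (2, [12.0, 8.0]): two particles found
```

In the paper's setting, the foreground mask would be the mismatch between the SVM-derived target pattern and the observed deposition, so each labelled component corresponds to one misplaced particle (or cluster of particles).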