High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequences of system calls to detect anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that meets these requirements, with a proof-of-concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves security by isolating services and Jobs without a significant performance impact. We also describe a dataset collected for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), logfiles from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site, together with a large set of malware samples collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
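As a minimal illustration of the kind of anomaly detection such a dataset enables, the sketch below scores per-Job resource-consumption features with an unsupervised model. The feature names, values and model choice are assumptions for illustration, not taken from the paper:

```python
# Minimal sketch: unsupervised anomaly scoring on per-Job resource
# consumption features, in the spirit of the dataset described above.
# Feature names, values and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-Job samples: [cpu_percent, ram_mb, net_kbps]
benign_jobs = rng.normal(loc=[60.0, 1800.0, 120.0],
                         scale=[10.0, 200.0, 30.0], size=(500, 3))
# A crypto-miner-like outlier: pegged CPU, unusual network traffic.
suspicious_job = np.array([[99.0, 400.0, 900.0]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(benign_jobs)

# decision_function: lower scores are more anomalous.
print("benign score: ", model.decision_function(benign_jobs[:1])[0])
print("suspect score:", model.decision_function(suspicious_job)[0])
print("flagged as malicious:", model.predict(suspicious_job)[0] == -1)
```

In a deployed system, a Job flagged this way could trigger one of the automated reactions the framework describes, such as isolating the virtualized environment the Job runs in.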
Short compute times are crucial for timely diagnostics in biomedical applications, but new and improved imaging techniques create a high demand for computing power. In this book, reconfigurable computing with FPGAs is discussed as an alternative to multi-core processors and graphics card accelerators. Instead of adjusting the application to the hardware, FPGAs also allow the hardware to be adjusted to the problem. Acceleration of Biomedical Image Processing with Dataflow on FPGAs covers the transformation of image processing algorithms into a system of deep pipelines that can be executed with very high parallelism. The transformation process is discussed from initial design decisions to working implementations. Two example applications from stochastic localization microscopy and electron tomography illustrate the approach further. Topics discussed in the book include the following (a minimal pipeline sketch is given after the list):
Reconfigurable hardware
Dataflow computing
Image processing
Application acceleration
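To make the dataflow idea concrete, the sketch below mimics a deep pipeline in software: an image is streamed pixel by pixel through a chain of small stages, each performing one operation, much like chained kernels on an FPGA. This is an analogy in Python, not the book's FPGA toolchain, and all stage names are made up:

```python
# Illustrative sketch of the dataflow idea: pixels stream through a
# chain of single-purpose stages, like deep pipelines on an FPGA.
from typing import Iterable, Iterator

def source(image: list[list[int]]) -> Iterator[int]:
    """Stream pixels in raster order, like a DMA input stream."""
    for row in image:
        yield from row

def offset_stage(pixels: Iterable[int], offset: int) -> Iterator[int]:
    """One pipeline stage: add a constant offset per clock tick."""
    for p in pixels:
        yield p + offset

def threshold_stage(pixels: Iterable[int], level: int) -> Iterator[int]:
    """Next stage: binarize the stream, consuming one pixel per tick."""
    for p in pixels:
        yield 1 if p >= level else 0

image = [[10, 200, 30], [250, 5, 120]]
# Stages are composed like hardware blocks wired output-to-input;
# all stages are "busy" at once as data flows through.
pipeline = threshold_stage(offset_stage(source(image), offset=5), level=100)
print(list(pipeline))  # [0, 1, 0, 1, 0, 1]
```

On an FPGA each stage would be a physical circuit, so every stage processes a different pixel in the same clock cycle; that per-stage concurrency is the source of the high parallelism the book discusses.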
The High-Level-Trigger (HLT) cluster of the ALICE experiment is a computer cluster with about 200 nodes and 20 infrastructure machines. In its current state, the cluster comprises nearly 10 different configurations of nodes in terms of installed hardware, software and network structure. In such a heterogeneous environment running a distributed application, information about the actual configuration of the nodes is needed to automatically distribute and adjust the application accordingly. An inventory database provides a unified interface to such information. To be useful, the data in the inventory has to be up to date, complete and consistent. Manual maintenance of such databases is error-prone, and the data tends to become outdated. The inventory module of the ALICE HLT cluster overcomes these drawbacks by automatically updating the actual state periodically and, in contrast to existing solutions, it allows the definition of a target state for each node. A target state can simply be a fully operational state, i.e. a state without malfunctions, or a dedicated configuration of the node. The target state is then compared to the actual state to detect deviations and malfunctions which could induce severe problems when running the application. The inventory module has been integrated into the monitoring and management framework SysMES in order to reuse existing functionality such as transactionality and the monitoring infrastructure. Additionally, SysMES can solve detected problems automatically via its rule system. To describe the heterogeneous environment with all its specifics, such as custom hardware, the inventory module uses an object-oriented model based on the Common Information Model. The inventory module provides an automatically updated actual state of the cluster, detects discrepancies between the actual and the target state, and is able to solve detected problems automatically. This contribution presents the current implementation state of the inventory module as well as the concept for future development.
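The target-versus-actual comparison at the heart of this design can be sketched as follows. The class and attribute names below are hypothetical stand-ins for the CIM-based object model, not the module's real schema:

```python
# Minimal sketch of the target-vs-actual state comparison, assuming a
# CIM-like object model. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class NodeState:
    """Simplified node description (a stand-in for CIM classes)."""
    hostname: str
    kernel: str
    ram_gb: int
    services: set[str] = field(default_factory=set)

def diff_states(actual: NodeState, target: NodeState) -> list[str]:
    """Report deviations of the actual state from the target state."""
    deviations = []
    if actual.kernel != target.kernel:
        deviations.append(f"kernel {actual.kernel!r} != {target.kernel!r}")
    if actual.ram_gb < target.ram_gb:
        deviations.append(f"RAM {actual.ram_gb} GB < {target.ram_gb} GB")
    for svc in target.services - actual.services:
        deviations.append(f"service {svc!r} not running")
    return deviations

target = NodeState("cn042", "2.6.32", 24, {"tracker", "monitoring"})
actual = NodeState("cn042", "2.6.18", 24, {"monitoring"})
# Each deviation could then be handed to a rule system for automatic
# repair, analogous to SysMES's rule-based problem resolution.
for problem in diff_states(actual, target):
    print("deviation:", problem)
```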
The ALICE High Level Trigger comprises a large computing cluster, dedicated interfaces and software applications. It allows on-line event reconstruction of the full data stream of the ALICE experiment at up to 25 GByte/s. The commissioning campaign has passed an important phase since the startup of the Large Hadron Collider in November 2009. The system has been transferred into continuous operation with a focus on event reconstruction and first simple trigger applications. The paper reports for the first time on the achieved event reconstruction performance in the ALICE central barrel region.
The electron capture in 163Ho experiment – ECHo
Gastaldo, L.; Blaum, K.; Chrysalidis, K. ...
The European Physical Journal Special Topics, 06/2017, Volume 226, Issue 8
Journal Article
Peer reviewed
Open access
Neutrinos, and in particular their tiny but non-vanishing masses, can be considered one of the doors towards physics beyond the Standard Model. Precision measurements of the kinematics of weak interactions, in particular of the 3H β-decay and the 163Ho electron capture (EC), represent the only model-independent approach to determine the absolute scale of neutrino masses. The electron capture in 163Ho experiment, ECHo, is designed to reach sub-eV sensitivity on the electron neutrino mass by means of the analysis of the calorimetrically measured electron capture spectrum of the nuclide 163Ho. The maximum energy available for this decay, about 2.8 keV, constrains the type of detectors that can be used. Arrays of low temperature metallic magnetic calorimeters (MMCs) are being developed to measure the 163Ho EC spectrum with an energy resolution below 3 eV FWHM and a time resolution below 1 μs. To achieve sub-eV sensitivity on the electron neutrino mass, together with the detector optimization, the availability of large ultra-pure 163Ho samples, the identification and suppression of background sources as well as the precise parametrization of the 163Ho EC spectrum are of utmost importance. The high-energy-resolution 163Ho spectra measured with the first MMC prototypes with ion-implanted 163Ho set the basis for the ECHo experiment. We describe the conceptual design of ECHo and motivate the strategies we have adopted to carry out the present medium-scale experiment, ECHo-1k. In this experiment, the use of 1 kBq of 163Ho will make it possible to reach a neutrino mass sensitivity below 10 eV/c². We then discuss how the results achieved in ECHo-1k will guide the design of the next stage of the ECHo experiment, ECHo-1M, where a source of the order of 1 MBq of 163Ho embedded in large MMC arrays will make it possible to reach sub-eV sensitivity on the electron neutrino mass.
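The neutrino-mass sensitivity comes from the shape of the measured spectrum near its endpoint. As a simplified illustration (not the full parametrization used by ECHo), the count rate near the endpoint is governed by the neutrino phase-space factor, where A(E_C) here stands in for the Breit-Wigner resonance structure of the atomic de-excitation spectrum:

```latex
% Simplified endpoint form of the calorimetric EC spectrum
% (natural units, c = 1). A(E_C) collects the atomic
% de-excitation resonances; this omits the full ECHo shape model.
\frac{\mathrm{d}\Gamma}{\mathrm{d}E_C} \;\propto\;
  A(E_C)\,\bigl(Q_{\mathrm{EC}} - E_C\bigr)
  \sqrt{\bigl(Q_{\mathrm{EC}} - E_C\bigr)^{2} - m_{\nu}^{2}},
  \qquad E_C \le Q_{\mathrm{EC}} - m_{\nu}
```

A non-zero m_ν truncates the spectrum at Q_EC − m_ν and distorts its shape just below that point, which is why detector energy resolution, source activity (statistics in the sparsely populated endpoint region) and background suppression jointly determine the achievable sensitivity.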
ALICE HLT High Speed Tracking on GPU
Gorbunov, S.; Rohr, D.; Aamodt, K. ...
IEEE Transactions on Nuclear Science, 08/2011, Volume 58, Issue 4
Journal Article
Peer reviewed
The on-line event reconstruction in ALICE is performed by the High Level Trigger, which should process up to 2000 events per second in proton-proton collisions and up to 300 central events per second in heavy-ion collisions, corresponding to an input data stream of 30 GB/s. In order to fulfill these time requirements, a fast on-line tracker has been developed. The algorithm combines a Cellular Automaton method, used for fast pattern recognition, with the Kalman Filter method for the fitting of found trajectories and the final track selection. The tracker was adapted to run on Graphics Processing Units (GPU) using the NVIDIA Compute Unified Device Architecture (CUDA) framework. The implementation of the algorithm had to be adjusted at many points to allow for an efficient usage of the graphics cards. In particular, achieving a good overall workload for many processor cores, efficient transfer to and from the GPU, as well as optimized utilization of the different memories the GPU offers turned out to be critical. To cope with these problems, a dynamic scheduler was introduced which redistributes the workload among the processor cores. Additionally, a pipeline was implemented so that the tracking on the GPU, the initialization and output processing on the CPU, as well as the DMA transfers can overlap. The GPU tracking algorithm significantly outperforms the CPU version for large events while entirely maintaining its efficiency.
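To illustrate the fitting step, the sketch below runs a minimal Kalman filter over noisy hits from successive detector layers, estimating a straight-line track state (position, slope). It is a toy illustration of the technique, not the ALICE HLT implementation, and all numbers are made up:

```python
# Minimal Kalman-filter track fit: a straight line (position, slope)
# is predicted layer by layer and updated with each measured hit.
import numpy as np

dz = 1.0                      # spacing between detector layers
F = np.array([[1.0, dz],      # state transition: y' = y + slope * dz
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])    # we measure position only
R = np.array([[0.04]])        # hit measurement variance (0.2**2)

rng = np.random.default_rng(7)
true_y0, true_slope = 0.5, 0.3
hits = [true_y0 + true_slope * dz * k + rng.normal(0, 0.2)
        for k in range(10)]

x = np.array([[hits[0]], [0.0]])   # initial state from the first hit
P = np.diag([1.0, 1.0])            # initial state covariance

for z in hits[1:]:
    # Predict: extrapolate the track to the next layer.
    x = F @ x
    P = F @ P @ F.T
    # Update: weigh the prediction against the measured hit.
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

print(f"fitted slope: {x[1, 0]:.3f} (true: {true_slope})")
```

Because each update touches only small matrices and each track is independent, thousands of such fits can run in parallel, which is what makes the method well suited to the many cores of a GPU.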