The production of protons, anti-protons, neutrons, deuterons and tritons in minimum bias p+C interactions is studied using a sample of 385 734 inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The data cover a phase space area ranging from 0 to 1.9 GeV/c in transverse momentum and in Feynman x from −0.8 to 0.95 for protons, from −0.2 to 0.3 for anti-protons and from 0.1 to 0.95 for neutrons. Existing data in the far backward hemisphere are used to extend the coverage for protons and light nuclear fragments into the region of intra-nuclear cascading. The use of corresponding data sets obtained in hadron–proton collisions with the same detector allows for the detailed analysis and model-independent separation of the three principal components of hadronization in p+C interactions, namely projectile fragmentation, target fragmentation of participant nucleons and intra-nuclear cascading.
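For reference, Feynman x is the conventional longitudinal scaling variable evaluated in the collision centre-of-mass system,

\[ x_F = \frac{p_L^{*}}{p_{L,\max}^{*}} \approx \frac{2\,p_L^{*}}{\sqrt{s}} , \]

where \(p_L^{*}\) is the longitudinal momentum of the produced particle in the cms frame; negative x_F corresponds to the target (backward) hemisphere and positive x_F to the projectile hemisphere.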
New data on the production of protons, anti-protons and neutrons in p+p interactions are presented. The data come from a sample of 4.8 million inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The charged baryons are identified by energy loss measurement in a large TPC tracking system. Neutrons are detected in a forward hadronic calorimeter. Inclusive invariant cross sections are obtained in intervals from 0 to 1.9 GeV/c (0 to 1.5 GeV/c) in transverse momentum and from −0.05 to 0.95 (−0.05 to 0.4) in Feynman x for protons (anti-protons), respectively. pT-integrated neutron cross sections are given in the interval from 0.1 to 0.9 in Feynman x. The data are compared to a wide sample of existing results in the SPS and ISR energy ranges as well as to proton and neutron measurements from HERA and RHIC.
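The inclusive invariant cross section referred to here is, in the usual convention,

\[ f(x_F, p_T) = E\,\frac{d^3\sigma}{dp^3} , \]

with E and p the energy and momentum of the produced baryon; the pT-integrated neutron results are then densities dn/dx_F, obtained by integrating over transverse momentum and normalizing to the inelastic cross section.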
New data on the production of charged kaons in p+p interactions are presented. The data come from a sample of 4.8 million inelastic events obtained with the NA49 detector at the CERN SPS at 158 GeV/c beam momentum. The kaons are identified by energy loss in a large TPC tracking system. Inclusive invariant cross sections are obtained in intervals from 0 to 1.7 GeV/c in transverse momentum and from 0 to 0.5 in Feynman x. Using these data as a reference, a new evaluation of the energy dependence of kaon production, including neutral kaons, is conducted over a range from 3 GeV to collider energies.
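For orientation (reading the 3 GeV lower bound as a centre-of-mass energy), the fixed-target beam momentum maps onto a centre-of-mass energy via

\[ \sqrt{s} = \sqrt{2\,m_p E_{\mathrm{lab}} + 2\,m_p^2} \approx 17.3\ \mathrm{GeV} \quad \text{for } p_{\mathrm{beam}} = 158\ \mathrm{GeV}/c , \]

so the energy-dependence evaluation spans from \(\sqrt{s} \approx 3\) GeV up to collider energies, with the NA49 reference point at \(\sqrt{s} \approx 17.3\) GeV.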
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequences of system calls to detect anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that meets these requirements, with a proof-of-concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves security by isolating services and Jobs without a significant performance impact. We also describe a dataset collected for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset comprises resource consumption measurements (such as CPU, RAM and network traffic), log files from operating system services, and system call data collected from production Jobs running in an ALICE Grid test site, together with a large set of malware samples collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious Jobs.
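As an illustration of the intended Machine Learning use of such a dataset, the sketch below scores per-job resource profiles with an Isolation Forest. The feature layout and the numerical values are hypothetical, and the abstract itself does not prescribe a specific algorithm; this is only one plausible instantiation.

    # Minimal sketch: anomaly scoring of Grid-job resource profiles with an
    # Isolation Forest. Feature names and values are hypothetical; the
    # collected dataset (CPU, RAM, network, system calls) would be mapped
    # to per-job feature vectors of this kind.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Hypothetical per-job features: [cpu_pct, ram_mb, net_kbps, syscall_rate]
    benign_jobs = np.array([
        [85.0, 1200.0, 40.0, 300.0],
        [90.0, 1350.0, 55.0, 280.0],
        [78.0, 1100.0, 35.0, 310.0],
        # ... many more samples from production Jobs
    ])

    model = IsolationForest(n_estimators=100, contamination=0.01, random_state=0)
    model.fit(benign_jobs)  # train on (mostly) benign production behaviour

    # A job with unusual network output and system-call activity:
    suspect = np.array([[20.0, 900.0, 5000.0, 4500.0]])
    print(model.predict(suspect))            # -1 => flagged as anomalous
    print(model.decision_function(suspect))  # lower score => more anomalous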
Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to service and network latencies, remote data access and a heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last two years, ALICE analysis has shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably. The ALICE Grid production system is currently shared between organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved substantially to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.
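A minimal sketch of the kind of quantity such a job-side "sensor" can measure, using only the Python standard library; the report() hook is a hypothetical stand-in for the MonALISA-based transport and is not the actual ALICE instrumentation.

    # Sample CPU and wall time for the current process tree, derive the
    # CPU efficiency (a basic indicator of I/O-bound vs CPU-bound jobs),
    # and hand the values to a reporting hook.
    import os
    import time

    def sample_job_metrics(start_wall: float) -> dict:
        t = os.times()  # user/system CPU of this process and its children
        cpu = t.user + t.system + t.children_user + t.children_system
        wall = time.time() - start_wall
        return {
            "cpu_time_s": cpu,
            "wall_time_s": wall,
            "cpu_efficiency": cpu / wall if wall > 0 else 0.0,
        }

    def report(metrics: dict) -> None:
        # Placeholder: a production sensor would publish to the monitoring
        # system instead of printing.
        print(metrics)

    if __name__ == "__main__":
        start = time.time()
        sum(i * i for i in range(10_000_000))  # stand-in for analysis work
        report(sample_job_metrics(start))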
The File Access Monitoring Service (FAMoS) leverages the information stored in the central AliEn file catalogue, which describes every file in a Unix-like directory structure, together with metadata on file location and replicas. In addition, it uses the access information provided by a set of API servers, used by all Grid clients to access the catalogue. The main functions of FAMoS are to sort file accesses by logical group, access time, user and storage element. The collected data identify rarely used groups of files, as well as those with high popularity over different time periods. This information can then be used to optimize file distribution and replication factors, thus increasing data processing efficiency. The paper describes the FAMoS structure and user interface and presents the results obtained in one year of service operation.
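The sketch below illustrates the aggregation the abstract describes: grouping access records by logical directory group within a time window. The record layout and names are hypothetical; real input would come from the AliEn API-server access logs.

    # FAMoS-style popularity accounting over hypothetical access records.
    from collections import Counter
    from datetime import datetime

    accesses = [
        # (logical path, user, storage element, access time)
        ("/alice/data/2012/run1/file1.root", "user_a", "SE_CERN", datetime(2013, 5, 1)),
        ("/alice/data/2012/run1/file1.root", "user_b", "SE_GSI",  datetime(2013, 5, 3)),
        ("/alice/sim/2011/run9/file7.root",  "user_a", "SE_CERN", datetime(2012, 9, 9)),
    ]

    def logical_group(path: str, depth: int = 4) -> str:
        """Group files by the first `depth` directory levels."""
        return "/".join(path.split("/")[:depth + 1])

    def popularity(records, since: datetime) -> Counter:
        """Access counts per logical group within a time window."""
        return Counter(logical_group(p) for p, _, _, t in records if t >= since)

    recent = popularity(accesses, since=datetime(2013, 1, 1))
    print(recent.most_common())  # hot groups first; cold groups are candidates
                                 # for a reduced replication factor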
The LHC experiments are preparing for the precision measurements and further discoveries that will be made possible by higher LHC energies from April 2015 (LHC Run 2). The need for simulation, data processing and analysis would overwhelm the expected capacity of the grid infrastructure computing facilities deployed by the Worldwide LHC Computing Grid (WLCG). To meet this challenge, the integration of opportunistic resources into the LHC computing model is highly important. The Tier-1 facility at the Kurchatov Institute (NRC-KI) in Moscow is part of the WLCG and will process, simulate and store up to 10% of the total data obtained from the ALICE, ATLAS and LHCb experiments. In addition, the Kurchatov Institute has supercomputers with a peak performance of 0.12 PFLOPS. Delegating even a fraction of these supercomputing resources to LHC computing will notably increase the total capacity. In 2014 the development of a portal combining the Tier-1 and a supercomputer at the Kurchatov Institute was started to provide common interfaces and storage. The portal will be used not only for HENP experiments, but also by other data- and compute-intensive sciences, such as biology (genome sequencing analysis) and astrophysics (cosmic ray analysis, antimatter and dark matter searches).
AliEn: ALICE environment on the GRID. Bagnasco, S.; Betev, L.; Buncic, P. ... Journal of Physics: Conference Series, Volume 119, Issue 6, 07/2008. Journal article, peer-reviewed, open access.
Starting from mid-2008, the ALICE detector at the CERN LHC will collect data at a rate of 4 PB per year. ALICE will use exclusively distributed Grid resources to store, process and analyse these data. The top-level management of the Grid resources is done through the AliEn (ALICE Environment) system, which has been in continuous development since 2000. AliEn presents several original solutions, which have shown their viability in a number of large exercises of increasing complexity called Data Challenges. This paper describes the AliEn architecture: Job Management, Data Management and the user interface. The current status of AliEn is illustrated, as well as the performance of the system during the data challenges. The paper also describes the future AliEn development roadmap.
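AliEn job management is commonly described as a pull model: site job agents fetch the first waiting job whose requirements their site can satisfy from a central task queue. The sketch below illustrates that pattern only; class and field names are hypothetical and are not the AliEn API.

    # Toy pull-model task queue in the spirit of AliEn Job Management.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Job:
        job_id: int
        required_packages: set
        input_data_site: str  # site holding the input data replica

    @dataclass
    class TaskQueue:
        waiting: list = field(default_factory=list)

        def submit(self, job: Job) -> None:
            self.waiting.append(job)

        def pull(self, site: str, packages: set) -> Optional[Job]:
            """Called by a job agent; returns a matching job or None."""
            for i, job in enumerate(self.waiting):
                if job.required_packages <= packages and job.input_data_site == site:
                    return self.waiting.pop(i)
            return None

    queue = TaskQueue()
    queue.submit(Job(1, {"AliRoot"}, "CERN"))
    queue.submit(Job(2, {"AliRoot", "ROOT"}, "GSI"))
    print(queue.pull(site="GSI", packages={"AliRoot", "ROOT", "GEANT3"}))  # Job 2
    print(queue.pull(site="GSI", packages={"ROOT"}))                       # None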
The MONARC (MOdels of Networked Analysis at Regional Centers) framework has been designed and developed with the aim of providing a tool for realistic simulations of large-scale distributed computing systems, with a special focus on the Grid systems of the experiments at the CERN LHC. In this paper, we describe the use of the MONARC framework and tools for a simulation of the job processing performance at an ALICE Tier-2 site.
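A minimal sketch of the kind of discrete-event question such a simulation answers: given N jobs of known CPU demand on a site with a fixed number of cores, what are the makespan and mean wait time? The parameters are illustrative and not tuned to any real Tier-2; MONARC itself is far more detailed.

    # Greedy list scheduling: each job starts on the earliest-free core.
    import heapq

    def simulate(job_durations_h, cores):
        free_at = [0.0] * cores  # per-core time at which it becomes free
        heapq.heapify(free_at)
        waits, finish = [], 0.0
        for dur in job_durations_h:
            start = heapq.heappop(free_at)  # earliest available core
            waits.append(start)
            finish = max(finish, start + dur)
            heapq.heappush(free_at, start + dur)
        return finish, sum(waits) / len(waits)

    # 500 two-hour jobs on 100 cores -> makespan 10 h, mean wait 4 h.
    makespan, mean_wait = simulate(job_durations_h=[2.0] * 500, cores=100)
    print(f"makespan={makespan:.1f} h, mean wait={mean_wait:.1f} h")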
The ALICE collaboration has developed a production environment (AliEn) that implements the full set of Grid tools enabling the complete offline computational work-flow of the experiment, namely simulation, reconstruction and data analysis, in a distributed and heterogeneous computing environment. In addition to analysis on the Grid, ALICE uses a set of local interactive analysis facilities installed with the Parallel ROOT Facility (PROOF). PROOF enables physicists to analyze medium-sized (order of 200-300 TB) data sets on a short time scale. The default installation of PROOF is on a static dedicated cluster, typically 200-300 cores. This well-proven approach has its limitations, specifically for the analysis of larger datasets or when the installation of a dedicated cluster is not possible. Using a new framework called PoD (PROOF on Demand), PROOF can be used directly on Grid-enabled clusters by dynamically assigning interactive nodes on user request. The integration of PROOF on Demand in the AliEn framework provides private dynamic PROOF clusters as a Grid service. This functionality is transparent to the user, who submits interactive jobs to the AliEn system.
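The on-demand pattern can be sketched as follows: worker agents are submitted as ordinary Grid jobs, call home when they start, and the interactive session opens once a quorum of workers has registered. All names here are hypothetical; the real PoD/AliEn integration handles this through its own services.

    # Toy model of dynamically assembling an interactive worker pool.
    import queue
    import threading
    import time

    registered = queue.Queue()

    def agent(worker_id: int) -> None:
        """Stand-in for a Grid job that starts a worker and registers."""
        time.sleep(0.1 * worker_id)  # pretend scheduling latency
        registered.put(worker_id)

    def request_cluster(n_workers: int, min_workers: int) -> list:
        for i in range(n_workers):
            threading.Thread(target=agent, args=(i,), daemon=True).start()
        workers = []
        while len(workers) < min_workers:  # start analysis once quorum reached
            workers.append(registered.get(timeout=10))
        return workers

    print("interactive session can start with workers:", request_cluster(8, 5))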