Since the ALICE experiment began data taking in early 2010, the number of end-user jobs on the AliEn Grid has increased significantly. Presently one third of the 40K CPU cores available to ALICE are occupied by jobs submitted by about 400 distinct users, individually or in organized analysis trains. The overall stability of the AliEn middleware has been excellent throughout the 3 years of running, but the massive amount of end-user analysis, with its specific requirements and load, has revealed a few components which can be improved. One of them is the interface between users and the central AliEn services (catalogue, job submission system), which we are currently re-implementing in Java. The interface provides a persistent connection with enhanced data and job submission authenticity. In this paper we describe the architecture of the new interface, the ROOT binding which enables the use of a single interface in addition to the standard UNIX-like access shell, and the new security-related features.
High Energy Physics (HEP) distributed computing infrastructures require automatic tools to monitor, analyze and react to potential security incidents. These tools should collect and inspect data such as resource consumption, logs and sequences of system calls to detect anomalies that indicate the presence of a malicious agent. They should also be able to perform automated reactions to attacks without administrator intervention. We describe a novel framework that fulfils these requirements, with a proof-of-concept implementation for the ALICE experiment at CERN. We show how we achieve a fully virtualized environment that improves security by isolating services and jobs without a significant performance impact. We also describe a dataset collected for Machine Learning based Intrusion Prevention and Detection Systems on Grid computing. This dataset is composed of resource consumption measurements (such as CPU, RAM and network traffic), log files from operating system services, system call data collected from production jobs running in an ALICE Grid test site, and a large set of malware samples collected from security research sites. Based on this dataset, we will proceed to develop Machine Learning algorithms able to detect malicious jobs.
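The abstract describes anomaly detection on per-job resource-consumption measurements. As a minimal illustration of the idea (not the paper's eventual Machine Learning algorithms), the sketch below learns a per-metric baseline from known-good jobs and flags any job whose metrics deviate by more than a z-score threshold; all names and the 3-sigma cutoff are illustrative assumptions:

```python
import statistics

def build_baseline(samples):
    # samples: list of dicts metric -> value, taken from known-good jobs
    baseline = {}
    for metric in samples[0]:
        vals = [s[metric] for s in samples]
        baseline[metric] = (statistics.mean(vals), statistics.pstdev(vals))
    return baseline

def is_anomalous(observation, baseline, z=3.0):
    # Flag the job if any metric deviates more than z standard
    # deviations from its baseline mean (metrics with zero spread
    # in the baseline are skipped to avoid division issues).
    for metric, (mu, sigma) in baseline.items():
        if sigma == 0:
            continue
        if abs(observation[metric] - mu) > z * sigma:
            return True
    return False
```

A real detector would combine many more features (network traffic, system call sequences) and a trained model, but the baseline-plus-threshold shape is the common starting point.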
The purpose of this article is to analyze the variation of the tenability criteria (convected heat, radiant heat, toxic gases and smoke visibility) on the main evacuation paths inside an educational building in case of fire. The analysis is performed by numerical simulation for three different fire scenarios. The fire load density of the fire space (an office-type room) is the same, but the main combustible material is varied: wood, plastic and polyurethane foam. The analyzed building can house a great number of people (over 600) and its inner atrium could facilitate the spread of smoke and hot gases in case of a real fire. The numerical analysis shows that the variation of convected heat is not significant (within engineering limits), but radiant heat, toxic gases and smoke visibility are highly dependent on the type of combustible material considered.
Performance is a critical issue in a production system accommodating hundreds of analysis users. Compared to a local session, distributed analysis is exposed to service and network latencies, remote data access and a heterogeneous computing infrastructure, creating a more complex performance and efficiency optimization matrix. During the last 2 years, ALICE analysis has shifted from a fast development phase to more mature and stable code. At the same time, the frameworks and tools for deployment, monitoring and management of large productions have evolved considerably too. The ALICE Grid production system is currently used by a fair share of organized and individual user analysis, consuming up to 30% of the available resources and ranging from fully I/O-bound analysis code to CPU-intensive correlation or resonance studies. While the intrinsic analysis performance is unlikely to improve by a large factor during the LHC long shutdown (LS1), the overall efficiency of the system still has to be improved substantially to satisfy the analysis needs. We have instrumented all analysis jobs with "sensors" collecting comprehensive monitoring information on the job running conditions and performance in order to identify bottlenecks in the data processing flow. These data are collected by the MonALISA-based ALICE Grid monitoring system and are used to steer and improve the job submission and management policy, to identify operational problems in real time and to perform automatic corrective actions. In parallel with an upgrade of our production system, we are aiming for low-level improvements related to data format, data management and merging of results to allow for better-performing ALICE analysis.
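The "sensor" instrumentation described above amounts to wrapping each analysis job so that wall time, CPU time and derived efficiency are recorded and shipped to the monitoring system. A minimal sketch of that pattern, with a hypothetical `JobSensor` context manager (not the actual ALICE instrumentation API), could look like:

```python
import json
import time

class JobSensor:
    """Hypothetical per-job monitoring sensor (illustrative only)."""

    def __init__(self, job_id):
        self.job_id = job_id
        self.metrics = {}

    def __enter__(self):
        self._wall0 = time.monotonic()
        self._cpu0 = time.process_time()
        return self

    def __exit__(self, *exc):
        # Derive wall time, CPU time and CPU efficiency for the job body.
        self.metrics["wall_s"] = time.monotonic() - self._wall0
        self.metrics["cpu_s"] = time.process_time() - self._cpu0
        self.metrics["cpu_eff"] = (
            self.metrics["cpu_s"] / max(self.metrics["wall_s"], 1e-9)
        )
        # In production this record would be pushed to the MonALISA-based
        # monitoring system; here we just serialize it.
        self.report = json.dumps({"job": self.job_id, **self.metrics})
```

A fully I/O-bound job would show a low `cpu_eff`, which is exactly the kind of signal used to spot bottlenecks in the data processing flow.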
The main context in which abrasive water jet cutting is used is the reduction of thermal deformation induced by thermal (plasma arc PAC, oxyfuel OFC, laser) or electrothermal (electroerosion EDM) cutting methods. Although it is neither the cheapest nor the most time-efficient technique, it can be used on a wide variety of metallic and non-metallic materials. Among its other benefits are the lack of burrs, high precision and improved surface finish, low setup time and stress-free cutting. In many applications this means no secondary processing is required. Depending on the material hardness, the cutting thickness can reach up to 300 mm. The present study proposes an analysis of high-pressure abrasive water jet cutting of a 19 mm thick plate. The aluminium alloy used in this study was Al-6061-T651, employed especially in the aeronautics industry due to its excellent welding properties. The experiments were conducted using multiple input and output factors. The design of experiments (DOE) takes the input factors into account and offers models for the responses. The study was organised according to response surface methodology, with an I-optimal design type and a quadratic design model. The input factors were cutting pressure, standoff distance and programmed quality of the cut. The responses analysed were the entrance (Iw) and exit (Ow) width of cut, and the taper angle (α). An ANOVA was performed for each response, indicating the significance (p-value) of each input factor's effect on the variation of the responses. For Iw and Ow a reduced 2FI model was proposed, while for α a linear model was suggested. The p-value obtained for each response is smaller than 0.0001, which classifies the models as significant. The ANOVA fit statistics give R-squared values between 0.964 and 0.995, meaning that the responses are well described by the input variations. This high confidence in the results leads to accurate mathematical models.
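A 2FI (two-factor-interaction) response model of the kind proposed for Iw and Ow has the form y = b0 + b1·A + b2·B + b3·C + b12·AB + b13·AC + b23·BC and is fitted by ordinary least squares. The sketch below uses coded ±1 factor levels and synthetic coefficients (not the paper's data or software); it builds the 2FI design matrix and solves the normal equations with a small Gauss-Jordan routine:

```python
def solve(M, b):
    # Gauss-Jordan elimination with partial pivoting on the augmented matrix.
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col] / A[col][col]
                for c in range(col, n + 1):
                    A[r][c] -= f * A[col][c]
    return [A[i][n] / A[i][i] for i in range(n)]

def fit_2fi(X, y):
    # X: rows of coded factors (A, B, C); fits the full 2FI model
    # y = b0 + b1*A + b2*B + b3*C + b12*AB + b13*AC + b23*BC
    rows = [[1, a, b, c, a * b, a * c, b * c] for a, b, c in X]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    return solve(XtX, Xty)  # normal equations: (X'X) b = X'y
```

With an orthogonal 2^3 factorial the interaction columns are uncorrelated, so the fitted coefficients recover the generating model exactly; an I-optimal design as used in the study trades that orthogonality for lower prediction variance.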
The Worldwide LHC Computing Grid relies on the network as a critical part of its infrastructure and therefore needs to guarantee effective network usage and prompt detection and resolution of any network issues, including connection failures, congestion and traffic routing. The WLCG Network and Transfer Metrics project aims to integrate and combine all network-related monitoring data collected by the WLCG infrastructure. This includes FTS monitoring information, monitoring data from the XRootD federation, as well as results of the perfSONAR tests. The main challenge consists of further integrating and analyzing this information in order to allow optimization of the data transfer and workload management systems of the LHC experiments. In this contribution, we present our activity in commissioning the WLCG perfSONAR network and integrating network and transfer metrics: we motivate the need for network performance monitoring, describe the main use cases of the LHC experiments, and report on the status and evolution of configuration and capacity management, datastore and analytics (including the integration of transfer and network metrics), and operations and support.
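The core integration step described above is joining transfer metrics (e.g. FTS throughput) with network metrics (e.g. perfSONAR latency and packet loss) on the source/destination site pair. A minimal sketch of that join, with purely illustrative data shapes and site names:

```python
def merge_metrics(transfers, perfsonar):
    # transfers: {(src, dst): throughput_MBps}
    # perfsonar: {(src, dst): {"latency_ms": ..., "loss_pct": ...}}
    # Returns one record per transfer link, enriched with whatever
    # network measurements exist for the same site pair.
    merged = {}
    for pair, throughput in transfers.items():
        merged[pair] = {"throughput_MBps": throughput,
                        **perfsonar.get(pair, {})}
    return merged
```

Once combined like this, a low-throughput link with high packet loss points at a network problem, while low throughput on a clean link points at the transfer or storage layer.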
MAD - Monitoring ALICE Dataflow. Barroso, V Chibante; Costa, F; Grigoras, C et al. Journal of Physics: Conference Series, 01/2015, Volume 664, Issue 8. Journal article, peer reviewed, open access.
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Following a successful Run 1, which ended in February 2013, the ALICE data acquisition (DAQ) entered a consolidation phase to prepare for Run 2, which will start in the beginning of 2015. A new software tool has been developed by the data acquisition project to improve the monitoring of the experiment's dataflow, from the data readout in the DAQ farm up to its shipment to CERN's main computer centre. This software, called ALICE MAD (Monitoring ALICE Dataflow), uses the MonALISA framework as its core module to gather, process, aggregate and distribute monitoring values from the different processes running in the distributed DAQ farm. Data are not only pulled from the data sources by MAD but can also be pushed by dedicated data collectors or by the data source processes themselves. A large set of monitored metrics (from the backpressure status on the readout links to event counters in each of the DAQ nodes and aggregated data rates for the whole data acquisition) is needed to provide a comprehensive view of the DAQ status. MAD also injects alarms into the Orthos alarm system whenever abnormal conditions are detected. The MAD web-based GUI uses WebSockets to provide dynamic, up-to-date status displays for the ALICE shift crew. Designed as a widget-based system, MAD supports easy integration of new visualization blocks as well as customization of the information displayed to the shift crew based on the ALICE activities.
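The abstract distinguishes two collection modes: values pushed by data sources or collectors, and values pulled (polled) by the monitoring core, which then aggregates per-node metrics into farm-wide totals. A minimal sketch of that dual push/pull pattern, with a hypothetical `MetricAggregator` (not MAD's actual API):

```python
import time
from typing import Callable, Dict, Tuple

class MetricAggregator:
    """Illustrative MAD-like aggregator: accepts pushed values and
    polls registered sources, then aggregates by metric-name prefix."""

    def __init__(self):
        self.values: Dict[str, Tuple[float, float]] = {}  # name -> (value, ts)
        self.sources: Dict[str, Callable[[], float]] = {}

    def push(self, name: str, value: float) -> None:
        # Push mode: a data source or collector sends us a value directly.
        self.values[name] = (value, time.time())

    def register(self, name: str, reader: Callable[[], float]) -> None:
        # Pull mode: we will poll this callback for the current value.
        self.sources[name] = reader

    def poll(self) -> None:
        for name, reader in self.sources.items():
            self.push(name, reader())

    def aggregate(self, prefix: str) -> float:
        # e.g. sum per-node data rates into a farm-wide total
        return sum(v for name, (v, _) in self.values.items()
                   if name.startswith(prefix))
```

A WebSocket front end, as in the MAD GUI, would subscribe to such aggregated values and push updates to the browser as they change.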
IPv6 Security. Babik, M; Chudoba, J; Dewhurst, A et al. Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 10. Journal article, peer reviewed, open access.
IPv4 network addresses are running out and the deployment of IPv6 networking in many places is now well underway. Following the work of the HEPiX IPv6 Working Group, a growing number of sites in the Worldwide LHC Computing Grid (WLCG) are deploying dual-stack IPv6/IPv4 services. The aim of this is to support the use of IPv6-only clients, i.e. worker nodes, virtual machines or containers. The IPv6 networking protocols, while they do contain features aimed at improving security, also bring new challenges for operational IT security. The lack of maturity of IPv6 implementations, together with the increased complexity of some of the protocol standards, raises many new issues for operational security teams. The HEPiX IPv6 Working Group is producing guidance on best practices in this area. This paper considers some of the security concerns for WLCG in an IPv6 world and presents the HEPiX IPv6 Working Group guidance for the system administrators who manage IT services on the WLCG distributed infrastructure, for their related site security and networking teams, and for developers and software engineers working on WLCG applications.
In order to investigate the effect of rotaxane formation on the photophysical and morphological properties of π‐conjugated materials, a new main chain polyrotaxane was synthesized through Suzuki coupling between the inclusion complex of 5,5'‐dibromo‐2,2'‐bithiophene with randomly methylated β‐cyclodextrin and 9,9‐dioctylfluorene‐2,7‐trimethylene diborate. Due to rotaxane formation, a blue shift in the absorption spectra as well as in the fluorescence spectra was observed, while the fluorescence quantum yields and fluorescence lifetimes remained unchanged. This study demonstrates that rotaxane formation can alter the electronic and morphological properties of the threaded copolymer, which is of great interest for electronic applications.
The formation of a rotaxane architecture modifies the optoelectronic, morphological, and adhesion properties of fluorene/bithiophene copolymers. This shows either that rotaxane formation has no deaggregating effect on the copolymers or that the rotaxane copolymer is not prone to aggregation in either solution or thin films. Rotaxane formation leads to improved thermal stability and a blue shift in absorption.
The fraction of Internet traffic carried over IPv6 continues to grow rapidly. IPv6 support from network hardware vendors and carriers is pervasive and maturing. A network infrastructure upgrade often offers sites an excellent window of opportunity to configure and enable IPv6. There is a significant overhead in setting up and maintaining dual-stack machines, so where possible sites would like to upgrade their services directly to IPv6 only; in doing so, they also expedite the transition process towards its desired completion. While the LHC experiments accept that there is a need to move to IPv6, it is currently not directly affecting their work, and sites are unwilling to upgrade if they will be unable to run LHC experiment workflows. This has resulted in a very slow uptake of IPv6 at WLCG sites. For several years the HEPiX IPv6 Working Group has been testing a range of WLCG services to ensure they are IPv6 compliant. Several sites are now running many of their services as dual-stack. The working group, driven by the requirement of the LHC VOs to be able to use IPv6-only opportunistic resources, continues to encourage wider deployment of dual-stack services to make the use of such IPv6-only clients viable. This paper presents the working group's plan and progress so far in allowing sites to deploy IPv6-only CPU resources. This includes making experiment central services dual-stack, as well as a number of storage services. The monitoring, accounting and information services that are used by jobs also need to be upgraded. Finally, the VO testing that has taken place on hosts connected via IPv6 only is reported.
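A first compliance check for a service named in a dual-stack migration is simply whether its hostname resolves to both A (IPv4) and AAAA (IPv6) records. The sketch below uses the standard `socket.getaddrinfo` call for that; it only tests name resolution, not actual reachability over each protocol, and the function name is our own:

```python
import socket

def stack_support(hostname, port=443):
    """Report which address families a service hostname resolves to."""
    families = set()
    try:
        for family, *_ in socket.getaddrinfo(hostname, port,
                                             proto=socket.IPPROTO_TCP):
            families.add(family)
    except socket.gaierror:
        pass  # name does not resolve at all
    return {
        "ipv4": socket.AF_INET in families,
        "ipv6": socket.AF_INET6 in families,
        "dual_stack": {socket.AF_INET, socket.AF_INET6} <= families,
    }
```

A service that reports `"ipv6": False` here would be invisible to the IPv6-only worker nodes and containers discussed above; a full compliance test would additionally open a connection over each family.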