The File Access Monitoring Service (FAMoS) leverages the information stored in the central AliEn file catalogue, which describes every file in a Unix-like directory structure, as well as metadata on file location and its replicas. In addition, it uses the access information provided by a set of API servers used by all Grid clients to access the catalogue. The main functions of FAMoS are to sort file accesses by logical group, access time, user and storage element. The collected data identifies rarely used groups of files, as well as those with high popularity over different time periods. This information can be used to optimize file distribution and replication factors, thus increasing data-processing efficiency. The paper describes the FAMoS structure and user interface and presents the results obtained in one year of service operation.
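A minimal sketch of the kind of aggregation FAMoS performs over access records. The record layout, the grouping rule (parent directory as "logical group") and all names are assumptions for illustration, not the service's actual schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical access records as the API servers might report them:
# (logical file name, user, storage element, access time).
accesses = [
    ("/alice/data/2010/run001/f1.root", "user_a", "SE_CERN", datetime(2010, 5, 1)),
    ("/alice/data/2010/run001/f1.root", "user_b", "SE_CERN", datetime(2010, 5, 2)),
    ("/alice/data/2010/run002/f2.root", "user_a", "SE_FZK",  datetime(2009, 1, 10)),
]

def popularity(records, since):
    """Count accesses per logical group (here: parent directory)
    within a given time window; groups absent from the result are
    the rarely used candidates for reduced replication."""
    counts = Counter()
    for lfn, user, se, when in records:
        if when >= since:
            group = lfn.rsplit("/", 1)[0]  # group files by directory
            counts[group] += 1
    return counts

recent = popularity(accesses, datetime(2010, 1, 1))
# run001 was accessed twice in the window; run002 not at all
```

The same counting could be keyed by user or storage element instead of directory, which is how the per-user and per-SE views described above would fall out of one pass over the records.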
AliEn: ALICE environment on the GRID
Bagnasco, S; Betev, L; Buncic, P et al.
Journal of Physics: Conference Series, 07/2008, Volume 119, Issue 6
Journal Article · Peer reviewed · Open access
Starting from mid-2008, the ALICE detector at the CERN LHC will collect data at a rate of 4 PB per year. ALICE will use exclusively distributed Grid resources to store, process and analyse this data. The top-level management of the Grid resources is done through the AliEn (ALICE Environment) system, which has been in continuous development since 2000. AliEn presents several original solutions, which have shown their viability in a number of large exercises of increasing complexity called Data Challenges. This paper describes the AliEn architecture: Job Management, Data Management and the user interface. The current status of AliEn is illustrated, as well as the performance of the system during the Data Challenges. The paper also describes the future AliEn development roadmap.
The MonALISA (Monitoring Agents using a Large Integrated Services Architecture) framework provides a distributed service system capable of controlling and optimizing large-scale, data-intensive applications. An essential part of managing large-scale, distributed data-processing facilities is a monitoring system covering computing facilities, storage, networks, and the very large number of applications running on these systems in near real time. All the monitoring information gathered for these subsystems is essential for developing the required higher-level services—the components that provide decision support and some degree of automated decisions—and for maintaining and optimizing workflow in large-scale distributed systems. These management and global optimization functions are performed by higher-level agent-based services. We present several applications of MonALISA's higher-level services, including optimized dynamic routing, control, data-transfer scheduling, distributed job scheduling, dynamic allocation of storage resources to running jobs and automated management of remote services among a large set of Grid facilities.
A series of bismaleimide monomers were used to prepare poly(aminobismaleimide)s in an attempt to achieve polymers with improved properties. Structurally different bismaleimides with ester units were synthesized by reaction of 3(4)-maleimidobenzoyl chloride with various diphenols. Bismaleimides BMI-3 and BMI-4 were synthesized by reaction of maleic anhydride with the corresponding diamines. Polymers based on these bismaleimides were prepared by Michael addition of diamines to bismaleimides. The monomers and polymers were characterized by infrared (IR) and proton nuclear magnetic resonance (¹H NMR) spectroscopy. Thermal characterization of monomers and polymers was accomplished by differential scanning calorimetry (DSC) and dynamic thermogravimetric analysis (TGA).
The production deployment of IPv6 on WLCG
Bernier, J; Campana, S; Chadwick, K et al.
Journal of Physics: Conference Series, 01/2015, Volume 664, Issue 5
Journal Article · Peer reviewed · Open access
The world is rapidly running out of IPv4 addresses; the number of IPv6 end systems connected to the internet is increasing; WLCG and the LHC experiments may soon have access to worker nodes and/or virtual machines (VMs) possessing only an IPv6 routable address. The HEPiX IPv6 Working Group has been investigating, testing and planning for dual-stack services on WLCG for several years. Following feedback from our working group, many of the storage technologies in use on WLCG have recently been made IPv6-capable. This paper presents the IPv6 requirements, tests and plans of the LHC experiments together with the tests performed on the group's IPv6 test-bed. These tests are primarily aimed at IPv6-only worker nodes or VMs accessing several different implementations of a global dual-stack federated storage service. Finally, the plans for deployment of production dual-stack WLCG services are presented.
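Whether a service publishes both protocol families can be probed with a standard address lookup. This small check is an illustrative sketch of the dual-stack idea, not part of the working group's actual test suite:

```python
import socket

def address_families(host, port=443):
    """Return the set of IP address families a host resolves to.
    A dual-stack service should yield both AF_INET (IPv4) and
    AF_INET6 (IPv6) records; an IPv6-only client can use the
    service only if AF_INET6 is present."""
    try:
        infos = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)
    except socket.gaierror:
        return set()
    return {info[0] for info in infos}

# Numeric literals resolve without any network access:
# address_families("127.0.0.1") yields {AF_INET}
# address_families("::1")       yields {AF_INET6}
```

A dual-stack hostname would return both families; a result containing only `AF_INET` flags a service that an IPv6-only worker node could not reach.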
The ALICE collaboration has developed a production environment (AliEn) that implements the full set of Grid tools enabling the complete offline computational workflow of the experiment—simulation, reconstruction and data analysis—in a distributed and heterogeneous computing environment. In addition to the analysis on the Grid, ALICE uses a set of local interactive analysis facilities installed with the Parallel ROOT Facility (PROOF). PROOF enables physicists to analyze medium-sized (order of 200-300 TB) data sets on a short time scale. The default installation of PROOF is on a static dedicated cluster, typically 200-300 cores. This well-proven approach has its limitations, in particular for the analysis of larger datasets or when the installation of a dedicated cluster is not possible. Using a new framework called PoD (PROOF on Demand), PROOF can be used directly on Grid-enabled clusters, by dynamically assigning interactive nodes on user request. The integration of PROOF on Demand in the AliEn framework provides private dynamic PROOF clusters as a Grid service. This functionality is transparent to the user, who submits interactive jobs to the AliEn system.
Following a previous publication [1], this study investigates the impact of regional affiliations of centres on the organisation of collaboration within the ALICE distributed computing infrastructure, using social-network methods. A self-administered questionnaire was sent to all centre managers covering support, email interactions and desired collaborations within the infrastructure. Several additional measures stemming from technical observations, such as bandwidth, data transfers and Internet Round Trip Time (RTT), were also included. Information for 50 centres was considered (60% response rate). The empirical analysis shows that, despite the centralisation on CERN, the network is highly organised by region. The results are discussed in the light of policy and efficiency issues.
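The centralisation finding rests on a standard network measure. A minimal degree-centrality sketch in plain Python; the centre names and ties below are invented for illustration, not data from the study:

```python
from collections import defaultdict

# Illustrative email-interaction ties between centres; these edges are
# hypothetical, not the questionnaire responses analysed in the paper.
edges = [
    ("CERN", "CCIN2P3"), ("CERN", "FZK"), ("CERN", "CNAF"),
    ("CERN", "KISTI"), ("CCIN2P3", "CNAF"),
]

def degree_centrality(edge_list):
    """Degree of each node — the simplest social-network centrality
    measure: how many distinct ties a centre participates in."""
    deg = defaultdict(int)
    for a, b in edge_list:
        deg[a] += 1
        deg[b] += 1
    return dict(deg)

deg = degree_centrality(edges)
# CERN has the highest degree in this toy graph, mirroring the
# reported centralisation of the network on CERN.
```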
With the startup of the LHC, the ALICE detector will collect data at a rate that, after two years, will reach 4 PB per year. To process such a large amount of data, ALICE has developed AliEn, a distributed computing environment integrated with the WLCG environment. The ALICE environment presents several original solutions, which have shown their viability in a number of large exercises of increasing complexity called ALICE Data Challenges. Within the ALICE distributed computing environment, the AliEn Workload Management System (WMS) was created to submit jobs to the WLCG infrastructure, and has played a crucial role in achieving these results. ALICE has more than 80 sites distributed all over the world, and this WMS, together with the operations management structure defined by the experiment, has demonstrated a reliability and performance level ready for the start of data taking at the end of the year. In this talk we focus on the description and current status of the AliEn WMS, emphasizing the latest functionalities that have been included to handle, from a single entry point, the different matchmaking services of WLCG (lcg-RB, gLite WMS) as well as the CREAM Computing Element; the latter has been extensively tested by the experiment during summer 2008.
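The single-entry-point pattern the abstract describes can be sketched as a dispatcher over interchangeable back-ends. The back-end names match those listed above, but the function, its signature and its return values are invented for illustration, not AliEn's real interface:

```python
# Hypothetical routing table from CE type to a submission routine;
# each lambda stands in for a real matchmaking-service client.
BACKENDS = {
    "lcg-RB":    lambda job: f"{job}: matched via the LCG Resource Broker",
    "gLite-WMS": lambda job: f"{job}: matched via the gLite WMS",
    "CREAM":     lambda job: f"{job}: submitted directly to a CREAM CE",
}

def submit(job, ce_type):
    """Single entry point: dispatch a job description to the selected
    matchmaking service without the caller knowing back-end details."""
    if ce_type not in BACKENDS:
        raise ValueError(f"unknown CE type: {ce_type}")
    return BACKENDS[ce_type](job)
```

The point of the pattern is that adding a new computing-element flavour (as CREAM was added) only extends the table; callers keep submitting through the same interface.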
The HEPiX (http://www.hepix.org) IPv6 Working Group has been investigating the many issues which feed into the decision on the timetable for the use of the IPv6 (http://www.ietf.org/rfc/rfc2460.txt) networking protocol in High Energy Physics (HEP) computing, in particular in the Worldwide LHC Computing Grid (WLCG). RIPE NCC, the European Regional Internet Registry (RIR), ran out of IPv4 addresses in September 2012. The North and South American RIRs are expected to run out soon. In recent months it has become clearer that some WLCG sites, including CERN, are running short of IPv4 address space, now without the possibility of applying for more. This has increased the urgency for the switch-on of dual-stack IPv4/IPv6 on all outward-facing WLCG services to allow for the eventual support of IPv6-only clients. The activities of the group include the analysis and testing of the IPv6 readiness and performance of many required components, including the applications, middleware, management and monitoring tools essential for HEP computing. Many WLCG Tier-1/2 sites participate in the group's distributed IPv6 testbed, and the major LHC experiment collaborations are engaged in the testing. We are constructing a group web/wiki which will contain useful information on the IPv6 readiness of the various software components, together with a knowledge base (http://hepix-ipv6.web.cern.ch/knowledge-base). This paper describes the work done by the working group and its future plans.