KM3NeT-Italy is an INFN project that will develop the central part of a submarine cubic-kilometer neutrino telescope in the Ionian Sea, at about 80 km from the Sicilian coast (Italy). It will use hundreds of distributed optical modules to measure the Cherenkov light emitted by high-energy muons, a signal with a very unfavourable signal-to-noise ratio. In this contribution the Trigger and Data Acquisition System (TriDAS) developed for the KM3NeT-Italy detector is presented. The "all data to shore" approach is adopted to reduce the complexity of the submarine detector: at the shore station the TriDAS collects, processes and filters all the data coming from the detector, storing triggered events in permanent storage for subsequent analysis. Due to the large optical background in the sea from 40K decays and bioluminescence, the throughput from the sea can reach 30 Gbps. This puts strong constraints on the performance of the TriDAS processes and the related network infrastructure.
The LHCb experiment will undergo a major upgrade during the second long shutdown (2019-2020). The upgrade will concern both the detector and the Data Acquisition system, which are to be rebuilt in order to optimally exploit the foreseen higher event rate. The Event Builder is the key component of the DAQ system, since it gathers data from the sub-detectors and assembles the whole event. The Event Builder network has to manage an incoming data rate of 32 Tb/s from a 40 MHz bunch-crossing frequency, with a cardinality of about 500 nodes. In this contribution we present an Event Builder implementation based on the InfiniBand network technology. This software relies on InfiniBand verbs, which offer a user-space interface to the Remote Direct Memory Access capabilities provided by InfiniBand network devices. We present the performance of the software on a cluster connected with a 100 Gb/s InfiniBand network.
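The figures quoted above can be cross-checked with a back-of-the-envelope calculation; the derived event size and per-node rate below are estimates implied by the abstract's numbers, not values taken from the LHCb design documents:

```python
# Rough consistency check of the Event Builder requirements (hedged
# estimates derived only from the aggregate figures quoted above).

TOTAL_RATE_BPS = 32e12      # 32 Tb/s aggregate input rate
BUNCH_CROSSING_HZ = 40e6    # 40 MHz bunch-crossing frequency
NODES = 500                 # approximate Event Builder cardinality

# Average event size implied by the aggregate rate:
event_size_bits = TOTAL_RATE_BPS / BUNCH_CROSSING_HZ
event_size_kB = event_size_bits / 8 / 1e3    # -> 100 kB per event

# Average per-node throughput, which motivates 100 Gb/s links:
per_node_gbps = TOTAL_RATE_BPS / NODES / 1e9  # -> 64 Gb/s per node
```

The ~64 Gb/s average per node, before any protocol overhead or traffic imbalance, makes clear why 100 Gb/s InfiniBand links are the natural choice.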
The LHCb experiment at CERN has decided to optimise its physics reach by removing the first-level hardware trigger for 2020 and beyond. In addition to requiring fully redesigned front-end electronics, this design creates interesting challenges for the data acquisition and the rest of the online computing system. Such a system can only be realised at realistic cost by using as much off-the-shelf hardware as possible. The relevant technologies evolve very quickly, so the system design is architecture-centred and tries to avoid depending too heavily on specific technologies. In this paper we describe the design, the motivations for the various choices, the currently favoured implementation options, and the status of the R&D. We cover the back-end readout, which contains the only custom-made component, the event building, the event-filter infrastructure, and storage.
KM3NeT-Italia is an INFN project supported by Italian PON funding for building the core of the Italian node of the KM3NeT neutrino telescope. The detector, made of 700 10′′ Optical Modules (OMs) hosted along 8 vertical structures called towers, will be deployed starting from fall 2015 at the KM3NeT-Italy site, about 80 km off Capo Passero, Italy, at a depth of 3500 m. The all-data-to-shore approach is used to reduce the complexity of the submarine detector, which demands an on-line trigger integrated in the data acquisition system running in the shore station, called TriDAS. Due to the large optical background in the sea from 40K decays and bioluminescence, the throughput from the underwater detector can reach 30 Gbps. This puts strong constraints on the design and performance of the TriDAS and of the related network infrastructure. In this contribution the technology behind the implementation of the TriDAS infrastructure is reviewed, focusing on the relationship between the various components and their performance. The modular design of the TriDAS, which allows it to scale beyond the 8-tower configuration to a larger detector, is also discussed.
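The figures above imply a modest average rate per optical module; the following is a hedged back-of-the-envelope estimate derived only from the abstract's numbers (actual per-OM rates vary strongly with the optical background):

```python
# Average per-OM data rate implied by the quoted totals (rough estimate,
# not a figure from the KM3NeT-Italia design).

THROUGHPUT_BPS = 30e9   # up to 30 Gbps from the underwater detector
OPTICAL_MODULES = 700   # 700 OMs on 8 towers

per_om_mbps = THROUGHPUT_BPS / OPTICAL_MODULES / 1e6  # ~ 43 Mb/s per OM
```

This shows the challenge lies less in each individual link than in the aggregation: the shore-station TriDAS must sustain the full 30 Gbps concentrated stream.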
A recent trend in scientific computing is the increasingly important role of co-processors, originally built to accelerate graphics rendering and now used for general high-performance computing. The INFN Computing On Knights and Kepler Architectures (COKA) project focuses on assessing the suitability of co-processor boards for scientific computing in a wide range of physics applications, and on studying the best programming methodologies for these systems. Here we present a comparative study of porting a Lattice Boltzmann code to two state-of-the-art accelerators: the NVIDIA K20X and the Intel Xeon Phi. We describe our implementations, analyse the results and compare them with a baseline architecture adopting Intel Sandy Bridge CPUs.
This paper describes the design and the current state of implementation of an infrastructure made available to software developers within the Italian National Institute for Nuclear Physics (INFN) to support and facilitate their daily activity. The infrastructure integrates several tools, each providing a well-identified function: project management, version control, continuous integration, dynamic provisioning of virtual machines, efficiency improvement, and a knowledge base. Where applicable, access to the services is based on the INFN-wide Authentication and Authorization Infrastructure. The system is being installed and progressively made available to INFN users across tens of sites and laboratories, and will represent a solid foundation for the software development efforts of the many experiments and projects that involve the Institute. The infrastructure will be especially beneficial for small- and medium-size collaborations, which often cannot afford the resources, particularly in terms of know-how, needed to set up such services.
A job submission and management tool is a necessary component of any distributed computing system. Such a tool should provide a user-friendly interface for physics production groups and ordinary analysis users to access heterogeneous computing resources, without requiring knowledge of the underlying grid middleware. Ganga, with its common framework and customizable plug-in structure, is such a tool. This paper describes how experiment-specific job management tools for BESIII and SuperB were developed as Ganga plug-ins to meet their own unique requirements, and discusses and contrasts the challenges met and the lessons learned.
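The plug-in pattern described above, where a common framework dispatches to experiment-specific backends, can be sketched as follows. All class and method names here are hypothetical illustrations; this is not the real Ganga API:

```python
# Minimal sketch of a common framework with experiment-specific plug-ins
# (hypothetical names; illustrative only, not the Ganga interface).

class JobBackend:
    """Common interface the framework expects from every plug-in."""
    registry = {}

    @classmethod
    def register(cls, name):
        def wrap(plugin):
            cls.registry[name] = plugin   # make the plug-in discoverable
            return plugin
        return wrap

    def submit(self, job):
        raise NotImplementedError

@JobBackend.register("besiii")
class BESIIIBackend(JobBackend):
    def submit(self, job):
        # experiment-specific submission logic would go here
        return f"BESIII job '{job}' submitted"

@JobBackend.register("superb")
class SuperBBackend(JobBackend):
    def submit(self, job):
        return f"SuperB job '{job}' submitted"

# The framework selects the right backend by name, so users never
# touch the underlying middleware directly:
backend = JobBackend.registry["besiii"]()
```

The point of the pattern is that the framework's user-facing interface stays identical while each experiment swaps in its own submission logic.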
The SuperB asymmetric-energy e+e− collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavour sector of the Standard Model. The SuperB distributed computing group performed a detailed evaluation of the DIRAC distributed infrastructure for use in the SuperB experiment, based on two use cases: end-user analysis and Monte Carlo production. The tests aim to evaluate DIRAC's ability to manage both gLite and OSG sites, its File Catalog management, and its job and data management features in realistic SuperB use cases.
The SuperB asymmetric-energy e+e− collider and detector, to be built at the newly founded Nicola Cabibbo Lab, will provide a uniquely sensitive probe of New Physics in the flavor sector of the Standard Model. Studying minute effects in the heavy-quark and heavy-lepton sectors requires a data sample of 75 ab^−1 and a luminosity target of 10^36 cm^−2 s^−1. This luminosity translates into the requirement of storing more than 50 PByte of additional data each year, making SuperB an interesting challenge for the data management infrastructure, both at the site level and at the Wide Area Network level. A new Tier1, distributed among 3 or 4 sites in the south of Italy, is planned as part of the SuperB computing infrastructure. Data storage is thus a key topic, whose development affects how the storage infrastructure is configured and set up, both within a local computing cluster and in a distributed paradigm. In this work we report on tests of software for data distribution and data replication, focusing on our experience with Hadoop and GlusterFS.
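The 50 PByte/year figure above implies a substantial sustained ingest rate; the following is a hedged average (real data-taking is not uniform over the year, and a decimal petabyte convention is assumed):

```python
# Average ingest rate implied by 50 PB of additional data per year
# (back-of-the-envelope estimate, not a SuperB design figure).

PB = 1e15                          # petabyte, decimal convention assumed
SECONDS_PER_YEAR = 365 * 24 * 3600

avg_gbps = 50 * PB * 8 / SECONDS_PER_YEAR / 1e9  # ~ 12.7 Gb/s sustained
```

A year-round average above 12 Gb/s, with replication multiplying the internal traffic, is what makes the choice of distributed storage software such as Hadoop or GlusterFS critical.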