Summary
Invasive fungal infections (IFI) of the central nervous system (IFI-CNS) and paranasal sinuses (IFI-PS) are rare, life-threatening infections in haematologic patients, and their management remains a challenge despite the availability of new diagnostic techniques and novel antifungal agents. In addition, analyses of large cohorts of patients focusing on these rare IFI are still lacking. Between January 2010 and December 2016, 89 consecutive cases of proven (53) or probable (36) IFI-CNS (71/89) and IFI-PS (18/89) were collected in 34 haematological centres. The median age was 40 years (range 5-79); acute leukaemia was the most common underlying disease (69%), and 29% of cases had received a previous allogeneic stem cell transplant. Aspergillus spp. were the most common pathogens (69%), followed by mucormycetes (22%), Cryptococcus spp. (4%) and Fusarium spp. (2%). The lung was the primary focus of fungal infection (48% of cases). A nervous system biopsy was performed in 10% of IFI-CNS cases, whereas a sinus biopsy was performed in 56% of IFI-PS cases (P = 0.03). The galactomannan test on cerebrospinal fluid was performed in 42% of IFI-CNS cases (30/71) and was positive in 67%. Eighty-four patients received first-line antifungal therapy: amphotericin B in 58% of cases, voriconazole in 31% and both in 11%. Moreover, 58% of patients received two or more lines of therapy, and 38% were treated with a combination of two or more antifungal drugs. The median duration of antifungal therapy was 60 days (range 5-835). A surgical intervention was performed in 26% of cases, but only 10% of IFI-CNS cases underwent neurosurgery. The overall response rate to antifungal therapy (complete or partial response) was 57%, and 1-year overall survival was 32%, without significant differences between IFI-CNS and IFI-PS. The overall mortality was 69%, but the IFI-attributable mortality was 33%. Mortality of IFI-CNS/PS remains high but, compared with historical data, it appears to have decreased, probably owing to the availability of newer antifungal drugs. The results from this large contemporary cohort may allow a more effective diagnostic and therapeutic management of these very rare IFI complications in haematologic patients.
The Open Computing Cluster for Advanced data Manipulation (OCCAM) is a multipurpose, flexible HPC cluster designed and operated by a collaboration between the University of Torino and the Sezione di Torino of the Istituto Nazionale di Fisica Nucleare. It aims to provide a flexible, reconfigurable and extendable infrastructure catering to a wide range of scientific computing use cases, including solid-state chemistry, high-energy physics, computer science, big data analytics, computational biology, genomics and many others. Furthermore, it will serve as a platform for R&D activities on computational technologies themselves, with topics ranging from GPU acceleration to cloud computing technologies. A heterogeneous and reconfigurable system like this poses a number of challenges related to the frequency at which heterogeneous hardware resources might change their availability and shareability status, which in turn affects the methods and means to allocate, manage, optimize, bill and monitor VMs, containers, virtual farms, jobs, interactive bare-metal sessions, and so on. This work describes some of the use cases that prompted the design and construction of the HPC cluster, its architecture and resource provisioning model, along with a first characterization of its performance using synthetic benchmark tools and a few realistic use-case tests.
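To give a flavour of what a synthetic floating-point benchmark measures, the minimal Python sketch below estimates dense matrix-multiply throughput; it is illustrative only and is not one of the benchmark tools actually used for the characterization in the paper.

```python
# Minimal synthetic floating-point benchmark sketch (illustrative only).
import time
import numpy as np

def matmul_gflops(n: int = 4096, repeats: int = 3) -> float:
    """Estimate dense matrix-multiply throughput in GFLOP/s."""
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b                                  # the timed kernel
        best = min(best, time.perf_counter() - t0)
    flops = 2 * n**3                           # multiply-add count of an n x n matmul
    return flops / best / 1e9

if __name__ == "__main__":
    print(f"~{matmul_gflops():.1f} GFLOP/s")
```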
Background
Fungal infections remain a relevant challenge for clinicians involved in the care of patients with cancer. We retrospectively reviewed the charts of hospitalized patients with haematological malignancies (HMs) in whom a documented fungaemia was diagnosed between January 2011 and December 2015 at 28 adult and 6 paediatric Italian Haematology Departments.
Methods
During the study period, we recorded 215 fungal bloodstream infections (BSI). Microbiological analyses documented that the BSI was due to moulds in 17 patients (8%) and to yeasts in 198 patients (92%), with Candida spp. identified in 174 patients (81%).
Results
Mortality rates were 70% and 39% for mould and yeast infections, respectively. Infection was the main cause of death in 53% of the mould group and 18% of the yeast group. In the multivariate analysis, ECOG ≥ 2 and septic shock were significantly associated with increased mortality, whereas removal of the central venous catheter (CVC) was found to be protective for survival. When considering patients with candidemia only, ECOG ≥ 2 and removal of the CVC were significantly associated with overall mortality.
Conclusions
Although candidemia represents a group of BSI with a good prognosis, and candidemia-related mortality is lower than that of other fungal BSI, its risk factors largely overlap with those identified for all fungaemias. Management of fungal BSI is still a complex issue, in which both patient and disease characteristics should be considered in order to tailor a personalized approach.
In the ideal limit of infinite resources, multi-tenant applications are able to scale in/out on a cloud driven only by their functional requirements. While a large public cloud may be a reasonable approximation of this condition, small scientific computing centres usually work in a saturated regime. In this case, an advanced resource allocation policy is needed in order to optimize the use of the data centre. The general topic of advanced resource scheduling is addressed by several components of the EU-funded INDIGO-DataCloud project. In this contribution, we describe the FairShare Scheduler Service (FaSS) for OpenNebula (ONE). The service satisfies resource requests according to an algorithm that prioritizes tasks based on an initial weight and on the historical resource usage of the project. The software was designed to be as unintrusive as possible in the ONE code: we keep the original ONE scheduler implementation to match requests to available resources, but the queue of pending jobs is reordered according to the priorities delivered by FaSS. The FaSS implementation is still being finalized; in this contribution, we describe the functional and design requirements the module should satisfy, as well as its high-level architecture.
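As a rough illustration of the kind of fair-share computation sketched above, consider the following Python fragment. The decay factor, project weights and field names are assumptions made for illustration, not the actual FaSS algorithm or data model.

```python
# Hedged sketch of a fair-share priority computation: static weight divided
# by decayed historical usage, used only to reorder the pending queue.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    weight: float        # initial (static) share assigned to the project
    usage_history: list  # resource units consumed per past window, newest first

def priority(p: Project, decay: float = 0.5) -> float:
    # Exponentially decay past usage so recent consumption dominates.
    effective_usage = sum(u * decay**i for i, u in enumerate(p.usage_history))
    # A higher static weight raises priority; heavier past usage lowers it.
    return p.weight / (1.0 + effective_usage)

pending = [Project("alice-t2", 3.0, [120, 80]), Project("besiii", 1.0, [5, 2])]
# Reorder the pending queue by descending priority; matching requests to
# resources is then left to the unmodified ONE scheduler.
queue = sorted(pending, key=priority, reverse=True)
print([p.name for p in queue])  # -> ['besiii', 'alice-t2']
```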
The private cloud at the Torino INFN computing centre offers IaaS services to different scientific computing applications. The infrastructure is managed with the OpenNebula cloud controller. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BES-III collaboration, plus an increasing number of other small tenants. Beyond keeping track of usage, automating the dynamic allocation of resources to tenants requires detailed monitoring and accounting of resource usage. As a first step in this direction, we set up a monitoring system to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana stack. In the current implementation, the heterogeneous accounting information is fed to different MySQL databases and sent to Elasticsearch via a custom Logstash plugin. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service, which is also used for other accounting purposes. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BES-III virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. Each of these three cases is indexed separately in Elasticsearch. We are now considering dropping the intermediate SQL layer and evaluating a NoSQL option as a single central database for all the monitoring information. We set up a series of Kibana dashboards with predefined queries in order to monitor the relevant information in each case. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools.
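The flow from the SQL back-end into Elasticsearch can be pictured with a short Python sketch. In the deployed system this step is performed by the custom Logstash plugin; the hostnames, credentials, table and index names below are placeholders, not the site's actual schema.

```python
# Hedged sketch of the MySQL-to-Elasticsearch accounting flow.
import pymysql
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
db = pymysql.connect(host="localhost", user="accounting",
                     password="secret", database="iaas_accounting")

with db.cursor(pymysql.cursors.DictCursor) as cur:
    cur.execute("SELECT id, vm_id, cpu_hours, ts FROM usage_records")
    for row in cur.fetchall():
        # One document per accounting record, keyed by the SQL primary key
        # so re-runs update documents instead of duplicating them.
        es.index(index="iaas-accounting", id=row["id"],
                 document={"vm_id": row["vm_id"],
                           "cpu_hours": float(row["cpu_hours"]),
                           "@timestamp": row["ts"].isoformat()})
```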
This paper describes the achievements of the H2020 project INDIGO-DataCloud. The project has provided e-infrastructures with tools, applications and cloud framework enhancements to manage the demanding requirements of scientific communities, either locally or through enhanced interfaces. The middleware developed makes it possible to federate hybrid resources and to easily write, port and run scientific applications on the cloud. In particular, we have extended existing PaaS (Platform as a Service) solutions, allowing public and private e-infrastructures, including those provided by EGI, EUDAT and Helix Nebula, to integrate their existing services and make them available through AAI services compliant with GEANT interfederation policies, thus guaranteeing transparency and trust in the provisioning of such services. Our middleware facilitates the execution of applications using containers on cloud- and grid-based infrastructures, as well as on HPC clusters. Our developments are freely downloadable as open-source components and are already being integrated into many scientific applications.
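As an illustration of the unprivileged container execution that makes HPC clusters usable in this picture, the Python sketch below drives udocker, a tool developed within INDIGO-DataCloud; the abstract does not name specific components, so this pairing, and the image and command used, are assumptions for illustration.

```python
# Hedged sketch: run a container without root privileges via udocker,
# which is what allows container workloads on batch/HPC worker nodes.
import subprocess

subprocess.run(["udocker", "pull", "ubuntu:22.04"], check=True)
subprocess.run(["udocker", "create", "--name=app", "ubuntu:22.04"], check=True)
subprocess.run(["udocker", "run", "app", "echo", "hello from a container"],
               check=True)
```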
Elastic cloud computing applications, i.e. applications that automatically scale according to computing needs, work on the ideal assumption of infinite resources. While large public cloud infrastructures may be a reasonable approximation of this condition, scientific computing centres like WLCG Grid sites usually work in a saturated regime, in which applications compete for scarce resources through queues, priorities and scheduling policies, and keeping a fraction of the computing cores idle to allow for headroom is usually not an option. In our particular environment, one of the applications (a WLCG Tier-2 Grid site) is much larger than all the others and cannot autoscale easily. Nevertheless, the other, smaller applications can benefit from automatic elasticity; we describe the implementation of this property in our infrastructure, based on the OpenNebula cloud stack, and discuss the very first operational experiences with a small number of strategies for the timely allocation and release of resources.
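A toy sketch of one possible allocation/release strategy follows; the inputs (queued jobs, idle VMs) and thresholds are illustrative assumptions, not the policies actually evaluated in the paper or the site's OpenNebula integration.

```python
# Hedged sketch of a timely allocate/release policy for a saturated site.
def scaling_decision(queued_jobs: int, idle_vms: int,
                     max_idle: int = 2, jobs_per_vm: int = 4) -> int:
    """Return how many VMs to start (>0) or terminate (<0)."""
    if queued_jobs > 0:
        # Scale out: one VM per batch of queued jobs (ceiling division).
        return -(-queued_jobs // jobs_per_vm)
    if idle_vms > max_idle:
        # Release idle capacity back to the saturated shared pool,
        # keeping a small headroom for quick restarts.
        return -(idle_vms - max_idle)
    return 0

print(scaling_decision(queued_jobs=10, idle_vms=0))  # -> 3 (start 3 VMs)
print(scaling_decision(queued_jobs=0, idle_vms=5))   # -> -3 (terminate 3 VMs)
```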
A cloud-based Virtual Analysis Facility (VAF) for the ALICE experiment at the LHC has been deployed in Bari. Similar facilities are currently running at other Italian sites, with the aim of creating a federation of interoperating farms able to provide their computing resources for interactive distributed analysis. The use of cloud technology, along with the elastic provisioning of computing resources as an alternative to the grid for running data-intensive analyses, is the main challenge of these facilities. One of the crucial aspects of user-driven analysis execution is data access. A local storage facility has the disadvantage that the stored data can be accessed only locally, i.e. from within the single VAF. To overcome this limitation, a federated infrastructure has been set up, providing full access to all the data belonging to the federation independently of the site where they are stored. The federation architecture exploits both cloud computing and XRootD technologies in order to provide a dynamic, easy-to-use and well-performing solution for data handling. It allows users to store files and retrieve data efficiently, since it implements a dynamic distributed cache among many data centres in Italy connected to one another through the high-bandwidth national network. Details of the preliminary architecture implementation and performance studies are discussed.
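From the client's side, access to such a federation can be sketched with the XRootD Python bindings; the redirector hostname and file path below are placeholders, not the federation's actual endpoints.

```python
# Hedged sketch of reading a file through an XRootD federation redirector,
# which transparently locates the nearest cached replica.
from XRootD import client

f = client.File()
status, _ = f.open("root://federation.example.it//alice/data/sample.root")
if not status.ok:
    raise IOError(status.message)
status, data = f.read(offset=0, size=1024)  # read the first kilobyte
print(len(data), "bytes read")
f.close()
```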
The INFN computing centre in Torino hosts a private cloud, managed with the OpenNebula cloud controller, which offers Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) services to different scientific computing applications. The main stakeholders of the facility are a grid Tier-2 site for the ALICE collaboration at the LHC, an interactive analysis facility for the same experiment and a grid Tier-2 site for the BESIII collaboration, plus an increasing number of other small tenants. The dynamic allocation of resources to tenants is partially automated; this feature requires detailed monitoring and accounting of resource usage. We set up a monitoring framework to inspect the site activities both in terms of IaaS and of the applications running on the hosted virtual instances. For this purpose we used the Elasticsearch, Logstash and Kibana (ELK) stack. The infrastructure relies on a MySQL database back-end for data preservation and to retain the flexibility to switch to a different monitoring solution if needed. The heterogeneous accounting information is transferred from the database to the Elasticsearch engine via a custom Logstash plugin. Each use case is indexed separately in Elasticsearch, and we set up a series of Kibana dashboards with predefined queries in order to monitor the relevant information in each case. For the IaaS metering, we developed sensors for the OpenNebula API. The IaaS-level information gathered through the API is sent to the MySQL database through a purpose-built RESTful web service. Moreover, we have developed a billing system for our private cloud, which relies on the RabbitMQ message queue for asynchronous communication with the database and on the ELK stack for its graphical interface. The Italian Grid accounting framework is also migrating to a similar set-up. Concerning the application level, we used the ROOT plugin TProofMonSenderSQL to collect accounting data from the interactive analysis facility. The BESIII virtual instances used to be monitored with Zabbix; as a proof of concept, we also retrieve the information contained in the Zabbix database. In this way we have achieved a uniform monitoring interface for both the IaaS and the scientific applications, mostly leveraging off-the-shelf tools. At present, we are working to define a model for monitoring-as-a-service, based on the tools described above, which cloud tenants can easily configure to suit their specific needs.
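The asynchronous leg of the billing system, in which a metering sensor hands usage records to RabbitMQ for a separate consumer to persist, can be pictured with the Python client below; the queue name and record fields are assumptions for illustration, not the production schema.

```python
# Hedged sketch: publish a usage record to RabbitMQ for asynchronous
# delivery to the billing database.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="billing.usage", durable=True)

record = {"tenant": "alice-t2", "vm_id": 42, "cpu_hours": 12.5}
channel.basic_publish(
    exchange="",
    routing_key="billing.usage",
    body=json.dumps(record),
    # Mark messages persistent so records survive a broker restart.
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```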