The CREAM CE implements a Grid job management service available to end users and to other higher-level Grid job submission services. It allows the submission, management and monitoring of computational jobs to local resource management systems. CREAM, which is part of the gLite Grid middleware, is available in the EGI production Grid, where it is used by several user communities in different job submission scenarios. In this paper, after a brief description of the CREAM CE architecture and functionality, we report on the status of this Grid service, focusing on the results, feedback and issues that had to be addressed. We also discuss its integration with other job submission services, in particular the gLite Workload Management System. The planned future activities concerning the maintenance and evolution of the CREAM CE are reported as well.
With the advent of the recent European Union (EU) funded projects aimed at achieving an open, coordinated and proactive collaboration among the European communities that provide distributed computing services, stricter requirements and quality standards will be demanded of middleware providers. Such a highly competitive and dynamic environment, organized to comply with a business-oriented model, has already started pursuing quality criteria, thus requiring the formal definition of rigorous procedures, interfaces and roles for each step of the software life-cycle. This will ensure quality-certified releases and updates of the Grid middleware. In the European Middleware Initiative (EMI), the release management for one or more components will be organized into Product Team (PT) units, fully responsible for delivering production-ready, quality-certified software and for coordinating with each other to contribute to the EMI release as a whole. This paper presents the certification process, with respect to integration, installation, configuration and testing, adopted at INFN by the Product Team responsible for the gLite Web-Service based Computing Element (CREAM CE) and for the Workload Management System (WMS). The resources used, the testbed layout, the integration and deployment methods, and the certification steps taken to provide feedback to developers and to guarantee quality results are described.
One of the major goals of the EMI (European Middleware Initiative) project is the integration of several components of the pre-existing middleware (ARC, gLite, UNICORE and dCache) into a single consistent set of packages with uniform distributions and repositories. These individual middleware projects have been developed over the last decade by tens of development teams, and before EMI they were all built and tested using different tools and dedicated services. The software, millions of lines of code, is written in several programming languages and supports multiple platforms. Therefore a viable solution ought to be able to build and test applications in multiple programming languages using common dependencies on all selected platforms. It should, in addition, package the resulting software in formats compatible with the popular Linux distributions, such as Fedora and Debian, and store them in repositories from which all EMI software can be accessed and installed in a uniform way. Despite this highly heterogeneous initial situation, a single common solution, with the aim of quickly automating the integration of the middleware products, had to be selected and implemented within a few months of the beginning of the EMI project. Because of the previous knowledge and the short time available in which to provide this common solution, the ETICS service, where the gLite middleware had already been built for years, was selected. This contribution describes how the team in charge of providing a common EMI build and packaging infrastructure to the whole project has developed a homogeneous solution for releasing and packaging the EMI components from the initial set of tools used by the earlier middleware projects. An important element of the presentation is the developers' experience and feedback on converging on ETICS, and the ongoing work to add more widely used and supported build and packaging solutions from the Linux platforms.
The High Throughput Computing paradigm typically involves a scenario whereby a given, estimated processing power is made available and sustained by the computing environment over a medium/long period of time. As a consequence, the performance goals are in general targeted at maximizing resource utilization to obtain the expected throughput, rather than minimizing run time for individual jobs. This does not mean that optimal resource selection through adequate workload management is neither desired nor effective; nonetheless, relatively small and pre-assessed percentages of suboptimal choices or unexpected events can be tolerated. However, there are use-cases, among the HEP community, for which the described model does not immediately fit. This paper deals with the workload needs primarily driven by the Collider Detector at Fermilab (CDF) experimental collaboration. In particular, the CDF analysis facility (CAF) typically operates by splitting its computations into so-called sections, which can be seen as sets of uniform and independent jobs. Processing a section cannot be considered completed until all its jobs have been successfully executed, thus requiring a Minimum Completion Time (MCT) dynamic scheduling policy where not even a single job should remain in non-terminal Grid states. A significant part of the CDF analysis is processed on the European Grid infrastructure through the gLite Workload Management System (WMS) [2]. This paper describes the design enhancements and ranking algorithms the WMS has been provided with to implement an adaptive scheduling policy to minimise MCT. The case study, the outlined approach and first results are presented.
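The intuition behind minimizing a section's completion time can be sketched with a greedy list-scheduling heuristic: assign each of the section's jobs to the resource whose next job would finish earliest. This is only an illustrative sketch under assumed per-job time estimates; the function, resource names and heuristic are hypothetical and are not the WMS ranking algorithm itself.

```python
import heapq

def schedule_section(num_jobs, resources):
    """Greedy assignment of a section's uniform jobs to resources.

    resources: list of (name, est_seconds_per_job, queued_jobs) tuples,
    where the estimates are assumed to be provided by the infrastructure.
    Returns (jobs assigned per resource, estimated section completion time).
    """
    # Priority queue keyed by the time at which the next job assigned
    # to each resource would finish.
    heap = [(per_job * (queued + 1), name, per_job)
            for name, per_job, queued in resources]
    heapq.heapify(heap)
    assignment = {name: 0 for name, _, _ in resources}
    makespan = 0.0
    for _ in range(num_jobs):
        finish, name, per_job = heapq.heappop(heap)
        assignment[name] += 1
        makespan = max(makespan, finish)          # section ends with its last job
        heapq.heappush(heap, (finish + per_job, name, per_job))
    return assignment, makespan
```

With two idle resources estimated at 10 s and 20 s per job, three jobs would be split 2/1 so that the slowest job, and hence the section, finishes as early as possible.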
The four High Energy Physics (HEP) detectors at the Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) are among the most important experiments in which the National Institute of Nuclear Physics (INFN) is actively involved. A Grid infrastructure of the Worldwide LHC Computing Grid (WLCG) has been developed by the HEP community leveraging broader initiatives (e.g. EGEE in Europe, OSG in northern America) as a framework to exchange and maintain data storage and provide computing infrastructure for the entire LHC community. INFN-CNAF in Bologna hosts the Italian Tier-1 site, which represents the biggest Italian center in the WLCG distributed computing effort. In the first part of this paper we describe the building of the Italian Tier-1 to cope with the WLCG computing requirements, focusing on some peculiarities; in the second part we analyze the INFN-CNAF contribution to the development of the Grid middleware, stressing in particular the characteristics of the Virtual Organization Membership Service (VOMS), the de facto standard for authorization on a Grid, and StoRM, an implementation of the Storage Resource Manager (SRM) specifications for POSIX file systems. In particular, StoRM is used at INFN-CNAF in conjunction with the General Parallel File System (GPFS), and we are also testing an integration with Tivoli Storage Manager (TSM) to realize a complete Hierarchical Storage Management (HSM) solution.
• Characterization of the quality versus cost trade-off of Learning-to-Rank models.
• QuickRank: a public-domain Learning-to-Rank learning and evaluation framework.
• A new measure, named AuQC, for the evaluation of LtR algorithms.
Learning-to-Rank (LtR) techniques leverage machine learning algorithms and large amounts of training data to induce high-quality ranking functions. Given a set of documents and a user query, these functions are able to precisely predict a score for each document, which is in turn exploited to rank them effectively. Although the scoring efficiency of LtR models is critical in several applications – e.g., it directly impacts the response time and throughput of Web query processing – it has received relatively little attention so far.
The goal of this work is to experimentally investigate the scoring efficiency of LtR models along with their ranking quality. Specifically, we show that machine-learned ranking models exhibit a quality versus efficiency trade-off. For example, each family of LtR algorithms has tuning parameters that can influence both effectiveness and efficiency, where higher ranking quality is generally obtained with more complex and expensive models. Moreover, LtR algorithms that learn complex models, such as those based on forests of regression trees, are generally more expensive and more effective than other algorithms that induce simpler models like linear combination of features.
We extensively analyze the quality versus efficiency trade-off of a wide spectrum of state-of-the-art LtR algorithms, and we propose a sound methodology to devise the most effective ranker given a time budget. To guarantee reproducibility, we used publicly available datasets and we contribute an open source C++ framework providing optimized, multi-threaded implementations of the most effective tree-based learners: Gradient Boosted Regression Trees (GBRT), Lambda-Mart (λ-MART), and the first public-domain implementation of Oblivious Lambda-Mart (Ωλ-MART), an algorithm that induces forests of oblivious regression trees.
We investigate how the different training parameters impact the quality versus efficiency trade-off, and provide a thorough comparison of several algorithms in the quality-cost space. The experiments conducted show that there is no overall best algorithm, but that the optimal choice depends on the time budget.
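The budget-driven selection described above can be sketched as follows, under the simplifying assumption that each candidate ranker is summarized by a (quality, per-document scoring cost) pair: keep only the Pareto-optimal models in the quality-cost space, then pick the most effective one that fits the budget. The function names and the numbers used in the example are hypothetical and do not come from QuickRank's API or experiments.

```python
def pareto_front(models):
    """models: list of (name, quality, cost) tuples.

    Returns the non-dominated models: walking in order of increasing cost,
    a model is kept only if it strictly improves on the best quality so far.
    """
    front = []
    for name, quality, cost in sorted(models, key=lambda m: (m[2], -m[1])):
        if not front or quality > front[-1][1]:
            front.append((name, quality, cost))
    return front

def best_under_budget(models, budget):
    """Most effective Pareto-optimal ranker whose scoring cost fits the budget."""
    affordable = [m for m in pareto_front(models) if m[2] <= budget]
    return max(affordable, key=lambda m: m[1]) if affordable else None
```

For instance, with hypothetical candidates [("linear", 0.70, 1), ("gbrt", 0.78, 50), ("lmart", 0.80, 120)] and a budget of 60 cost units, the selection falls on the mid-sized model, since the most effective one is too expensive.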
Background: Femoral arteries are the preferred site of peripheral cannulation for arterial inflow in type A aortic dissection operations. The presence of aortoiliac aneurysms, severe peripheral occlusive disease, atherosclerosis of the femoral vessels, and distal extension of the aortic dissection may preclude their utilization. Axillary artery cannulation may represent a valid alternative in these circumstances.
Methods: Between January 15, 1989, and August 20, 1998, in our institution, 22 of 152 operations (14.4%) for acute type A aortic dissection were performed with the use of the axillary artery for the arterial inflow. Axillary artery cannulation was undertaken in the presence of femoral arteries bilaterally compromised by dissection in 12 patients (54.5%), abdominal aorta and peripheral aneurysm in 5 patients (22.7%), severe atherosclerosis of both femoral arteries in 3 patients (13.6%), and aortoiliac occlusive disease in 2 patients (9.1%). In all patients, distal anastomosis was performed with an open technique after deep hypothermic circulatory arrest. Retrograde cerebral perfusion was used in 9 patients (40.9%).
Results: Axillary artery cannulation was successful in all patients. The left axillary artery was cannulated in 20 patients (90.9%), and the right axillary artery was cannulated in 2 patients (9.1%). Axillary artery cannulation followed an attempt of femoral artery cannulation in 15 patients (68.2%). All patients survived the operation, and no patient had a cerebrovascular accident. No axillary artery thrombosis, no brachial plexus injury, and no intraoperative malperfusion were recorded in this series. Two patients (9.1%) died in the hospital of complications not related to axillary artery cannulation.
Conclusions: In patients with type A aortic dissection in whom femoral arteries are acutely or chronically diseased, axillary artery cannulation represents a safe and effective means of providing arterial inflow during cardiopulmonary bypass. (J Thorac Cardiovasc Surg 1999;118:324-9)
Abstract Background The Adonhers (aged donor heart rescue by stress-echo protocol) Project was created to resolve the current shortage of donor hearts. One of the great limits of stress echo is its operator dependency. Speckle-tracking echocardiography (STE), offering a quantitative, objective analysis of myocardial deformation, may help to overcome this limit. This study aimed to verify the feasibility of a stress-strain echo analysis in the selection of aged donor hearts for heart transplant. Methods From February 2014 to October 2015, 22 marginal candidate donors (16 men) ages 58 ± 4 years were initially enrolled. After legal declaration of brain death, all marginal donors underwent bedside echocardiography, with baseline and (when resting echocardiography was normal) dipyridamole (0.84 mg/kg in 6 minutes) stress echo. In all patients, left ventricular (LV) longitudinal myocardial deformation was obtained by STE in the 4-, 2-, and 3-chamber views, obtaining the average global longitudinal strain (GLS). GLS was assessed at baseline and at the peak of stress echo. Results Baseline echocardiography showed wall motion abnormalities in 9 patients (excluded from donation). Stress echocardiography was performed in the remaining 13 patients. Results were normal in 8, who were uneventfully transplanted into marginal recipients. Stress results were abnormal in 5 (excluded from donation). STE was obtained in all cases (100% feasibility) and ΔGLS was significantly different between normal and pathological stress echo (+13.2% ± 5.2% versus −6.1% ± 3.1%, P = .0001, respectively). Conclusions STE showed excellent feasibility in the analysis of LV myocardial longitudinal strain at baseline and at the peak of stress echo in marginal heart donors. Further experience is needed to confirm STE as a valuable additional means to better interpret stress echo in marginal donors.
Mycophenolate mofetil (MMF) has proved to be an efficacious and safe therapy in adult lupus nephritis. Recently, this drug has been suggested as a possible new alternative treatment also for juvenile-onset SLE (juvenile-SLE). A multicenter study was performed to evaluate the efficacy and safety of MMF in controlling disease activity in children and adolescents with juvenile-SLE. Our results show that MMF was effective in reducing disease activity or as a steroid-sparing agent in 14 of 26 patients (54%), stabilised the disease in 8 (31%) and was ineffective in 4 (15%). In particular, in patients without renal involvement, a good response was registered in 9 of 13 patients (69%). Among those patients with renal involvement, MMF was effective in 5 of 13 patients (38%), partially effective in 4 (31%) and ineffective in 4 (31%). No severe side effects were observed; only two patients stopped the drug because of severe diarrhoea and abdominal pain. Within the limits of a retrospective study, MMF seems to be effective and safe for the treatment of juvenile-SLE, especially in patients with no renal involvement.