With the increased pressure on computing brought by the higher energy and luminosity from the LHC in Run 2, CMS Computing Operations expects to require the ability to utilize opportunistic resources, i.e. resources not owned by, or a priori configured for, CMS, to meet peak demands. In addition to our dedicated resources, we look to add computing resources from non-CMS grids, cloud resources, and national supercomputing centers. CMS uses the HTCondor/glideinWMS job submission infrastructure for all its batch processing, so such resources will need to be transparently integrated into its glideinWMS pool. Bosco and Parrot wrappers are used to enable access and to bring the CMS environment to these non-CMS resources. Here we describe our strategy for supplementing our native capabilities with opportunistic resources and our experience using them so far.
As multidrug and pan-resistance among Enterobacterales continue to increase, there is an urgent need for more therapeutic options to treat these infections. New β-lactam and β-lactamase inhibitor (BLI) combinations have a broad spectrum of activity, but those currently approved do not provide coverage against isolates harboring metallo-β-lactamases (MBL). Aztreonam (ATM) and avibactam (AVI) in combination (ATM/AVI; AVI at a fixed concentration of 4 μg/mL) provide a similarly broad range of activity while maintaining activity against MBL-producing isolates. The in vitro susceptibility testing of ATM/AVI by standard methods was evaluated during development. This study investigated the impact of nonstandard testing conditions on the activity of ATM/AVI during broth microdilution testing, as well as the equivalency between agar dilution and broth microdilution MIC values, when testing a diverse panel of Enterobacterales (n = 201). Nonstandard test conditions evaluated included inoculum density, atmosphere of incubation, media pH, varied medium cation concentrations, incubation time, varied serum concentrations, testing in pooled urine instead of media, addition of blood to the media, and the presence of surfactant. Generally, apart from low pH and high inoculum density, nonstandard testing parameters did not affect ATM/AVI broth microdilution MIC values. Correlation of MIC values obtained by agar dilution and broth microdilution resulted in an essential agreement of 97.0% for all tested Enterobacterales. Variation of standard testing conditions had little impact on broth microdilution MIC values for ATM/AVI. The correlation between broth microdilution and agar dilution MICs suggests both methods are reliable for determination of ATM/AVI MIC values.
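Essential agreement of the kind reported above is conventionally defined as the fraction of isolates whose agar dilution MIC falls within plus or minus one two-fold dilution of the broth microdilution MIC. A minimal sketch of that calculation, using hypothetical MIC values rather than the study data:

```python
import math

def essential_agreement(bmd_mics, agar_mics):
    """Percent of isolates whose agar dilution MIC is within
    +/- one two-fold dilution of the broth microdilution MIC."""
    assert len(bmd_mics) == len(agar_mics)
    within = 0
    for bmd, agar in zip(bmd_mics, agar_mics):
        # Compare on the log2 scale: one dilution step = a factor of 2.
        if abs(math.log2(agar) - math.log2(bmd)) <= 1:
            within += 1
    return 100.0 * within / len(bmd_mics)

# Hypothetical MICs (ug/mL) for four isolates, not the study data.
bmd  = [0.25, 0.5, 1.0, 8.0]
agar = [0.25, 1.0, 4.0, 8.0]
print(essential_agreement(bmd, agar))  # 75.0: one isolate is off by two dilutions
```

With 201 isolates, the reported 97.0% corresponds to roughly 195 of 201 isolates agreeing within one dilution step.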
Increasing antibiotic resistance and emergence of pan-resistant isolates threaten the ability to control infections and to provide many other medical interventions such as surgery and chemotherapy, among others. New therapies are required to control emerging resistance mechanisms, including the increase in metallo-β-lactamases. Some new antibiotic combinations provide coverage against highly resistant isolates but are unable to target organisms that produce metallo-β-lactamases. Aztreonam in combination with avibactam provides a broad spectrum of activity against highly resistant isolates that also targets metallo-β-lactamase-producing organisms. An important part of drug development is the ability for clinical labs to determine the susceptibility of isolates to the antimicrobial. This manuscript investigates the in vitro susceptibility testing of aztreonam/avibactam with nonstandard testing conditions and a correlation study between broth microdilution and agar dilution against clinical isolates encoding a variety of resistance mechanisms. Overall, aztreonam/avibactam was generally unaffected by changes in testing conditions and showed strong agar/broth correlation.
Abstract
Particle accelerators are an important tool to study the fundamental properties of elementary particles. Currently the highest energy accelerator is the LHC at CERN, in Geneva, Switzerland. Each of its four major detectors, such as the CMS detector, produces dozens of Petabytes of data per year to be analyzed by a large international collaboration. The processing is carried out on the Worldwide LHC Computing Grid, which spans more than 170 compute centers around the world and is used by a number of particle physics experiments. Recently the LHC experiments were encouraged to make increasing use of HPC resources. While Grid resources are homogeneous with respect to the Grid middleware used, HPC installations can be very different in their setup. In order to integrate HPC resources into the highly automated processing setups of the CMS experiment, a number of challenges need to be addressed. For processing, access to primary data and metadata as well as access to the software is required. At Grid sites all this is achieved via a number of services that are provided by each center. At HPC sites, however, many of these capabilities cannot be easily provided and have to be enabled in user space or by other means. HPC centers also often restrict network access to remote services, which is a further severe limitation. The paper discusses a number of solutions and recent experiences by the CMS experiment in including HPC resources in processing campaigns.
During 2014, the CMS Offline and Computing Organization completed the necessary changes to use the CMS threaded framework in the full production environment. We will briefly discuss the design of the CMS Threaded Framework, in particular how the design affects scaling performance. We will then cover the effort involved in getting both the CMSSW application software and the workflow management system ready to use multiple threads for production. Finally, we will present metrics on the performance of the application and workflow system, as well as the difficulties that were uncovered. We will end with CMS's plans for using the threaded framework in production for LHC Run 2.
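The scaling performance such a threaded framework can reach is bounded by whatever fraction of the work remains serial. A quick Amdahl's-law estimate illustrates why the design matters; the serial fractions below are illustrative, not CMSSW measurements:

```python
def amdahl_speedup(serial_fraction, n_threads):
    """Upper bound on speedup when serial_fraction of the work
    cannot be parallelized (Amdahl's law)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_threads)

# Even a small serial fraction caps the gain from multiple threads:
for s in (0.05, 0.10):  # hypothetical serial fractions
    print(f"serial={s:.2f}: speedup on 8 threads = {amdahl_speedup(s, 8):.2f}")
# serial=0.05: speedup on 8 threads = 5.93
# serial=0.10: speedup on 8 threads = 4.71
```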
Abstract
Background
Omadacycline is an aminomethylcycline antibiotic in the tetracycline class that was approved by the US FDA in 2018 for the treatment of community-acquired bacterial pneumonia and acute bacterial skin and skin structure infections. It is available in both IV and oral formulations. Omadacycline has broad-spectrum in vitro activity and clinical efficacy against infections caused by Gram-positive and Gram-negative pathogens. Omadacycline is being evaluated in a 3-month placebo-controlled Phase 2 clinical trial of oral omadacycline in adults with non-tuberculous mycobacterial (NTM) pulmonary disease caused by Mycobacterium abscessus (NCT04922554).
Objectives
To determine if omadacycline has intracellular antimicrobial activity against NTM, bacteria that can cause chronic lung disease, in an ex vivo model of intracellular infection.
Methods
Two strains of M. abscessus were used to infect THP-1 macrophages. Intracellular M. abscessus was then challenged with omadacycline and control antibiotics at multiples of the MIC over time to evaluate intracellular killing.
Results
At 16 × the MIC at 72 h, omadacycline treatment of intracellular NTM yielded log10 reductions in cfu of 1.1 (91.74%) and 1.6 (97.65%) for the two strains, consistent with the killing observed with tigecycline, whereas amikacin and clarithromycin at 16 × the MIC did not show any reduction in cfu against intracellular M. abscessus.
Conclusions
Omadacycline displayed intracellular activity against M. abscessus within macrophages. The activity was similar to that of tigecycline; as expected, intracellular killing was not observed with clarithromycin and amikacin.
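The log10 and percentage figures in the Results are two views of the same ratio: a log10 reduction is log10(cfu_control / cfu_treated), and the percent reduction is one minus the surviving fraction. A small check shows that the unrounded log values behind the reported percentages round to the stated 1.1 and 1.6:

```python
import math

def log10_reduction(pct_reduction):
    """Convert a percent reduction in cfu to a log10 reduction."""
    surviving = 1.0 - pct_reduction / 100.0
    return -math.log10(surviving)

def percent_reduction(log_red):
    """Convert a log10 reduction in cfu to a percent reduction."""
    return 100.0 * (1.0 - 10.0 ** (-log_red))

# 91.74% and 97.65% reductions correspond to log10 reductions of
# ~1.08 and ~1.63, which round to the reported 1.1 and 1.6.
print(round(log10_reduction(91.74), 2))  # 1.08
print(round(log10_reduction(97.65), 2))  # 1.63
print(round(percent_reduction(1.0), 1))  # 90.0 (a 1-log kill is 90%)
```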
Experience in using commercial clouds in CMS. Bauerdick, L; Bockelman, B; Dykstra, D; et al. Journal of Physics: Conference Series, 10/2017, Volume 898, Issue 5. Journal article, peer-reviewed, open access.
Historically, high energy physics computing has been performed on large purpose-built computing systems. In the beginning there were single-site computing facilities, which evolved into the Worldwide LHC Computing Grid (WLCG) used today. The vast majority of the WLCG resources are used for LHC computing, and the resources are scheduled to be continuously used throughout the year. In the last several years there has been an explosion in the capacity and capability of commercial and academic computing clouds. Cloud resources are highly virtualized and intended to be flexibly deployed for a variety of computing tasks. There is a growing interest among cloud providers in demonstrating the capability to perform large-scale scientific computing. In this presentation we will discuss results from the CMS experiment using the Fermilab HEPCloud Facility, which utilized both local Fermilab resources and Amazon Web Services (AWS). The goal was to work with AWS through a matching grant to demonstrate a sustained scale approximately equal to half of the worldwide processing resources available to CMS. We will discuss the planning and technical challenges involved in running the most IO-intensive CMS workflows on a large-scale set of virtualized resources provisioned by the Fermilab HEPCloud. We will describe the data handling and data management challenges. We will also discuss the economic issues, including a cost and operational efficiency comparison with our dedicated resources. Finally, we will consider how the working model of HEP computing changes with the availability of large-scale resources scheduled at peak times.
The CMS Global Pool, based on HTCondor and glideinWMS, is the main computing resource provisioning system for all CMS workflows, including analysis, Monte Carlo production, and detector data reprocessing activities. The total resources at Tier-1 and Tier-2 grid sites pledged to CMS exceed 100,000 CPU cores, while another 50,000 to 100,000 CPU cores are available opportunistically, pushing the needs of the Global Pool to higher scales each year. These resources are becoming more diverse in their accessibility and configuration over time. Furthermore, the challenge of stably running at higher and higher scales while introducing new modes of operation such as multi-core pilots, as well as the chaotic nature of physics analysis workflows, places huge strains on the submission infrastructure. This paper details some of the most important challenges to scalability and stability that the CMS Global Pool has faced since the beginning of the LHC Run II and how they were overcome.
CMS will require access to more than 125k processor cores for the beginning of Run 2 in 2015 to carry out its ambitious physics program with more events of higher complexity. During Run 1 these resources were predominantly provided by a mix of grid sites and local batch resources. During the long shutdown, cloud infrastructures, diverse opportunistic resources, and HPC supercomputing centers were made available to CMS, which further complicated the operation of the submission infrastructure. In this presentation we will discuss the CMS effort to adopt and deploy the glideinWMS system as a common resource provisioning layer for grid, cloud, local batch, and opportunistic resources and sites. We will address the challenges associated with integrating the various types of resources, the efficiency gains and simplifications associated with using a common resource provisioning layer, and the solutions found. We will finish with an outlook on future plans for how CMS is moving forward on resource provisioning for more heterogeneous architectures and services.
The connection of diverse and sometimes non-Grid-enabled resource types to the CMS Global Pool, which is based on HTCondor and glideinWMS, has been a major goal of CMS. These resources range in type from a high-availability, low-latency facility at CERN for urgent calibration studies, called the CAF, to a local user facility at the Fermilab LPC, allocation-based computing resources at NERSC and SDSC, opportunistic resources provided through the Open Science Grid, commercial clouds, and others, as well as access to opportunistic cycles on the CMS High Level Trigger farm. In addition, we have provided the capability to give priority to local users of resources beyond the WLCG pledge at CMS sites. Many of the solutions employed to bring these diverse resource types into the Global Pool have common elements, while some are very specific to a particular project. This paper details some of the strategies and solutions used to access these resources through the Global Pool in a seamless manner.
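The common element in most of these integrations is the pilot (glidein) model: instead of pushing jobs directly to each resource, pilots start on the resource, report back to the central pool, and pull matching work, so heterogeneous resources all look the same to the jobs. A toy sketch of that late-binding assignment, with hypothetical site names and no relation to the actual glideinWMS code:

```python
from collections import deque

# Pilots advertise the resource they run on and the slots they offer.
pilots = [
    {"site": "CERN-CAF", "slots": 2},   # hypothetical entries
    {"site": "NERSC", "slots": 1},
    {"site": "AWS-spot", "slots": 3},
]

jobs = deque(f"job-{i}" for i in range(5))

# Late binding: work is assigned only once a pilot has started and
# joined the pool, not when the job is submitted.
assignments = []
for pilot in pilots:
    for _ in range(pilot["slots"]):
        if not jobs:
            break
        assignments.append((jobs.popleft(), pilot["site"]))

for job, site in assignments:
    print(job, "->", site)
```

In the real system the matching is done by HTCondor ClassAd negotiation rather than this first-come loop, but the ordering of events (pilot first, job second) is the point.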
The CMS workload management system. Cinquilli, M; Evans, D; Foulkes, S; et al. Journal of Physics: Conference Series, 01/2012, Volume 396, Issue 3. Journal article, peer-reviewed, open access.
CMS has started the process of rolling out a new workload management system. This system is currently used for reprocessing and Monte Carlo production, with tests under way using it for user analysis. It was decided to combine, as much as possible, the production/processing, analysis, and T0 codebases so as to reduce duplicated functionality and make the best use of limited developer and testing resources. This system now includes central request submission and management (Request Manager); a task queue for parcelling up and distributing work (WorkQueue); and agents which process requests by interfacing with disparate batch and storage resources (WMAgent).
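The division of labour described above can be illustrated with a toy parcelling step: a request is split into fixed-size work units that agents then claim independently. This is an illustrative sketch, not the WMAgent/WorkQueue API:

```python
def parcel_request(request_id, n_events, chunk_size):
    """Split a processing request into work units of at most
    chunk_size events each (last unit may be smaller)."""
    units = []
    start = 0
    while start < n_events:
        end = min(start + chunk_size, n_events)
        units.append({"request": request_id,
                      "first_event": start,
                      "last_event": end - 1})
        start = end
    return units

# A hypothetical 2500-event request parcelled into 1000-event units.
units = parcel_request("req-001", 2500, 1000)
print(len(units))               # 3
print(units[-1]["last_event"])  # 2499
```

Each unit carries enough bookkeeping (request id, event range) for an agent to process it without contacting the other agents, which is what lets the work be distributed across disparate batch resources.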