Twenty Gram-negative bacterial (GNB) strains were selected based on the biodiversity previously observed in French traditional cheeses, and their safety was assessed against various criteria. For the majority of the tested GNB strains, only gastric stress at pH 2 (vs pH 4) resulted in low survival and no regrowth after an additional simulated gastro-intestinal stress. The presence of milk was rarely protective. The majority of strains were resistant to human serum and showed a low level of adherence to Caco-2 cells. When tested for virulence in Galleria mellonella larvae, most GNB strains had LD50 values similar to those of safe controls. However, four strains, Hafnia paralvei 920, Proteus sp. (close to P. hauseri) UCMA 3780, Providencia heimbachae GR4, and Morganella morganii 3A2A, were highly toxic to the larvae, suggesting the presence of potential virulence factors in these strains. Notably, to our knowledge, no foodborne intoxication or outbreak has been reported so far for any GNB belonging to the genera/species of the tested strains. Multiple dynamic interactions between the cheese microbiota and gastro-intestinal tract (GIT) barriers could be key factors explaining the safe consumption of the corresponding cheeses.
• Gram-negative bacteria (GNB) from cheeses are susceptible to pH 2, unlike pH 4.
• They have a low level of adherence to Caco-2 cells.
• LD50 in Galleria mellonella larvae mainly reveals values similar to safe controls.
• No foodborne intoxication or outbreak has been reported so far for any of the GNB.
• Cheese microbiota and GIT barriers are key factors explaining the safe status of GNB.
This study was designed to evaluate the capacity of three Hafnia strains to inhibit the growth of an E. coli strain O26:H11 in an uncooked pressed model cheese, in the presence or absence of a microbial consortium added to mimic a cheese microbial community. Inoculated at 2 log CFU/ml into pasteurized milk without Hafnia, the E. coli O26:H11 strain reached 5 log CFU/g during cheese-making and survived at levels of 4 to 5 log CFU/g beyond 40 days. Inoculated into milk at 6 log CFU/ml, all three tested Hafnia strains (H. alvei B16 and HA, H. paralvei 920) reached values close to 8 log CFU/g and reduced E. coli O26:H11 counts in cheese on day 1 by 0.8 to 1.4 log CFU/g compared to cheeses inoculated with E. coli O26:H11 and the microbial consortium only. The Hafnia strains slightly reduced counts of Enterococcus faecalis (~−0.5 log from day 1) and promoted Lactobacillus plantarum growth (+0.2 to 0.5 log from day 8) in cheese. They produced small amounts of putrescine (~1.3 mmol/kg) and cadaverine (~0.9 mmol/kg) in cheese after 28 days, and did not affect levels of volatile aroma compounds. Further work on H. alvei strain B16 showed that E. coli O26:H11, inoculated at 2 log CFU/ml, was inhibited by H. alvei B16 inoculated at 6 log CFU/ml but not at 4.5 log CFU/ml. The inhibition was associated neither with lower pH values in cheese after 6 or 24 h, nor with higher concentrations of lactic acid. Enhanced concentrations of acetic acid on day 1 in cheese inoculated with H. alvei B16 (4 to 11 mmol/kg) could not fully explain the reduction in E. coli O26:H11 growth. A synergistic interaction between H. alvei B16 and the microbial consortium, resulting in an additional 0.7-log reduction in E. coli O26:H11 counts, was observed from day 8 in model cheeses made from pasteurized milk. However, E. coli O26:H11 survived better during ripening in model cheeses made from raw milk than in those made from pasteurized milk; this was not associated with an increase in pH values.
In vitro approaches are required to investigate the mechanisms and causative agents of this interaction. H. alvei B16 appears to be a promising strain for reducing E. coli O26:H11 growth in cheese, as part of a multi-hurdle approach.
► H. alvei reduced the growth of E. coli O26:H11 in an uncooked pressed model cheese.
► E. coli O26:H11 at 10² CFU/ml was inhibited by H. alvei inoculated at 10⁶ CFU/ml.
► Low amounts of acetic acid were associated with H. alvei.
► E. coli O26:H11 survival was affected by microbial communities during ripening.
► H. alvei B16 is promising for helping to reduce E. coli O26:H11 growth in cheese.
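The log reductions reported above can be made concrete with a small arithmetic sketch. The CFU values below are illustrative, chosen only to be consistent with the 0.8 to 1.4 log range quoted in the abstract; they are not measured data.

```python
import math

def log_reduction(control_cfu_per_g, treated_cfu_per_g):
    """Log10 reduction of a pathogen count relative to a control cheese."""
    return math.log10(control_cfu_per_g) - math.log10(treated_cfu_per_g)

# Illustrative day-1 counts: ~10^5 CFU/g without Hafnia vs ~10^3.8 CFU/g
# with H. alvei B16 (hypothetical values within the reported range).
print(round(log_reduction(1e5, 10**3.8), 1))  # ~1.2 log reduction
```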
The efficiency of the Data Acquisition (DAQ) of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of failure of data-taking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators heavily relied on the support of on-call experts, also outside working hours. Wrong decisions due to time pressure sometimes led to additional overhead in recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, the DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application analyzing frequently updated monitoring data from all DAQ components and identifying problems based on expert knowledge expressed in small, independent logic-modules written in Java. Its results are presented in real-time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and the steps to recover.
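The idea of small, independent logic modules over a shared monitoring snapshot can be sketched as below. The real DAQExpert modules are written in Java; this Python analogue is only an illustration, and the field names (`fed_backpressure`, `hlt_rate`, `l1_rate`) and messages are hypothetical.

```python
# Each "logic module" is an independent check that either fires with a
# recovery suggestion or stays silent; the engine simply runs them all.

def check_backpressure(snapshot):
    if snapshot.get("fed_backpressure", 0.0) > 0.5:
        return "FED backpressure high: check the event builder before restarting"
    return None

def check_dead_hlt(snapshot):
    if snapshot.get("hlt_rate", 1.0) == 0.0 and snapshot.get("l1_rate", 0.0) > 0.0:
        return "HLT output stopped while L1 triggers arrive: recover the filter farm"
    return None

MODULES = [check_backpressure, check_dead_hlt]

def analyse(snapshot):
    """Run every logic module; collect the suggestions that fire."""
    return [msg for m in MODULES if (msg := m(snapshot)) is not None]

print(analyse({"fed_backpressure": 0.8, "hlt_rate": 0.0, "l1_rate": 90e3}))
```

Keeping each rule self-contained is what makes such a system easy to extend: a new failure mode becomes one new function, with no change to the engine.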
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10³⁴ cm⁻²s⁻¹ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS Detector will also undergo a major upgrade to prepare for the phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for the phase-2, taking into account the projected evolution of high-speed network fabrics for event building and distribution, and the anticipated performance of general-purpose CPUs. Implications for hardware and infrastructure requirements of the DAQ "data center" are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.
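The quoted rates imply a few derived figures worth checking. A back-of-the-envelope sketch, assuming the 50 Tb/s figure is the aggregate read-out bandwidth in bits per second:

```python
# Derived quantities from the phase-2 DAQ numbers quoted above.
readout_tbps = 50e12    # 50 Tb/s read-out bandwidth (bits per second)
l1_rate_hz = 750e3      # 750 kHz event rate into the event builder
storage_rate_hz = 10e3  # up to 10 kHz stored permanently

event_size_bytes = readout_tbps / 8 / l1_rate_hz
storage_bw_gbps = event_size_bytes * storage_rate_hz / 1e9

print(round(event_size_bytes / 1e6, 2))  # ~8.33 MB average event size
print(round(storage_bw_gbps, 1))         # ~83.3 GB/s sustained to storage
```

So the average event is roughly 8 MB, and the permanent-storage path alone must sustain tens of GB/s, which motivates the "data center" infrastructure analysis mentioned in the abstract.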
Performance of the CMS Event Builder
Andre, J-M; Behrens, U; Branson, J; et al.
Journal of Physics: Conference Series, 10/2017, Vol. 898, No. 3
Journal Article · Peer-reviewed · Open access
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s InfiniBand FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources produced with an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read-write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices of the three components of the STS: the distributed file system, the merger service and the transfer system.
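The STS figures above can be cross-checked with simple arithmetic. The sketch below takes the quoted numbers at face value; note that "several days" of running includes gaps between fills, so the buffer at the full sustained rate is shorter than the wall-clock coverage.

```python
# Consistency check of the Storage and Transfer System figures quoted above.
hlt_output_gbps = 2.0   # ~2 GB/s aggregate HLT output
n_sources = 62          # ~62 merger sources
usable_disk_tb = 250.0  # quoted total usable disk space

hours_of_buffer = usable_disk_tb * 1e12 / (hlt_output_gbps * 1e9) / 3600
per_source_mbps = hlt_output_gbps * 1e3 / n_sources

print(round(hours_of_buffer, 1))   # ~34.7 hours at the full sustained rate
print(round(per_source_mbps, 1))   # ~32.3 MB/s per merger source
```

The 7 GB/s concurrent read-write requirement also follows naturally: every byte written at ~2 GB/s must later be read back for merging and again for transfer, so the file system sees a multiple of the raw HLT output rate.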
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors, such as those caused by single-event upsets, may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers or because of potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to automatically recover from these. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail including first operational experience.
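The "single click from any state" idea amounts to deriving, from the current states of all sub-systems, an ordered plan of actions. A minimal sketch of that planning step, with entirely hypothetical state and action names (not the real CMS run-control API):

```python
# Hypothetical three-state model: each sub-system must be driven to RUNNING.
ORDER = ["UNCONFIGURED", "CONFIGURED", "RUNNING"]
ACTION_TO_NEXT = {"UNCONFIGURED": "configure", "CONFIGURED": "start"}

def actions_to_running(subsystem_states):
    """Streamlined plan: configure every lagging sub-system, then start them."""
    plan = []
    for phase in ORDER[:-1]:
        step = ACTION_TO_NEXT[phase]
        # Sub-systems at or below this phase still need this action.
        lagging = [s for s, st in subsystem_states.items()
                   if ORDER.index(st) <= ORDER.index(phase)]
        if lagging:
            plan.append((step, sorted(lagging)))
    return plan

print(actions_to_running({"TRACKER": "UNCONFIGURED",
                          "ECAL": "CONFIGURED",
                          "DAQ": "CONFIGURED"}))
```

The point of such a planner is that the operator no longer has to follow per-button indicators by hand: the same cross-checks that drive the indicators can drive the plan.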
Background: Arthropathy of the knee is a frequent complication in patients with severe bleeding disorders, leading to considerable pain and disability. Total knee arthroplasty (TKA) provides marked pain relief. However, a modest functional outcome and a high number of complications due to prosthetic infection and loosening have been reported. Data on long-term outcomes are scarce, and most case series include few patients. We have studied the clinical outcomes and complications of TKAs, with special emphasis on prosthetic survival and periprosthetic infection. Methods: A consecutive series of 107 TKAs in 74 patients with haemophilic arthropathy was retrospectively reviewed. Mean follow-up was 11.2 years (range 0.8–33.1 years). Results: Five- and 10-year survival rates, with component removal for any reason as the end point, were 92% and 88%, respectively. Twenty-eight TKAs were removed after a median of 10 years (range 0.8–28 years). The most common causes of failure were aseptic loosening (14 knees) and periprosthetic infection (seven knees). The overall infection rate was 6.5%. The mean postoperative drop in haemoglobin levels was 4.3 g/dL (range 0.5–9.4), with a significant difference between haemophilia A patients with and without inhibitor (6.3 g/dL (range 3.6–9.4) versus 3.7 g/dL (range 0.5–8.1); p < 0.001). A painless knee was reported in 93% of the TKAs at the latest follow-up. Conclusions: The medium- and long-term results of primary TKA in a large haemophilic population show good prosthetic survival at five and 10 years with excellent relief of pain. Periprosthetic infection is still a major concern compared to the non-haemophilic population. Level of evidence: Level IV.
A flexible monitoring system has been designed for the CMS File-based Filter Farm making use of modern data mining and analytics components. All the metadata and monitoring information concerning data flow and execution of the HLT are generated locally in the form of small documents using the JSON encoding. These documents are indexed into a hierarchy of elasticsearch (es) clusters along with process and system log information. Elasticsearch is a search server based on Apache Lucene. It provides a distributed, multitenant-capable search and aggregation engine. Since es is schema-free, any new information can be added seamlessly and the unstructured information can be queried in non-predetermined ways. The leaf es clusters consist of the very same nodes that form the Filter Farm, thus providing natural horizontal scaling. A separate "central" es cluster is used to collect and index aggregated information. The fine-grained information, all the way to individual processes, remains available in the leaf clusters. The central es cluster provides quasi-real-time high-level monitoring information to any kind of client. Historical data can be retrieved to analyse past problems or correlate them with external information. We discuss the design and performance of this system in the context of the CMS DAQ commissioning for LHC Run 2.
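The leaf-to-central split described above can be illustrated with a minimal sketch: fine-grained JSON documents stay on the leaf level, while the central level only needs aggregates. The document fields (`host`, `events`, `errors`) and host names are hypothetical, not the actual CMS document schema.

```python
import json

# Hypothetical fine-grained documents as a leaf cluster would index them,
# one small JSON document per Filter Farm node.
leaf_docs = [
    {"host": "fu-c2e01", "events": 120_000, "errors": 0},
    {"host": "fu-c2e02", "events": 118_500, "errors": 3},
    {"host": "fu-c2e03", "events": 121_200, "errors": 0},
]

def aggregate(docs):
    """Collapse fine-grained leaf documents into one central summary."""
    return {
        "total_events": sum(d["events"] for d in docs),
        "hosts_with_errors": [d["host"] for d in docs if d["errors"]],
    }

# The central cluster indexes only this compact summary; per-process detail
# remains queryable in the leaf clusters when a problem needs drilling into.
print(json.dumps(aggregate(leaf_docs)))
```

This is also why schema-freedom matters: a node can start emitting a new field tomorrow, and both levels can index and query it without a migration.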
A New Event Builder for CMS Run II
Albertsson, K; Andre, J-M; Andronidis, A; et al.
Journal of Physics: Conference Series, 12/2015, Vol. 664, No. 8
Journal Article · Peer-reviewed · Open access
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider (LHC) assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s to the high-level trigger (HLT) farm. The DAQ system has been redesigned during the LHC shutdown in 2013–14. The new DAQ architecture is based on state-of-the-art network technologies for the event building. For the data concentration, 10/40 Gbps Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for a reliable transport between custom electronics and commercial computing hardware. A 56 Gbps InfiniBand FDR Clos network has been chosen for the event builder. This paper discusses the software design, protocols, and optimizations for exploiting the hardware capabilities. We present performance measurements from small-scale prototypes and from the full-scale production system.