The data-acquisition system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger, which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources. An overview of the architecture and design of the hardware and software of the DAQ system is given. We discuss the performance and operational experience from the first months of LHC physics data taking.
The purpose of this study was to explore the discrepancy between teacher beliefs and behavior in a Problem-Based Learning (PBL) environment. Using a survey and observations, this study found that tutors hold learner-oriented beliefs, but that their teaching behavior reflects a more traditional approach to teaching. Analysis of semi-structured interviews indicated that this inconsistency could be attributed to the way in which problem-based learning is embedded in the curriculum, the confidence teachers have in the self-directed capabilities of students, and the self-confidence of teachers regarding their own facilitation skills.
•Tutor beliefs do not predict tutor interventions.
•Confidence in their own facilitation skills explains tutor behavior.
•Students' capabilities clarify tutor interventions.
•Tutor behavior depends on the curriculum design.
•The developed observation instrument distinguishes tutor behavior.
Since 1993, under pressure from the central government, a significant part of the career guidance services in The Netherlands has been marketised (i.e., has become a commercialised business). It was expected that such an operation would positively influence both the quality and quantity of the services provided. This contribution examines whether this expectation has been met. Have career guidance services become available to a greater number of people, and has the quality of the services improved? The conclusion is that availability has indeed increased, but not as a consequence of marketisation, and only for those who can afford it financially. The quality of the services is certainly not higher and may even be lower.
The error and alarm system for the data acquisition of the Compact Muon Solenoid (CMS) experiment at CERN was successfully used for the physics runs at the Large Hadron Collider (LHC) during the first three years of activity. Error and alarm processing entails the notification, collection, storage and visualization of all exceptional conditions occurring in the highly distributed CMS online system, using a uniform scheme. Alerts and reports are shown on-line by web application facilities that map them to graphical models of the system as defined by the user. A persistency service keeps a history of all exceptions that occurred, allowing subsequent retrieval of user-defined time windows of events for later playback or analysis. This paper describes the architecture and the technologies used, and deals with operational aspects during the first years of LHC operation. In particular, we focus on performance, stability, and integration with the CMS sub-detectors.
The CMS experiment at the LHC features a two-level trigger system. Events accepted by the first level trigger, at a maximum rate of 100 kHz, are read out by the Data Acquisition system (DAQ), and subsequently assembled in memory in a farm of computers running a software high-level trigger (HLT), which selects interesting events for offline storage and analysis at a rate of a few hundred Hz. The HLT algorithms consist of sequences of offline-style reconstruction and filtering modules, executed on a farm of O(10000) CPU cores built from commodity hardware. Experience from the operation of the HLT system in the collider run 2010/2011 is reported. The current architecture of the CMS HLT, and its integration with the CMS reconstruction framework and the CMS DAQ, are discussed in the light of future development. The possible short- and medium-term evolution of the HLT software infrastructure to support extensions of the HLT computing power, and to address remaining performance and maintenance issues, is discussed.
The data-acquisition (DAQ) system of the CMS experiment at the LHC performs the read-out and assembly of events accepted by the first level hardware trigger. Assembled events are made available to the high-level trigger (HLT), which selects interesting events for offline storage and analysis. The system is designed to handle a maximum input rate of 100 kHz and an aggregated throughput of 100 GB/s originating from approximately 500 sources and 10^8 electronic channels. An overview of the architecture and design of the hardware and software of the DAQ system is given. We report on the performance and operational experience of the DAQ and its Run Control System in the first two years of collider runs of the LHC, both in proton-proton and Pb-Pb collisions. We present an analysis of the current performance, its limitations, and the most common failure modes and discuss the ongoing evolution of the HLT capability needed to match the luminosity ramp-up of the LHC.
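The design figures quoted above fix the average event and fragment sizes. As a back-of-the-envelope check, using only the numbers given in the abstract (100 kHz, 100 GB/s, ~500 sources):

```python
# Back-of-the-envelope check of the DAQ design figures quoted above.
# All inputs come from the abstract itself; nothing here is taken from
# CMS internal documentation.

trigger_rate_hz = 100_000      # maximum Level-1 accept rate: 100 kHz
throughput_bps = 100e9         # aggregate throughput: 100 GB/s (bytes/s)
n_sources = 500                # approximate number of read-out sources

avg_event_size = throughput_bps / trigger_rate_hz     # bytes per event
avg_fragment_size = avg_event_size / n_sources        # bytes per fragment

print(f"average event size:    {avg_event_size / 1e6:.1f} MB")    # 1.0 MB
print(f"average fragment size: {avg_fragment_size / 1e3:.1f} kB")  # 2.0 kB
```

The quoted throughput thus corresponds to average events of about 1 MB, built from fragments of roughly 2 kB per source.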
The CMS data acquisition system comprises O(20000) interdependent services that need to be monitored in near real-time. The ability to monitor a large number of distributed applications accurately and effectively is of paramount importance for robust operations. Application monitoring entails the collection of a large number of simple and composed values made available by the software components and hardware devices. A key aspect is that detection of deviations from a specified behaviour is supported in a timely manner, which is a prerequisite in order to take corrective actions efficiently. Given the size and time constraints of the CMS data acquisition system, efficient application monitoring is an interesting research problem. We propose an approach that uses the emerging paradigm of Web-service based eventing systems in combination with hierarchical data collection and load balancing. Scalability and efficiency are achieved by a decentralized architecture, splitting up data collections into regions of collections. An implementation following this scheme is deployed as the monitoring infrastructure of the CMS experiment at the Large Hadron Collider. All services in this distributed data acquisition system provide standard web service interfaces via XML, SOAP and HTTP [15, 22]. Continuing on this path we adopted WS-* standards, implementing a monitoring system layered on top of the W3C standards stack. We designed a load-balanced publisher/subscriber system with the ability to include high-speed protocols [10, 12] for efficient data transmission [11, 13, 14] and serving data in multiple data formats.
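As an illustration only (the class and metric names below are invented for this sketch, not the CMS monitoring API), the region-based hierarchical collection idea can be reduced to two levels: services publish values to a regional collector, and a top-level collector aggregates the regions instead of subscribing to all O(20000) services directly:

```python
from collections import defaultdict

class RegionCollector:
    """Hypothetical sketch of one collection region: aggregates the
    metrics published by the services assigned to it."""
    def __init__(self, name):
        self.name = name
        self.metrics = defaultdict(dict)   # service -> {metric: value}

    def publish(self, service, metric, value):
        self.metrics[service][metric] = value

    def snapshot(self):
        return {svc: dict(vals) for svc, vals in self.metrics.items()}

class RootCollector:
    """Top level: reads a handful of regional collectors rather than
    every service, which is the scalability point made above."""
    def __init__(self, regions):
        self.regions = regions

    def snapshot(self):
        return {r.name: r.snapshot() for r in self.regions}

# Usage with invented service/metric names:
ru = RegionCollector("readout-units")
bu = RegionCollector("builder-units")
ru.publish("ru01", "eventRate", 99_800)
bu.publish("bu01", "memoryUsed", 0.42)
root = RootCollector([ru, bu])
print(root.snapshot())
```

The real system additionally load-balances subscribers and speaks WS-* over SOAP/HTTP; this sketch only shows the hierarchical split into regions of collections.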
The CMS data acquisition system is designed to build and filter events originating from 476 detector data sources at a maximum trigger rate of 100 kHz. Different architectures and switch technologies have been evaluated to accomplish this purpose. Events will be built in two stages: the first stage will be a set of event builders called front-end driver (FED) builders. These will be based on Myrinet technology and will pre-assemble groups of about eight data sources. The second stage will be a set of event builders called readout builders. These will perform the building of full events. A single readout builder will build events from about 60 sources of 16 kB fragments at a rate of 12.5 kHz. In this paper, we present the design of a readout builder based on TCP/IP over Gigabit Ethernet and the refinement that was required to achieve the design throughput. This refinement includes the architecture of the readout builder, the setup of TCP/IP, and hardware selection.
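The per-builder figures quoted above determine the throughput each readout builder must sustain. A quick check, using only the abstract's numbers (and taking 1 kB = 1000 bytes):

```python
# Throughput of a single readout builder, from the figures in the abstract.
n_sources = 60            # fragment sources per readout builder
fragment_size = 16_000    # 16 kB fragments (assuming kB = 1000 bytes)
build_rate_hz = 12_500    # 12.5 kHz event-building rate per builder

throughput = n_sources * fragment_size * build_rate_hz   # bytes/s
print(f"{throughput / 1e9:.0f} GB/s per readout builder")  # 12 GB/s
```

Eight such readout builders running in parallel would together cover the 100 kHz trigger rate and roughly the 100 GB/s aggregate throughput of the full system.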
Upgrade of the CMS Event Builder. Bauer, G; Behrens, U; Bowen, M; et al. Journal of Physics: Conference Series, vol. 396, no. 1, 2012. Journal article, peer-reviewed, open access.
The Data Acquisition system of the Compact Muon Solenoid experiment at CERN assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of 100 GB/s. By the time the LHC restarts after the 2013/14 shut-down, the current computing and networking infrastructure will have reached the end of its lifetime. This paper presents design studies for an upgrade of the CMS event builder based on advanced networking technologies such as 10/40 Gb/s Ethernet and Infiniband. The results of performance measurements with small-scale test setups are shown.
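The networking technologies named above can be compared by the minimum number of links each would need to carry the 100 GB/s aggregate. A rough sketch, using nominal line rates only (the ~54.5 Gb/s effective FDR InfiniBand rate is an assumption of this sketch, not a figure from the paper):

```python
import math

# Minimum link counts for the 100 GB/s aggregate, by nominal line rate.
# A sketch only: it ignores protocol overheads, event-building traffic
# patterns, and the actual topologies studied in the paper.
aggregate_bps = 100e9 * 8     # 100 GB/s expressed in bits/s

link_rates_bps = {
    "10 GbE": 10e9,
    "40 GbE": 40e9,
    "FDR InfiniBand (~54.5 Gb/s, assumed)": 54.5e9,
}

for name, rate in link_rates_bps.items():
    print(f"{name}: >= {math.ceil(aggregate_bps / rate)} links")
```

At the nominal rates, the aggregate needs at least 80 links of 10 GbE but only 20 of 40 GbE, which is one motivation for moving to the faster interconnects studied in the paper.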
The CMS experiment at the LHC features over 2'500 devices that need constant monitoring in order to ensure proper data taking. The monitoring solution has been migrated from Nagios to Icinga, with ...several useful plugins. The motivations behind the migration and the selection of the plugins are discussed.