We use a dynamic specification to estimate the impact of trade on within-country income inequality in a sample of 65 developing countries (DCs) over the 1980–99 period. Our results suggest that trade with high-income countries worsens income distribution in DCs, through both imports and exports. These findings support the hypothesis that technological differentials and the skill-biased nature of new technologies may be important factors in shaping the distributive effects of trade. Moreover, we observe that these results hold only for middle-income countries (MICs); we interpret this evidence by considering the greater potential for technological upgrading in MICs.
In the last three decades, HEP experiments have faced the challenge of manipulating larger and larger masses of data from increasingly complex, heterogeneous detectors with millions and then tens of millions of electronic channels. LHC experiments abandoned the monolithic architectures of the nineties in favor of a distributed approach, leveraging the appearance of high-speed switched networks developed for digital telecommunication and the internet, and the corresponding increase of memory bandwidth available in off-the-shelf consumer equipment. This led to a generation of experiments where custom electronics triggers, analysing coarser-granularity "fast" data, are confined to the first phase of selection, where predictable latency and real-time processing for a modest initial rate reduction are "a necessary evil". Ever more sophisticated algorithms are projected for use in HL-LHC upgrades, using tracker data in the low-level selection in high-multiplicity environments and requiring extremely complex data interconnects. These systems are quickly obsolete and inflexible but must nonetheless survive and be maintained across the extremely long life span of current detectors. New high-bandwidth bidirectional links could make high-speed, low-power full readout at the crossing rate a possibility already in the next decade. At the same time, massively parallel and distributed analysis of unstructured data produced by loosely connected, "intelligent" sources has become ubiquitous in commercial applications, while the mass of persistent data produced by e.g. the LHC experiments has made multiple-pass, systematic, end-to-end offline processing increasingly burdensome. A possible evolution of DAQ and trigger architectures could lead to detectors with extremely deep asynchronous or even virtual pipelines, where data streams from the various detector channels are analysed and indexed in situ in quasi-real time using intelligent, pattern-driven data organization, and the final selection is operated as a distributed "search for interesting event parts". A holistic approach is required to study the potential impact of these different developments on the design of detector readout, trigger and data acquisition systems in the next decades.
The CMS detector will undergo a significant upgrade to cope with the HL-LHC instantaneous luminosity and average number of proton–proton collisions per bunch crossing (BX). The Phase-2 CMS detector will be equipped with a new Level-1 (L1) trigger system that will have access to an unprecedented level of information. Advanced reconstruction algorithms will be deployed directly on the L1 FPGA-based processors, producing reconstructed physics primitives of quasi-offline quality. The latter will be collected and processed by the Level-1 trigger Data Scouting (L1DS) system at the full bunch-crossing rate. Besides providing vast amounts of data for L1 and detector monitoring, the L1DS will perform quasi-online analysis in a heterogeneous computing farm. The study of signatures too common to fit within the L1 acceptance budget, or orthogonal to the standard physics trigger selection strategies, is expected to benefit greatly from this approach. An L1DS prototype system has been set up to operate in the current LHC Run-3, with the main goals of demonstrating the basic principle and shaping the development of the Phase-2 system. The Run-3 L1DS receives trigger primitives from the Global Muon Trigger and the Calorimeter Trigger, the Global Trigger decision bits, and the muon segments from the Barrel Muon Track Finder. FPGA boards acquire and aggregate the synchronous trigger data streams and perform basic data reduction before sending the trigger primitives to a set of computing nodes over 100 Gbps Ethernet connections, using a simplified TCP/IP protocol implemented in firmware. DAQ software based on Intel TBB receives the TCP/IP streams and applies further processing before the data are ingested into a cluster of servers running the CMS reconstruction framework. The output of the computing farm consists of data sets in the standard CMS data analysis format. This contribution presents the Run-3 L1DS demonstrator architecture and recent physics results extracted from the collected data.
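As a sketch of the kind of receive-and-process chain described above, the following minimal example uses tbb::parallel_pipeline to read fragments from a network source, decode them in parallel and hand them on in order. It is an illustration only: the fragment layout, the function names (readFragment, decode, dispatch) and the three stages are assumptions, not the actual L1DS software.

// Minimal sketch of a receive/decode/dispatch chain in the spirit of a
// TBB-based DAQ application. All names and stages are illustrative only.
#include <tbb/parallel_pipeline.h>
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>

// Hypothetical raw fragment as delivered by the FPGA aggregation boards.
struct RawFragment {
    uint32_t orbit;                 // orbit number carried in the stream header
    std::vector<uint8_t> payload;   // packed trigger primitives
};

// Hypothetical decoded record ready for the next processing stage.
struct DecodedRecord {
    uint32_t orbit;
    std::size_t nPrimitives;
};

// Stand-in for a blocking read from one of the 100 Gbps TCP streams:
// here it just fabricates a few fragments and then signals end of run.
std::shared_ptr<RawFragment> readFragment() {
    static uint32_t orbit = 0;
    if (orbit >= 10) return nullptr;
    auto f = std::make_shared<RawFragment>();
    f->orbit = orbit++;
    f->payload.assign(64, 0);       // dummy packed trigger primitives
    return f;
}

DecodedRecord decode(const RawFragment& f) {
    return {f.orbit, f.payload.size() / 8};   // pretend each primitive is 8 bytes
}

void dispatch(const DecodedRecord& r) {
    std::printf("orbit %u: %zu primitives\n", r.orbit, r.nPrimitives);
}

void runPipeline(std::size_t maxInFlight) {
    tbb::parallel_pipeline(
        maxInFlight,
        // Stage 1: serial, in-order input from the network receiver.
        tbb::make_filter<void, std::shared_ptr<RawFragment>>(
            tbb::filter_mode::serial_in_order,
            [](tbb::flow_control& fc) -> std::shared_ptr<RawFragment> {
                auto frag = readFragment();
                if (!frag) fc.stop();
                return frag;
            }) &
        // Stage 2: parallel decoding of independent fragments.
        tbb::make_filter<std::shared_ptr<RawFragment>, DecodedRecord>(
            tbb::filter_mode::parallel,
            [](std::shared_ptr<RawFragment> frag) { return decode(*frag); }) &
        // Stage 3: serial, in-order hand-off to the downstream consumer.
        tbb::make_filter<DecodedRecord, void>(
            tbb::filter_mode::serial_in_order,
            [](const DecodedRecord& rec) { dispatch(rec); }));
}

int main() { runPipeline(8); }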
The efficiency of the Data Acquisition (DAQ) system of the Compact Muon Solenoid (CMS) experiment for LHC Run 2 is constantly being improved. A significant factor affecting the data-taking efficiency is the experience of the DAQ operator. One of the main responsibilities of the DAQ operator is to carry out the proper recovery procedure in case of a failure of data-taking. At the start of Run 2, understanding the problem and finding the right remedy could take a considerable amount of time (up to many minutes). Operators relied heavily on the support of on-call experts, also outside working hours. Wrong decisions made under time pressure sometimes led to additional overhead in the recovery time. To increase the efficiency of CMS data-taking we developed a new expert system, DAQExpert, which provides shifters with optimal recovery suggestions instantly when a failure occurs. DAQExpert is a web application that analyzes frequently updated monitoring data from all DAQ components and identifies problems based on expert knowledge expressed in small, independent logic modules written in Java. Its results are presented in real time in the control room via a web-based GUI and a sound system, in the form of a short description of the current failure and the steps to recover.
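A schematic sketch of what one such logic module might look like is given below. It is written in C++ although the production DAQExpert modules are Java, and the snapshot fields, thresholds and suggested recovery text are invented for illustration; the only point it makes is that each module is a small, independent "condition plus recovery suggestion" unit.

// Schematic sketch of an expert-system "logic module": a small, independent
// unit that inspects a monitoring snapshot and, if its condition fires,
// proposes a recovery action. Fields and thresholds are illustrative.
#include <iostream>
#include <optional>
#include <string>
#include <vector>

// Hypothetical slice of the DAQ monitoring data.
struct MonitoringSnapshot {
    double triggerRateHz;
    bool   runOngoing;
    std::vector<std::string> stuckFEDs;   // front-end drivers not sending data
};

// Common interface: each module encodes one piece of expert knowledge.
struct LogicModule {
    virtual ~LogicModule() = default;
    virtual std::optional<std::string> check(const MonitoringSnapshot& s) const = 0;
};

// Example module: the run is ongoing but no triggers are accepted because
// one or more front-end drivers stopped sending data.
struct StuckFEDModule : LogicModule {
    std::optional<std::string> check(const MonitoringSnapshot& s) const override {
        if (s.runOngoing && s.triggerRateHz == 0.0 && !s.stuckFEDs.empty()) {
            return "Rate is zero and " + s.stuckFEDs.front() +
                   " is not sending data: stop the run, recycle the affected "
                   "subsystem, then start a new run.";
        }
        return std::nullopt;   // condition not met, no suggestion
    }
};

int main() {
    MonitoringSnapshot snap{0.0, true, {"FED 1234"}};
    StuckFEDModule module;
    if (auto suggestion = module.check(snap))
        std::cout << *suggestion << '\n';
}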
The CDF Silicon Vertex Trigger — Ashmanskas, Bill; Barchiesi, A.; Bardi, A. et al. Nuclear Instruments & Methods in Physics Research Section A, Vol. 518, No. 1, February 2004. Peer-reviewed journal article.
The Collider Detector at Fermilab (CDF) experiment's Silicon Vertex Trigger (SVT) is a system of 150 custom 9U VME boards that reconstructs axial tracks in the CDF silicon strip detector in a 15 μs pipeline. SVT's 35 μm impact parameter resolution enables CDF's Level 2 trigger to distinguish primary and secondary particles, and hence to collect large samples of hadronic bottom and charm decays. We review some of SVT's key design features. Speed is achieved with custom VLSI pattern recognition, linearized track fitting, pipelining, and parallel processing. Testing and reliability are aided by built-in logic state analysis and test-data sourcing at each board's input and output, a common interboard data link, and a universal “Merger” board for data fan-in/fan-out. Speed and adaptability are enhanced by use of modern FPGAs.
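The linearized track fitting mentioned above computes each track parameter as a precomputed linear combination of the hit coordinates plus a constant, which reduces the fit to a handful of multiply-accumulate operations per parameter and maps naturally onto fast pipelined hardware. The sketch below illustrates the idea with placeholder constants and an arbitrary number of input coordinates; it is not the actual SVT firmware or its fit constants.

// Illustrative linearized track fit: each output parameter is a precomputed
// linear combination of the input hit coordinates plus an offset.
// Constants and dimensions are placeholders, not the real SVT values.
#include <array>
#include <cstddef>
#include <cstdio>

constexpr std::size_t kNCoords = 6;   // number of input coordinates (illustrative)
constexpr std::size_t kNParams = 3;   // e.g. impact parameter, phi, curvature

struct FitConstants {
    std::array<std::array<double, kNCoords>, kNParams> v;  // linear coefficients
    std::array<double, kNParams> q;                        // additive offsets
};

// p_i = sum_j v_ij * x_j + q_i  -- one multiply-accumulate chain per parameter.
std::array<double, kNParams> linearFit(const FitConstants& c,
                                       const std::array<double, kNCoords>& x) {
    std::array<double, kNParams> p{};
    for (std::size_t i = 0; i < kNParams; ++i) {
        p[i] = c.q[i];
        for (std::size_t j = 0; j < kNCoords; ++j)
            p[i] += c.v[i][j] * x[j];
    }
    return p;
}

int main() {
    FitConstants c{};                       // placeholder constants (all zero)
    std::array<double, kNCoords> hits{};    // placeholder hit coordinates
    auto p = linearFit(c, hits);
    std::printf("d0=%g phi0=%g curvature=%g\n", p[0], p[1], p[2]);
}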
The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10³⁴ cm⁻²s⁻¹ (levelled), at the price of extreme pileup of up to 200 interactions per crossing. In LS3, the CMS detector will also undergo a major upgrade to prepare for Phase-2 of the LHC physics program, starting around 2025. The upgraded detector will be read out at an unprecedented data rate of up to 50 Tb/s and an event rate of 750 kHz. Complete events will be analysed by software algorithms running on standard processing nodes, and selected events will be stored permanently at a rate of up to 10 kHz for offline processing and analysis. In this paper we discuss the baseline design of the DAQ and HLT systems for Phase-2, taking into account the projected evolution of high-speed network fabrics for event building and distribution, and the anticipated performance of general-purpose CPUs. Implications for the hardware and infrastructure requirements of the DAQ "data center" are analysed. Emerging technologies for data reduction are considered. Novel possible approaches to event building and online processing, inspired by trending developments in other areas of computing dealing with large masses of data, are also examined. We conclude by discussing the opportunities offered by reading out and processing parts of the detector, wherever the front-end electronics allows, at the machine clock rate (40 MHz). This idea presents interesting challenges and its physics potential should be studied.
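Taken at face value, the quoted numbers imply an average event size of roughly 8 MB and a storage throughput of order 80 GB/s if events were stored at full size. The back-of-the-envelope check below makes that arithmetic explicit; it assumes the 50 Tb/s and 750 kHz figures refer to the same data stream and ignores any further size reduction before storage.

// Back-of-the-envelope check of the Phase-2 figures quoted above; purely
// arithmetic, assuming 50 Tb/s and 750 kHz describe the same data stream.
#include <cstdio>

int main() {
    constexpr double readout_bps = 50e12;   // 50 Tb/s detector readout
    constexpr double l1_rate_hz  = 750e3;   // 750 kHz events into the HLT
    constexpr double hlt_out_hz  = 10e3;    // 10 kHz stored for offline

    constexpr double event_bytes = readout_bps / 8.0 / l1_rate_hz;   // ~8.3 MB
    constexpr double storage_Bps = event_bytes * hlt_out_hz;         // ~83 GB/s

    std::printf("average event size : %.1f MB\n", event_bytes / 1e6);
    std::printf("storage throughput : %.0f GB/s (assuming no further reduction)\n",
                storage_Bps / 1e9);
}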
During Run-1 of the LHC, many operational procedures have been automated in the run control system of the Compact Muon Solenoid (CMS) experiment. When detector high voltages are ramped up or down, or upon certain beam mode changes of the LHC, the DAQ system is automatically partially reconfigured with new parameters. Certain types of errors, such as those caused by single-event upsets, may trigger an automatic recovery procedure. Furthermore, the top-level control node continuously performs cross-checks to detect sub-system actions becoming necessary because of changes in configuration keys, changes in the set of included front-end drivers, or potential clock instabilities. The operator is guided to perform the necessary actions through graphical indicators displayed next to the relevant command buttons in the user interface. Through these indicators, consistent configuration of CMS is ensured. However, manually following the indicators can still be inefficient at times. A new assistant to the operator has therefore been developed that can automatically perform all the necessary actions in a streamlined order. If additional problems arise, the new assistant tries to recover from them automatically. With the new assistant, a run can be started from any state of the sub-systems with a single click. An ongoing run may be recovered with a single click, once the appropriate recovery action has been selected. We review the automation features of CMS Run Control and discuss the new assistant in detail, including first operational experience.
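Conceptually, the assistant can be thought of as evaluating each cross-check against the current system state, collecting the sub-system actions that have become necessary, and executing them in a fixed order. The sketch below illustrates that loop with invented state fields, checks and actions; it is not the actual Run Control implementation.

// Schematic sketch of an operator assistant: evaluate cross-checks against the
// current system state and execute the resulting actions in a fixed order.
// State fields, checks and actions are invented for illustration.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

struct SystemState {
    bool configurationKeyChanged = false;
    bool fedListChanged          = false;
};

struct CrossCheck {
    std::string actionName;                          // e.g. "reconfigure sub-detector X"
    std::function<bool(const SystemState&)> needed;  // does the state require this action?
    std::function<void()> execute;                   // the action itself
};

// Run every check; perform the needed actions in the order the checks are listed.
void runAssistant(const SystemState& state, const std::vector<CrossCheck>& checks) {
    for (const auto& check : checks) {
        if (check.needed(state)) {
            std::cout << "assistant: " << check.actionName << '\n';
            check.execute();
        }
    }
}

int main() {
    SystemState state{true, false};
    std::vector<CrossCheck> checks = {
        {"re-sync to new configuration key",
         [](const SystemState& s) { return s.configurationKeyChanged; }, [] {}},
        {"reconfigure with updated front-end driver list",
         [](const SystemState& s) { return s.fedListChanged; }, [] {}},
    };
    runAssistant(state, checks);
}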
Performance of the CMS Event Builder — Andre, J-M; Behrens, U; Branson, J et al. Journal of Physics: Conference Series, Vol. 898, No. 3, October 2017. Peer-reviewed journal article, open access.
The data acquisition system (DAQ) of the CMS experiment at the CERN Large Hadron Collider assembles events at a rate of 100 kHz, transporting event data at an aggregate throughput of O(100 GB/s) to the high-level trigger farm. The DAQ architecture is based on state-of-the-art network technologies for event building. For the data concentration, 10/40 Gbit/s Ethernet technologies are used together with a reduced TCP/IP protocol implemented in FPGA for reliable transport between custom electronics and commercial computing hardware. A 56 Gbit/s InfiniBand FDR Clos network has been chosen for the event builder. This paper presents the implementation and performance of the event-building system.
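Event building here means collecting, for every accepted event, one data fragment from each front-end source and concatenating them into a single record for the high-level trigger. The sketch below shows that gather step in a single process with an invented fragment layout; in the real system the gathering is distributed over the network fabric rather than done in memory on one node.

// Minimal sketch of the event-building gather step: one fragment per source
// per event, assembled into a complete event record. Layout is illustrative.
#include <cstddef>
#include <cstdint>
#include <map>
#include <optional>
#include <vector>

struct Fragment {
    uint64_t eventId;
    uint16_t sourceId;
    std::vector<uint8_t> data;
};

struct CompleteEvent {
    uint64_t eventId;
    std::vector<uint8_t> data;   // concatenated fragments, ordered by source
};

class EventBuilder {
public:
    explicit EventBuilder(std::size_t nSources) : nSources_(nSources) {}

    // Add one fragment; returns a complete event once all sources contributed.
    std::optional<CompleteEvent> add(Fragment f) {
        auto& slot = pending_[f.eventId];
        slot[f.sourceId] = std::move(f.data);
        if (slot.size() < nSources_) return std::nullopt;

        CompleteEvent ev{f.eventId, {}};
        for (auto& [src, bytes] : slot)   // std::map keeps fragments in source order
            ev.data.insert(ev.data.end(), bytes.begin(), bytes.end());
        pending_.erase(f.eventId);
        return ev;
    }

private:
    std::size_t nSources_;
    std::map<uint64_t, std::map<uint16_t, std::vector<uint8_t>>> pending_;
};

int main() {
    EventBuilder eb(2);
    eb.add({1, 0, {0xAA}});
    auto ev = eb.add({1, 1, {0xBB}});
    return ev ? 0 : 1;   // 0 if the event was fully built
}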
During the LHC Long Shutdown 1, the CMS Data Acquisition (DAQ) system underwent a partial redesign to replace obsolete network equipment, use more homogeneous switching technologies, and support new detector back-end electronics. The software and hardware infrastructure to provide input, execute the High Level Trigger (HLT) algorithms, and deal with output data transport and storage has also been redesigned to be completely file-based. All the metadata needed for bookkeeping are stored in files as well, in the form of small documents using the JSON encoding. The Storage and Transfer System (STS) is responsible for aggregating these files produced by the HLT, storing them temporarily, and transferring them to the T0 facility at CERN for subsequent offline processing. The STS merger service aggregates the output files from the HLT from ~62 sources, produced at an aggregate rate of ~2 GB/s. An estimated bandwidth of 7 GB/s in concurrent read-write mode is needed. Furthermore, the STS has to be able to store several days of continuous running, so an estimated 250 TB of total usable disk space is required. In this article we present the various technological and implementation choices for the three components of the STS: the distributed file system, the merger service, and the transfer system.
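A minimal sketch of a file-based merge step in this spirit is shown below: the per-source data files for one chunk of data are concatenated and a small JSON bookkeeping document is written for the merged output. The file layout, metadata fields and merge granularity are assumptions for illustration, not the actual STS merger implementation.

// Illustrative sketch of a file-based merge step: concatenate the per-source
// data files for one chunk and write a summary bookkeeping document.
// Directory layout and metadata fields are hypothetical.
#include <cstddef>
#include <filesystem>
#include <fstream>
#include <vector>

namespace fs = std::filesystem;

// Append the contents of 'src' to the output stream.
void appendFile(std::ofstream& out, const fs::path& src) {
    std::ifstream in(src, std::ios::binary);
    out << in.rdbuf();
}

// 'inputs' and 'eventCounts' are parallel arrays: one data file and one event
// count (taken from the per-source JSON documents) per HLT source.
void mergeChunk(const std::vector<fs::path>& inputs,
                const fs::path& mergedData,
                const fs::path& mergedMeta,
                const std::vector<long>& eventCounts) {
    std::ofstream out(mergedData, std::ios::binary);
    long totalEvents = 0;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        appendFile(out, inputs[i]);
        totalEvents += eventCounts[i];
    }
    // Minimal JSON bookkeeping document for the merged chunk (fields illustrative).
    std::ofstream meta(mergedMeta);
    meta << "{ \"nFiles\": " << inputs.size()
         << ", \"nEvents\": " << totalEvents << " }\n";
}

int main() {
    // Trivial invocation with empty inputs, just to show the call pattern.
    mergeChunk({}, "merged.dat", "merged.json", {});
}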