Abstract
Background
Chronic pain (CP) remains the second commonest reason for being off work. Tertiary return to work (RTW) interventions aim to improve psychological and physical capacity amongst workers already off sick. Their effectiveness for workers with CP is unclear.
Aims
To explore which tertiary interventions effectively promote RTW for CP sufferers.
Methods
We searched eight databases for randomized controlled trials evaluating the effectiveness of tertiary RTW interventions for CP sufferers. We employed the Cochrane Risk of Bias (ROB) and methodological quality assessment tools for all included papers. We synthesized findings narratively. Meta-analysis was not possible due to heterogeneity of study characteristics.
Results
We included 16 papers pertaining to 13 trials. The types, delivery formats and follow-up schedules of RTW interventions varied greatly. Most treatments were multidisciplinary, comprising psychological, physical and workplace elements. Five trials reported that tertiary interventions with multidisciplinary elements promoted RTW for workers with CP compared to controls. We gave a high ROB rating on one or more assessment criteria to three of the five successful intervention trials. Two had medium- and low-risk elements across all categories: one compared multidisciplinary treatments of different intensities and one comprised work-hardening with a job coach. Seven trials found treatment effects for secondary outcomes but no RTW improvement.
Conclusions
There is no conclusive evidence to support any specific tertiary RTW intervention for workers with CP, but multidisciplinary efforts should be considered. Workers’ compensation is an important area for RTW policymakers to consider.
ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter (the Quark-Gluon Plasma) at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements of the system is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. The ALFA framework will standardize and handle software tasks such as readout, data transport, frame building, calibration and online reconstruction in the upgraded computing system. ALFA supports two data transport libraries: ZeroMQ and nanomsg. This paper discusses the efficiency of ALFA in terms of high-throughput data transport. The tests were performed with multiple FLPs pushing data to multiple EPNs. The transfer used push-pull communication patterns and two socket configurations: bind and connect. A set of benchmarks was prepared to obtain the best performance on each hardware setup. The paper presents the measurement process and the final results: data throughput combined with computing resource usage as a function of block size. The high number of nodes and connections in the final setup may cause race conditions that can lead to uneven load balancing and poor scalability. The performed tests allow us to validate whether the traffic is distributed evenly over all receivers. They also measure the behaviour of the network in saturation and evaluate scalability from a 1-to-1 to an N-to-M solution.
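The push-pull distribution the benchmarks rely on can be sketched in a few lines. The following is a minimal, illustrative example using the pyzmq bindings; the function name `run_pipeline`, the `inproc://flp` endpoint and the toy message counts are our own stand-ins for real FLP-to-EPN traffic, not part of ALFA.

```python
import threading

import zmq  # pyzmq

def run_pipeline(n_consumers=3, n_messages=12):
    ctx = zmq.Context()
    push = ctx.socket(zmq.PUSH)
    push.bind("inproc://flp")            # the FLP side binds, EPNs connect

    received = [0] * n_consumers
    ready = threading.Barrier(n_consumers + 1)

    def epn(idx):
        pull = ctx.socket(zmq.PULL)
        pull.connect("inproc://flp")
        ready.wait()                     # all EPNs attached before data flows
        while pull.recv() != b"stop":    # a real EPN would build frames here
            received[idx] += 1
        pull.close()

    threads = [threading.Thread(target=epn, args=(i,)) for i in range(n_consumers)]
    for t in threads:
        t.start()
    ready.wait()
    for i in range(n_messages):
        push.send(b"block-%d" % i)       # stand-in for a readout data block
    for _ in range(n_consumers):
        push.send(b"stop")               # PUSH round-robins one stop per EPN
    for t in threads:
        t.join()
    push.close()
    ctx.term()
    return received
```

Because a PUSH socket distributes messages round-robin over its connected peers, the per-consumer counts come out even when every consumer is attached before sending starts; the uneven-load concern raised in the abstract arises when peers attach late or drain at different speeds.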
MAD - Monitoring ALICE Dataflow. Barroso, V Chibante; Costa, F; Grigoras, C; et al.
Journal of Physics: Conference Series, 01/2015, Volume 664, Issue 8
Journal Article
Peer reviewed
Open access
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Following a successful Run 1, which ended in February 2013, the ALICE data acquisition (DAQ) entered a consolidation phase to prepare for Run 2, which will start at the beginning of 2015. A new software tool has been developed by the data acquisition project to improve the monitoring of the experiment's dataflow, from the data readout in the DAQ farm up to its shipment to CERN's main computer centre. This software, called ALICE MAD (Monitoring ALICE Dataflow), uses the MonALISA framework as its core module to gather, process, aggregate and distribute monitoring values from the different processes running in the distributed DAQ farm. Data are not only pulled from the data sources to MAD but can also be pushed by dedicated data collectors or by the data source processes themselves. A large set of monitored metrics (from the backpressure status on the readout links to event counters in each of the DAQ nodes and aggregated data rates for the whole data acquisition) is needed to provide a comprehensive view of the DAQ status. MAD also injects alarms into the Orthos alarm system whenever abnormal conditions are detected. The MAD web-based GUI uses WebSockets to provide dynamic and on-time status displays for the ALICE shift crew. Designed as a widget-based system, MAD supports easy integration of new visualization blocks and customization of the information displayed to the shift crew based on the ALICE activities.
A Large Ion Collider Experiment (ALICE) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). The online Data Quality Monitoring (DQM) plays an essential role in the experiment operation by providing shifters with immediate feedback on the data being recorded, in order to quickly identify and overcome problems. Immediate access to the DQM results is needed not only by shifters in the control room but also by detector experts worldwide. As a consequence, a new web application has been developed to dynamically display and manipulate the ROOT-based objects produced by the DQM system in a flexible and user-friendly interface. The architecture and design of the tool, its main features and the technologies that were used, both on the server and the client side, are described. In particular, we detail how we took advantage of the most recent ROOT JavaScript I/O and web server library to give interactive access to ROOT objects stored in a database. We also describe the use of modern web techniques and packages such as AJAX, DHTMLX and jQuery, which have been instrumental in the successful implementation of a reactive and efficient application. We finally present the resulting application and how code quality was ensured. We conclude with a roadmap for future technical and functional developments.
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. To meet these requirements, the software and hardware layers of the new system need to be fully evaluated. In order to achieve a high performance-to-cost ratio, three networking technologies (Ethernet, InfiniBand and Omni-Path) were benchmarked on Intel and IBM platforms. The core of the new transport layer will be based on a message queue library that supports push-pull and request-reply communication patterns and multipart messages. ZeroMQ and nanomsg are being evaluated as candidates and were tested in detail over the selected network technologies. This paper describes the benchmark programs and setups that were used during the tests, the significance of tuned kernel parameters, the configuration of the network drivers and the tuning of multi-core, multi-CPU and NUMA (Non-Uniform Memory Access) architectures. It presents, compares and comments on the final results. Eventually, it indicates the most efficient network technology and message queue library pair and provides an evaluation of the CPU and memory resources needed to handle the foreseen traffic.
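The multipart-message feature mentioned in the abstract keeps a descriptor frame and its payload together as one atomic transfer. A minimal, illustrative pyzmq sketch follows; the helper name and the JSON header layout are our own, not part of the benchmarked systems.

```python
import json

import zmq  # pyzmq

def send_block(payload):
    """Send a (header, payload) pair as one atomic multipart message."""
    ctx = zmq.Context()
    sender = ctx.socket(zmq.PAIR)
    receiver = ctx.socket(zmq.PAIR)
    sender.bind("inproc://bench")
    receiver.connect("inproc://bench")

    # Both frames are delivered together or not at all.
    header = json.dumps({"size": len(payload)}).encode()
    sender.send_multipart([header, payload])

    frames = receiver.recv_multipart()
    meta = json.loads(frames[0])
    sender.close()
    receiver.close()
    ctx.term()
    return meta, frames[1]
```

Keeping metadata and bulk data in separate frames of one message avoids a copy to concatenate them while still guaranteeing they cannot be interleaved with other traffic.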
MAD - Monitoring ALICE Dataflow. Carena, F.; Carena, W.; Chapeland, S.; et al.
2014 19th IEEE-NPSS Real Time Conference
Conference Proceeding
Open access
ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to study the physics of strongly interacting matter and the quark-gluon plasma at the CERN Large Hadron Collider (LHC). Following a successful Run 1, which ended in February 2013, the ALICE data acquisition (DAQ) entered a consolidation phase to prepare for Run 2, which will start at the beginning of 2015. One of the identified points for improvement was the monitoring of the experiment dataflow, from the data arrival on the DAQ farm via the readout links up to its shipment to CERN's main computer centre. To address this requirement, the ALICE MAD (Monitoring ALICE Dataflow) system was developed. MAD uses the MonALISA framework as its core module to gather, process, aggregate and distribute monitoring values from the different processes running in the distributed DAQ farm. It allows data not only to be pulled from the data sources but also to be pushed by dedicated data collectors or the data source processes themselves. To provide a complete view of the data acquisition status, the monitored metrics range from the backpressure status on the readout links to event counters in each of the DAQ nodes and aggregated data rates for the whole data acquisition. MAD also injects alarms into the Orthos alarm facility whenever abnormal conditions are detected. To support the ALICE shift crew, MAD interfaces with a dedicated web-based GUI that uses WebSockets to provide dynamic and on-time status displays. Designed as a widget-based system, it allows not only easy integration of new visualization blocks but also customization of the information displayed to the shift crew based on the ALICE activities.
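The two ingestion modes described above, collector-driven pull and source-driven push, can be illustrated with a small, hypothetical collector class. This is a pure-Python sketch with names of our own choosing; it does not reflect MAD's or MonALISA's actual interfaces.

```python
import time

class MetricCollector:
    """Toy collector supporting both ingestion modes: pull and push."""

    def __init__(self):
        self.series = {}    # metric name -> list of (timestamp, value)
        self.sources = {}   # metric name -> zero-argument read callable

    def register_source(self, name, read_fn):
        """Register a source the collector will poll itself."""
        self.sources[name] = read_fn

    def pull_all(self):
        """Collector-driven: read every registered source once."""
        for name, read_fn in self.sources.items():
            self.push(name, read_fn())

    def push(self, name, value, ts=None):
        """Source-driven: a producer delivers a value directly."""
        self.series.setdefault(name, []).append((ts or time.time(), value))

    def latest(self, name):
        return self.series[name][-1][1]
```

Supporting both modes lets slow-changing metrics be polled on a schedule while bursty ones (e.g. backpressure flags) are pushed the moment they change.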
A computer program named ANKAphase is presented that processes X-ray inline phase-contrast radiographs by reconstructing the projected thickness of the object(s) imaged. The program uses a single-distance, non-iterative phase-retrieval algorithm described by David Paganin et al. (2002), J. Microsc. 206, 33–40. Allowing for non-negligible absorption in the sample, this method is strictly valid only for monochromatic illumination and single-material objects, but tolerates deviations from these conditions, especially polychromaticity. ANKAphase is designed to be applied to tomography data (although it does not perform tomographic reconstruction itself). It can process series of images and perform flat-field and dark-field correction. Written in Java, ANKAphase has an intuitive graphical user interface and can be run either as a stand-alone application or as a plugin to ImageJ, a widely used scientific image-processing program. A description of ANKAphase is given and example applications are shown.
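For orientation, the single-distance Paganin filter at the heart of ANKAphase can be sketched in a few lines of NumPy. This is a toy transcription of the published formula, not the ANKAphase code; the function name, argument names and the choice of units are ours.

```python
import numpy as np

def paganin_thickness(intensity, flat, dist, delta, mu, pixel_size):
    """Single-distance phase retrieval after Paganin et al. (2002).

    intensity  : detected radiograph
    flat       : flat-field image (incident intensity I0)
    dist       : sample-to-detector propagation distance
    delta      : refractive-index decrement of the (single) material
    mu         : linear attenuation coefficient
    pixel_size : detector pixel size (same length unit as dist and 1/mu)
    Returns the projected thickness of the object.
    """
    contact = intensity / flat                   # flat-field-corrected image I/I0
    ny, nx = contact.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)
    k2 = kx[None, :] ** 2 + ky[:, None] ** 2
    lowpass = 1.0 / (1.0 + dist * delta / mu * k2)   # Paganin low-pass filter
    filtered = np.fft.ifft2(np.fft.fft2(contact) * lowpass).real
    return -np.log(np.clip(filtered, 1e-12, None)) / mu
```

A quick sanity check: for a uniform slab only the DC Fourier component is populated, where the filter equals one, so the routine reduces to a Beer-Lambert inversion and returns the slab thickness.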
The ALICE Experiment was designed to study the physics of strongly interacting matter with heavy-ion collisions at the CERN LHC. A major upgrade of the detector and computing model (O2, Offline-Online) is currently ongoing. The ALICE O2 farm will consist of almost 1000 nodes enabled to read out and process on-the-fly about 27 Tb/s of raw data. To efficiently operate the experiment and the O2 facility, a new monitoring system was developed. It will provide a complete overview of the overall health, and detect performance degradation and component failures, by collecting, processing, storing and visualising data from hardware and software sensors and probes. The core of the system is based on Apache Kafka, ensuring high-throughput, fault-tolerant metric aggregation and processing with the help of Kafka Streams. In addition, Telegraf provides operating system sensors, InfluxDB is used as a time-series database, and Grafana as a visualisation tool. This tool selection evolved from the initial version, where collectd was used instead of Telegraf, and Apache Flume together with Apache Spark instead of Apache Kafka.
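The kind of windowed metric aggregation that the Kafka Streams layer performs can be illustrated with a small pure-Python stand-in. The function and field names here are ours; a real deployment would express this as a Streams topology over Kafka topics rather than an in-memory loop.

```python
from collections import defaultdict

WINDOW_S = 10  # aggregation window length in seconds (our choice for the sketch)

def windowed_mean(readings):
    """readings: iterable of (timestamp_s, node, value) tuples.

    Groups readings by (node, window) and returns
    {(node, window_start_s): mean value over that window}.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, node, value in readings:
        key = (node, int(ts // WINDOW_S) * WINDOW_S)  # tumbling-window key
        sums[key] += value
        counts[key] += 1
    return {key: sums[key] / counts[key] for key in sums}
```

Downsampling per-node sensor streams into windowed means like this is what keeps the time-series database and dashboards responsive at farm scale.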
A variety of workplace-based interventions exist to reduce stress and increase productivity. However, the efficacy of these interventions is sometimes unclear.
To determine whether complementary therapies offered in the workplace improve employee well-being.
We performed a systematic literature review which involved an electronic search of articles published between January 2000 and July 2015 from the databases Cochrane Central Register of Controlled Trials, PsycINFO, MEDLINE, AMED, CINAHL Plus, EMBASE and PubMed. We also undertook a manual search of all applicable article reference lists to ensure that no relevant studies were missed. We only selected published, full-length, English-language, peer-reviewed journal articles. Articles had to address the research objective using valid and reliable measures. We excluded articles concerning return to work or whose populations had been adversely affected by work resulting in the development of health issues.
We included 10 articles in the review from the 131 identified. Mindfulness and meditation-based interventions were most effective in improving workplace health and work performance, with the latter outcome showing some evidence of maintained gains up to 3 months later. The evidence for relaxation interventions was inconclusive.
Mindfulness and meditation interventions may be helpful in improving both psychosocial workplace health and work performance, but long-term efficacy has yet to be fully determined.