Juvenile idiopathic arthritis (JIA), the most common chronic rheumatic disease of childhood, is characterised by synovitis. Clinical assessments of synovitis are imperfect, relying on composite and indirect measures of disease activity including clinician-reported measures, patient-reported measures and blood markers. Contrast-enhanced MRI is a more sensitive synovitis assessment technique, but clinical utility is currently limited by availability and inter-observer variation. Improved quantitative MRI techniques may enable future development of more stringent MRI-defined remission criteria. The objective of this study was to determine the utility and feasibility of quantitative MRI measurement of synovial volume and vascularity in JIA before and twelve weeks after intra-articular glucocorticoid injection (IAGI) of the knee, and to assess the acceptability of MRI to participating families.
Children and young people with JIA and a new episode of knee synovitis requiring IAGI were recruited from the Great North Children's Hospital in Newcastle upon Tyne. Quantitative contrast-enhanced MRI was performed prior to and twelve weeks after IAGI, in addition to standard clinical assessment tools, including the three-variable clinical juvenile arthritis disease activity score (cJADAS) and active joint count.
Eleven young people (5 male, median age 13 years, range 7-16) with JIA knee flare were recruited and 10 completed follow-up assessment. Following IAGI, the median (interquartile range) cJADAS improved from 8.5 (2.7) to 1.6 (3.9), whilst the median synovial volume improved from 38.5 cm³ (82.1 cm³) to 0.0 cm³ (0.2 cm³). Six patients presented with frank synovitis outside normal limits on routine MRI reporting. A further three had baseline MRI reports within normal limits, but the quantitative measurements identified measurable synovial uptake. Post-IAGI quantitative measurements highlighted significant improvements in 9 patients.
IAGI led to a marked reduction in synovial volume, with quantitative MRI identifying more patients with an improved synovial volume than routine qualitative clinical reporting. Improvements in cJADAS scores were more variable with the patient/parent global assessment component contributing most to the scores. Further work is indicated, exploring the utility of quantitative MRI in the assessment of less accessible joints and comparing the impact of different treatment modalities.
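The three-variable cJADAS discussed above is conventionally computed as the sum of the physician global assessment (0-10 VAS), the patient/parent global assessment (0-10 VAS) and the active joint count truncated at 10. A minimal sketch of that scoring rule follows; the component values in the example are hypothetical decompositions chosen to reproduce the reported medians, not study data:

```python
def cjadas10(physician_global: float, patient_global: float,
             active_joints: int) -> float:
    """Three-variable clinical JADAS (cJADAS10): physician global VAS (0-10)
    + patient/parent global VAS (0-10) + active joint count capped at 10.
    Total score range is therefore 0-30."""
    if not (0 <= physician_global <= 10 and 0 <= patient_global <= 10):
        raise ValueError("global assessments must lie on a 0-10 VAS")
    if active_joints < 0:
        raise ValueError("active joint count cannot be negative")
    return physician_global + patient_global + min(active_joints, 10)

# Hypothetical component values reproducing the reported medians:
print(cjadas10(3.0, 4.5, 1))   # pre-IAGI median: 8.5
print(cjadas10(0.5, 1.1, 0))   # post-IAGI median: 1.6
```

Because the two global assessments dominate the truncated joint count once synovitis resolves, a residual patient/parent global score keeps the total elevated even at zero active joints, consistent with the variability noted above.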
While a majority of CMS data analysis activities rely on the distributed computing infrastructure on the WLCG Grid, dedicated local computing facilities have been deployed to address particular requirements in terms of latency and scale. The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In order to reach the goal for fast-turnaround tasks, the Workload Management group has designed a CRABServer-based system to fit two main needs: to provide a simple, familiar interface to the user (as used in the CRAB Analysis Tool) and to allow an easy transition to the Tier-0 system. While the CRABServer component had initially been designed for Grid analysis by CMS end-users, with a few modifications it turned out to be also a very powerful service to manage and monitor local submissions on the CAF. Transition to Tier-0 has been guaranteed by the use of WMCore, a library developed by CMS to be the common core of workload management tools, for handling data-driven workflow dependencies. This system is now being used with the first use cases, and important experience is being acquired. In addition to the CERN CAF facility, FNAL has CMS-dedicated analysis resources at the FNAL LHC Physics Center (LPC). In the first few years of data collection FNAL has been able to accept a large fraction of CMS data. The remote centre is not well suited for the extremely low-latency work expected of the CAF, but the presence of substantial analysis resources, a large resident community, and a large fraction of the data make the LPC a strong facility for resource-intensive analysis.
We present the building, commissioning and operation of these dedicated analysis facilities in the first year of LHC collisions; we also present the specific development to our software needed to allow for the use of these computing facilities in the special use cases of fast turnaround analyses.
The CMS CERN Analysis Facility (CAF) Buchmüller, O; Bonacorsi, D; Fanzago, F ...
Journal of Physics: Conference Series, 04/2010, Volume 219, Issue 5
Journal Article · Peer-reviewed · Open access
The CMS CERN Analysis Facility (CAF) was primarily designed to host a large variety of latency-critical workflows. These break down into alignment and calibration, detector commissioning and diagnosis, and high-interest physics analysis requiring fast turnaround. In addition to the low-latency requirement on the batch farm, another mandatory condition is efficient access to the RAW detector data stored at the CERN Tier-0 facility. The CMS CAF also foresees resources for interactive login by a large number of CMS collaborators located at CERN, as an entry point for their day-to-day analysis. These resources will run on a separate partition in order to protect the high-priority use cases described above. While the CMS CAF represents only a modest fraction of the overall CMS resources on the WLCG Grid, an appropriately sized user-support service needs to be provided. We will describe the building, commissioning and operation of the CMS CAF during the year 2008. The facility was heavily and routinely used by almost 250 users during multiple commissioning and data challenge periods. It reached a CPU capacity of 1.4 MSI2K and a disk capacity at the petabyte scale. In particular, we will focus on performance in terms of networking, disk access and job efficiency, and extrapolate prospects for the upcoming first year of LHC data taking. We will also present the experience gained and the limitations observed in operating such a large facility, in which well-controlled workflows are combined with more chaotic analysis activity by a large number of physicists.
The CMS Collaboration relies on 7 globally distributed Tier-1 computing centres located at large universities and national laboratories for a second custodial copy of the CMS RAW data and the primary copy of the simulated data, data-serving capacity to Tier-2 centres for analysis, and the bulk of the reprocessing and event selection capacity in the experiment. The Tier-1 sites have a challenging role in CMS because they are expected to ingest and archive data from both CERN and regional Tier-2 centres, while they export data to a global mesh of Tier-2s at rates comparable to the raw export data rate from CERN. The combined capacity of the Tier-1 centres is more than twice the resources located at CERN, and efficiently utilizing this large distributed resource represents a challenge. In this article we will discuss the experience of building, operating, and utilizing the CMS Tier-1 computing centres. We will summarize the facility challenges at the Tier-1s, including the stable operation of CMS services, the ability to scale to large numbers of processing requests and large volumes of data, and the ability to provide custodial storage and high-performance data serving. We will also present the operations experience of utilizing the distributed Tier-1 centres from a distance: transferring data, submitting data-serving requests, and submitting batch processing requests.
Complex scientific workflows can process large amounts of data using thousands of tasks. The turnaround times of these workflows are often affected by various latencies, such as the resource discovery, scheduling and data access latencies, for the individual workflow processes or actors. Minimizing these latencies will improve the overall execution time of a workflow and thus lead to a more efficient and robust processing environment. In this paper, we propose a pilot job concept that has intelligent data reuse and job execution strategies to minimize the scheduling, queuing, execution and data access latencies. The proposed approach has been evaluated, first using the CMS Tier-0 data processing workflow, and then by simulating the workflows to evaluate its effectiveness in a controlled environment. The results have shown that significant improvements in the overall turnaround time of a workflow can be achieved with this approach.
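The pilot job idea described above can be illustrated with a minimal sketch (not the paper's implementation): a pilot occupies a worker slot once, then pulls many short tasks from a central queue, amortising scheduling and queuing latency, while a local cache of input data lets repeated accesses skip the slow remote fetch. All names here (`Pilot`, `fetch`, the task dictionaries) are illustrative assumptions:

```python
import queue

class Pilot:
    """Illustrative pilot job: pulls tasks from a shared queue and reuses
    locally cached input data instead of re-fetching it remotely."""
    def __init__(self, task_queue, fetch):
        self.tasks = task_queue
        self.fetch = fetch          # remote data access (slow path)
        self.cache = {}             # local data reuse (fast path)
        self.remote_reads = 0       # count of slow-path accesses

    def get_data(self, name):
        # Data reuse strategy: only the first access to an input
        # pays the remote-transfer latency.
        if name not in self.cache:
            self.cache[name] = self.fetch(name)
            self.remote_reads += 1
        return self.cache[name]

    def run(self):
        # Job execution strategy: one pilot drains many tasks, so the
        # per-task scheduling/queuing latency is paid only once.
        results = []
        while True:
            try:
                task = self.tasks.get_nowait()
            except queue.Empty:
                return results
            data = self.get_data(task["input"])
            results.append(task["op"](data))

# Usage: three tasks share one input file; only one remote read occurs.
q = queue.Queue()
for _ in range(3):
    q.put({"input": "file_A", "op": len})
pilot = Pilot(q, fetch=lambda name: b"payload")
out = pilot.run()
print(pilot.remote_reads)  # 1
```

The same caching-plus-task-pulling structure is what reduces the per-task latency terms the abstract enumerates, independent of the specific batch system underneath.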
A system based on ROOT for handling the micro-DST of the BaBar experiment is described. The purpose of the Kanga system is to have micro-DST data available in a format well suited for data distribution within a world-wide collaboration with many small sites. The design requirements, implementation and experience in practice after three years of data taking by the BaBar experiment are presented.
Gallium arsenide for vertex detectors D'Auria, S.; Bates, R.; Da Vià, C. ...
Nuclear Instruments & Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 02/1997, Volume 386, Issue 1
Journal Article · Peer-reviewed
We give an overview of recent results in the development of GaAs detectors: they now have 100% charge collection efficiency with good reliability; they are bond-compatible with silicon detectors, with both strip and pixel geometry. New results on pixel detectors are reported, as well as a short summary on the radiation hardness of SIU-GaAs detectors.
CMS computing operations during run 1 Adelman, J; Alderweireldt, S; Artieda, J ...
Journal of Physics: Conference Series, 01/2014, Volume 513, Issue 3
Journal Article · Peer-reviewed · Open access
During the first run, CMS collected and processed more than 10B data events and simulated more than 15B events. Up to 100k processor cores were used simultaneously and 100PB of storage was managed. Each month petabytes of data were moved and hundreds of users accessed data samples. In this document we discuss the operational experience from this first run. We present the workflows and data flows that were executed, and we discuss the tools and services developed, and the operations and shift models used to sustain the system. Many techniques were followed from the original computing planning, but some were reactions to difficulties and opportunities. We also address the lessons learned from an operational perspective, and how this is shaping our thoughts for 2015.