Over the period 1987–1991 an inter-disciplinary five-country group developed the EuroQol instrument, a five-dimensional three-level generic measure subsequently termed the ‘EQ-5D’. It was designed to measure and value health status. The salient features of its development and its consolidation and expansion are discussed. Initial expansion came, in particular, in the form of new language versions. Their development raised translation and semantic issues, experience with which helped feed into the design of two further instruments, the EQ-5D-5L and the youth version EQ-5D-Y. The expanded usage across clinical programmes, disease and condition areas, population surveys, patient-reported outcomes, and value sets is outlined. Valuation has been of continued relevance for the Group as this has allowed its instruments to be utilised as part of the economic appraisal of health programmes and their incorporation into health technology assessments. The future of the Group is considered in the context of: (1) its scientific strategy, (2) changes in the external environment affecting the demand for EQ-5D, and (3) a variety of issues it is facing in the context of the design of the instrument, its use in health technology assessment, and potential new uses for EQ-5D outside of clinical trials and technology appraisal.
Large, longitudinal, multi-center MR neuroimaging studies require comprehensive quality assurance (QA) protocols for assessing the general quality of the compiled data, indicating potential malfunctions in the scanning equipment, and evaluating inter-site differences that need to be accounted for in subsequent analyses.
We describe the implementation of a QA protocol for functional magnetic resonance imaging (fMRI) data based on the regular measurement of an MRI phantom and an extensive variety of currently published QA statistics. The protocol is implemented in the MACS (Marburg-Münster Affective Disorders Cohort Study, http://for2107.de/), a two-center research consortium studying the neurobiological foundations of affective disorders. Between February 2015 and October 2016, 1214 phantom measurements were acquired using a standard fMRI protocol. Using 444 healthy control subjects who were measured between 2014 and 2016 in the cohort, we investigate the extent of between-site differences in contrast to the dependence on subject-specific covariates (age and sex) for structural MRI, fMRI, and diffusion tensor imaging (DTI) data.
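As an illustration of the kind of phantom-based QA statistics such a protocol tracks, the sketch below computes a mean signal, a signal-to-fluctuation-noise ratio (SFNR), and a percent drift from a 4D phantom time series, in the spirit of the fBIRN procedure. This is a minimal, hypothetical helper: the actual statistics, ROI definitions, and detrending used in the MACS pipeline may differ.

```python
import numpy as np

def phantom_qa_stats(timeseries):
    """Illustrative fMRI phantom QA statistics from a 4D array (x, y, z, t).

    Assumption: a small cube around the centre voxel serves as the ROI;
    real protocols typically use larger, carefully placed ROIs.
    """
    cx, cy, cz = (s // 2 for s in timeseries.shape[:3])
    roi = timeseries[cx - 1:cx + 2, cy - 1:cy + 2, cz - 1:cz + 2, :]
    roi_ts = roi.mean(axis=(0, 1, 2))          # spatial mean per volume

    mean_signal = roi_ts.mean()
    temporal_sd = roi_ts.std(ddof=1)

    # SFNR: mean signal divided by the temporal SD of the residuals
    # after removing a slow (quadratic) trend from the ROI time series.
    t = np.arange(roi_ts.size)
    trend = np.polyval(np.polyfit(t, roi_ts, 2), t)
    residual_sd = (roi_ts - trend).std(ddof=1)
    sfnr = mean_signal / residual_sd

    # Percent signal drift over the run, from a linear fit.
    slope = np.polyfit(t, roi_ts, 1)[0]
    drift_pct = 100.0 * slope * t.size / mean_signal

    return {"mean": mean_signal, "sd": temporal_sd,
            "sfnr": sfnr, "drift_pct": drift_pct}
```

In a longitudinal QA setting, such statistics would be computed per measurement and plotted over time so that scanner malfunctions or hardware changes show up as shifts or outliers in the resulting control charts.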
We show that most of the presented QA statistics differ markedly not only between the two scanners used for the cohort but also between experimental settings (e.g. hardware and software changes), demonstrate that some of these statistics depend on external variables (e.g. time of day, temperature), highlight their strong dependence on proper handling of the MRI phantom, and show how the use of a phantom holder may mitigate this dependence. Site effects, however, exist not only for the phantom data but also for human MRI data. Using T1-weighted structural images, we show that total intracranial (TIV), grey matter (GMV), and white matter (WMV) volumes differ significantly between the MR scanners, with large effect sizes. Voxel-based morphometry (VBM) analyses show that these structural differences observed between scanners are most pronounced in the bilateral basal ganglia, thalamus, and posterior regions. Using DTI data, we also show that fractional anisotropy (FA) differs between sites in almost all regions assessed. When pooling data from multiple centers, our data show that it is necessary to account not only for inter-site differences but also for hardware and software changes of the scanning equipment. Also, the strong dependence of the QA statistics on the reliable placement of the MRI phantom shows that the use of a phantom holder is recommended to reduce the variance of the QA statistics and thus to increase the probability of detecting potential scanner malfunctions.
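The between-scanner effect sizes mentioned above can be quantified with a standardised mean difference such as Cohen's d. The sketch below is illustrative only; the cohort analyses additionally adjust for subject-specific covariates (age and sex), which a plain two-group effect size does not do.

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d between two groups using a pooled standard deviation.

    Illustrative convention; the effect-size definition used in the
    actual cohort analyses may differ.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = a.size, b.size
    pooled_var = (((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                  / (na + nb - 2))
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

Applied, for example, to TIV values from the two sites, |d| around 0.8 or above would conventionally be read as a large effect, flagging a site difference that must be modelled (or harmonised away) before pooling.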
•Quality assurance (QA) protocol for large, longitudinal, multi-center MR neuroimaging studies.
•Dependence of QA statistics on MR-scanner type, hardware and software changes, and external variables (e.g., time of day, temperature).
•Consequences of phantom data variations for human MRI data.
•Dependence of QA statistics on MR phantom placement.
Purpose:
Commercial CT-based image-guided radiotherapy (IGRT) systems allow widespread management of geometric variations in patient setup and internal organ motion. This document provides consensus recommendations for quality assurance protocols that ensure patient safety and patient treatment fidelity for such systems.
Methods:
The AAPM TG-179 reviews clinical implementation and quality assurance aspects for commercially available CT-based IGRT, each with their unique capabilities and underlying physics. The systems described are kilovolt and megavolt cone-beam CT, fan-beam MVCT, and CT-on-rails. A summary of the literature describing current clinical usage is also provided.
Results:
This report proposes a generic quality assurance program for CT-based IGRT systems in an effort to provide a vendor-independent program for clinical users. Published data from long-term, repeated quality control tests form the basis of the proposed test frequencies and tolerances.
Conclusion:
A program for quality control of CT-based image-guidance systems has been produced, with focus on geometry, image quality, image dose, system operation, and safety. Agreement and clarification with respect to reports from the AAPM TG-101, TG-104, TG-142, and TG-148 have been addressed.
Radiotherapy (RT) plan quality is critical in ensuring treatment efficacy. Poor quality RT can increase the risks of treatment failure and overall mortality, and can detrimentally impact a patient's quality of life 1–4. This is especially important within RT clinical trials, where standardisation of treatment plan quality is paramount. However, widespread objective quantitative assessment of plan quality within trials is not performed routinely, leading to uncertainty about the magnitude of quality variations. Automated planning makes it possible to efficiently and objectively assess the quality of individual clinical plans (CP) through comparison with an automatically generated standardised 'baseline’ plan. Utilising this innovative auditing methodology within a trial enables full quantitative characterisation of: (i) overall plan quality, (ii) potential outliers and (iii) variation solely due to planning practice. The aim of this study was to use fully automated planning to objectively assess plan quality within the Cancer Research UK funded (A25317) multi-centre international phase III trial PATHOS.
337 patients enrolled in the PATHOS clinical trial before 1st July 2021 were included in this study. 55 cases were excluded due to incomplete data and 16 for calibrating the automated solution, leaving 264 patients for analysis. 219 (83%) and 45 (17%) cases were treated with unilateral (Unilat) and bilateral (Bilat) volumes respectively. Planning was performed in alignment with the PATHOS protocol, with prescriptions of Bilat66Gy, Bilat60Gy, Unilat66Gy or Unilat60Gy in 30 fractions and Unilat50Gy in 25 fractions. Automated treatment plans (AP) were generated in RayStation using a locally developed 'Protocol Based Automatic Iterative Optimization’ automated planning solution 5. CP were quantitatively compared to AP across all the PATHOS trial metrics (including: Parotid Dmean; SpinalCord/BrainStem PRV D1cc; and PTV D98%, D2% and D50%) together with conformality (CI) and homogeneity (HI) indices. Analysis was performed with data categorised in terms of prescription and also tumour laterality. Statistical significance was assessed via a two-sided Wilcoxon matched-paired signed-rank test.
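A minimal sketch of the metric comparison described above, assuming the common textbook definitions HI = (D2% − D98%)/D50% and CI = (prescription isodose volume)/(PTV volume); the exact index definitions used in PATHOS may differ. The paired CP-vs-AP comparison uses SciPy's two-sided Wilcoxon matched-pairs signed-rank test, as named in the abstract.

```python
import numpy as np
from scipy import stats

def homogeneity_index(d2, d98, d50):
    """HI = (D2% - D98%) / D50%; lower values mean a more homogeneous
    dose distribution (illustrative definition)."""
    return (d2 - d98) / d50

def conformity_index(v_isodose, v_ptv):
    """CI as the ratio of the prescription isodose volume to the PTV
    volume; values near 1.0 indicate a conformal plan (illustrative)."""
    return v_isodose / v_ptv

def compare_plans(cp_values, ap_values):
    """Median paired difference (CP - AP) for one metric, with a
    two-sided Wilcoxon matched-pairs signed-rank p-value."""
    cp = np.asarray(cp_values, float)
    ap = np.asarray(ap_values, float)
    stat, p = stats.wilcoxon(cp, ap, alternative="two-sided")
    return float(np.median(cp - ap)), p
```

For each trial metric (e.g. Parotid Dmean), `compare_plans` would be fed the per-patient CP and AP values, yielding the median Δ and p-value of the kind reported in the results.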
Fig. 1 and Fig. 2 present a summary of the dosimetric results, categorised in terms of prescription. When comparing CP to the AP baseline (CP-AP), statistically significant (p≤0.05) differences, Δ, in median values were observed across most key metrics. For HI, small changes across all prescriptions were detected for the primary PTV, with the largest Δ being −0.012 (p<0.001) for Unilat50Gy prescriptions. This indicated CP were marginally more homogeneous than the AP baseline. For CI, significant differences were observed across primary PTVs for three prescriptions (Unilat50Gy, Unilat60Gy and Bilat60Gy) and all secondary PTVs. Median differences were substantial, with a max Δ of +0.110 (p<0.001, Unilat66:PTV54), which represented a 10% increase in the volume treated to 54Gy for CP. When categorised in terms of tumour laterality, differences in contralateral Parotid (Parotid_CL) Dmean were small for Unilat (Δ=+2.2Gy, p<0.001) and moderate for Bilat cases (Δ=+3.5Gy, p<0.001). For ipsilateral Parotids (Parotid_IL), differences were substantial for Unilat cases (Δ=+4.8Gy, p<0.001) but nominally equivalent to Parotid_CL for Bilat (Δ=+3.1Gy, p<0.001).
At an individual patient level, AP baseline plans highlighted potential quality improvements that could have been realised for CP. For 50% of all patients, AP led to a reduction in Parotid_IL and Parotid_CL Dmean of 4.4Gy–14.7Gy and 2.5Gy–8.9Gy respectively. In terms of conformality, for 50% of all patients AP reduced CI by 0.06–0.35 and 0.08–0.28 for PTV60 and PTV54 respectively.
In terms of overall variation within the trial, Fig. 1 and Fig. 2 demonstrate that a high proportion of the variation observed in the majority of dose metrics was a direct result of plan quality. For example, a standardised AP planning method would have reduced the inter-quartile range (IQR) for Parotid_CL Dmean from 5.4Gy to 1.4Gy, for HI (PTV54) from 0.031 to 0.015 and for CI (PTV54) from 0.194 to 0.071. Parotid_IL Dmean was a key exception, with similar IQRs for both AP and CP.
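The variation comparisons above rest on the inter-quartile range of each metric across the cohort, computed separately for the CP and AP distributions. A minimal sketch, assuming the conventional 75th-minus-25th-percentile definition:

```python
import numpy as np

def iqr(values):
    """Inter-quartile range: 75th minus 25th percentile of a metric
    across patients (conventional definition)."""
    q75, q25 = np.percentile(np.asarray(values, float), [75, 25])
    return q75 - q25
```

Comparing `iqr(cp_metric)` against `iqr(ap_metric)` for each dose metric then separates variation attributable to planning practice (which a standardised AP method removes) from variation inherent to patient anatomy.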
Clinics participating in PATHOS undergo a comprehensive quality assurance process prior to patient recruitment, with additional 'on trial’ qualitative reviews performed on a small subset of patients. Furthermore, all patient plans must, where practicable, meet trial dose metric tolerances. Results of this study demonstrate that despite these procedures, which are common to many high-quality trials, meaningful variations in plan quality remain. Automated planning was found to be an effective tool in objectively assessing plan quality within a large trial. Implementation on a prospective basis could be a powerful QA tool to reduce this observed variation.
Microplastics can be present in the environment as manufactured microplastics (known as primary microplastics) or as the result of the continuous weathering of plastic litter, which yields progressively smaller plastic fragments (known as secondary microplastics). Herein, we discuss the numerous issues associated with the analysis of microplastics, and to a lesser extent nanoplastics, in environmental samples (water, sediments, and biological tissues), from their sampling and sample handling to their identification and quantification. The analytical quality control and quality assurance associated with the validation of analytical methods and the use of reference materials for the quantification of microplastics are also discussed, as well as the current challenges within this field of research and possible routes to overcome such limitations.
•Microplastics have been identified as environmental pollutants.
•The sampling, sample handling, identification and quantification of microplastics are discussed.
•The validation of analytical methods and the use of reference materials for microplastics quantification are highlighted.
•Current challenges in these areas are identified.
NASA's risk classification system dates back to an era when every new NASA space mission was a one-of-a-kind build, and the only way to obtain reliability was as a by-product of a combination of reliability analyses, extensive and stringent quality requirements, and extensive testing. Originally, there were very limited commercial capabilities to develop systems to work reliably in space, so NASA considered its own homegrown approach the only recipe for success. This approach involved very detailed and prescriptive piece-part controls and no reliance on (and to some extent a rejection of) any type of commercial practices. Often risk was considered to be the lowest when NASA had the maximum amount of control and prescription, and the highest when commercial practices were largely employed, and these principles drove risk classification in the agency. Over time, however, commercial capabilities grew, and many products became standardized and commercialized, while the agency maintained its tried-and-true approach, paying little attention to the evolution of the commercial sector. In fact, the commercial sector was developing systems with direct, proven reliability established over time, while NASA maintained its approach of ignoring the reality of the commercialized aspects of standard products, labeling them as high risk, and attempting to change them to align with the agency's piece-part control practices. A table of mission classification versus lifetime for missions launched after 2000 indicates no correlation between lifetime and classification; the few exceptions involve missions with very limited objectives and no valid purpose to continue after those objectives were met. This paper steps through some of the key historical elements in risk classification and NASA's overall approach to assurance, and presents some elements being brought forward to modernize the approach and take advantage of the growing capability in the commercial sector.