NASA's risk classification system dates back to an era when every new NASA space mission was a one-of-a-kind build, and the only way to obtain reliability was as a by-product of a combination of reliability analyses, extensive and stringent quality requirements, and extensive testing. Originally, commercial capabilities to develop systems that work reliably in space were very limited, so NASA considered its own homegrown approach the only recipe for success. This approach involved very detailed and prescriptive piece-part controls and no reliance on (and to some extent a rejection of) any type of commercial practice. Risk was often considered lowest when NASA had the maximum amount of control and prescription, and highest when commercial practices were largely employed, and these principles drove risk classification in the agency. Over time, however, commercial capabilities grew and many products became standardized and commercialized, while the agency maintained its tried-and-true approach and paid little attention to the evolution of the commercial sector. In fact, the commercial sector was developing systems with direct, proven reliability established over time, while NASA continued to ignore the reality of these commercialized, standard products, label them as high risk, and attempt to change them to align with the agency's piece-part control practices. A table of mission classification versus lifetime for missions launched after 2000 indicates no correlation between lifetime and classification, with the few exceptions involving missions that have very limited objectives and no valid purpose to continue after those objectives were met. This paper steps through some of the key historical elements in risk classification and NASA's overall approach to assurance, and presents some elements being brought forward to modernize the approach and take advantage of the growing capability in the commercial sector.
Defect models that are trained on class-imbalanced datasets (i.e., datasets in which the proportions of defective and clean modules are not equally represented) are highly susceptible to producing inaccurate prediction models. Prior research compares the impact of class rebalancing techniques on the performance of defect models but arrives at contradictory conclusions due to different choices of datasets, classification techniques, and performance measures. Such contradictory conclusions make it hard to derive practical guidelines for whether class rebalancing techniques should be applied in the context of defect models. In this paper, we investigate the impact of class rebalancing techniques on the performance measures and interpretation of defect models. We also investigate the experimental settings in which class rebalancing techniques are beneficial for defect models. Through a case study of 101 datasets that span proprietary and open-source systems, we conclude that the impact of class rebalancing techniques on the performance of defect prediction models depends on the performance measure and classification technique used. We observe that the optimized SMOTE technique and the under-sampling technique are beneficial when quality assurance teams wish to increase AUC and Recall, respectively, but they should be avoided when deriving knowledge and understanding from defect models.
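A minimal sketch of the experimental setup this abstract describes, rebalancing only the training data with SMOTE or random under-sampling and then comparing AUC and Recall, is shown below. The dataset file, column names, and the random-forest classifier are illustrative assumptions, not the paper's exact configuration; the imbalanced-learn and scikit-learn libraries are used.

```python
# Hypothetical illustration of class rebalancing for defect prediction.
# Dataset columns and classifier are assumptions, not the study's setup.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, recall_score
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

data = pd.read_csv("defect_dataset.csv")          # hypothetical file with a binary label
X, y = data.drop(columns=["defective"]), data["defective"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

for name, sampler in [("none", None),
                      ("SMOTE", SMOTE(random_state=0)),
                      ("under-sampling", RandomUnderSampler(random_state=0))]:
    if sampler is None:
        X_res, y_res = X_train, y_train
    else:
        # Rebalance the training split only; the test set keeps its original distribution.
        X_res, y_res = sampler.fit_resample(X_train, y_train)
    clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
    rec = recall_score(y_test, clf.predict(X_test))
    print(f"{name}: AUC = {auc:.3f}, Recall = {rec:.3f}")
```

Rebalancing only the training split, so that the evaluation data keep their natural class distribution, is the usual precaution when reporting measures such as AUC and Recall.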
The integration of adaptive radiation therapy (ART), or modifying the treatment plan during the treatment course, is becoming more widely available in clinical practice. ART offers strong potential for minimizing treatment-related toxicity while escalating or de-escalating target doses based on the dose to organs at risk. Yet ART workflows add complexity to the radiation therapy planning and delivery process that may introduce additional uncertainties. This work sought to review presently available ART workflows and technological considerations such as image quality, deformable image registration, and dose accumulation. Quality assurance considerations for ART components and minimum recommendations are described. Personnel and workflow efficiency recommendations are provided, as is a summary of currently available clinical evidence supporting the implementation of ART. Finally, to guide future clinical trial protocols, an example ART physician directive and a physics template following standard NRG Oncology protocol are provided.
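One of the technological considerations named above, dose accumulation, generally means mapping each fraction's delivered dose onto a reference anatomy through the displacement field produced by deformable image registration and summing the warped doses. The sketch below illustrates only that summation step on synthetic arrays with identity displacement fields; it is not a clinical implementation and assumes the displacement fields have already been computed and validated.

```python
# Minimal dose-accumulation sketch: warp each fraction dose onto the
# reference grid with a displacement vector field (DVF), then sum.
# Synthetic data; a clinical workflow would obtain and validate the DVFs
# from deformable image registration before accumulating.
import numpy as np
from scipy.ndimage import map_coordinates

shape = (32, 32, 32)                                   # toy dose grid
grid = np.indices(shape).astype(float)                 # (3, z, y, x) reference voxel coords
reference_total = np.zeros(shape)

fraction_doses = [np.random.rand(*shape) for _ in range(5)]   # per-fraction dose grids
dvfs = [np.zeros((3, *shape)) for _ in range(5)]              # identity DVFs in this toy case

for dose, dvf in zip(fraction_doses, dvfs):
    # Sample the fraction dose at (reference voxel + displacement), linear interpolation.
    warped = map_coordinates(dose, grid + dvf, order=1, mode="nearest")
    reference_total += warped

print("accumulated maximum dose (arbitrary units):", reference_total.max())
```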
Non-destructive testing (NDT) methods can be particularly valuable in assessing concrete quality at early ages, as they are associated with reduced testing time and cost. A national study focusing on the potential use of NDT in quality assurance (QA) of concrete has recommended the adoption and/or use of such testing methods when they exhibit a low level of testing variability. Thus, the objective of this study was to build on that recommendation and assess the response of specific well-developed and mature NDT methods, in relation to their testing variability, for detecting production defects such as honeycombing and segregation. Recognizing the extensive knowledge and experience in assessing concrete with such methods over the years, the selected NDT methods were: ultrasonic pulse velocity (UPV); resonant frequency analysis (RFA); and the rebound hammer. Each of these NDT methods could be used for a specific assessment within QA, as identified later in the manuscript. The results indicated that UPV is indeed able to identify the presence of such defects with acceptable accuracy and repeatability. RFA also provided acceptable testing variability and thus can be used as a complementary assessment to UPV in both lab- and field-cured samples. The rebound hammer, as expected, was characterized by high testing variability, and thus its use could be limited to a quick, initial forensic assessment. Overall, the use of these NDT methods in QA will provide the opportunity to test a larger portion of concrete without a significant increase in QA cost and testing time.
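The testing variability discussed above is commonly summarized as the coefficient of variation of repeated readings. The sketch below illustrates this for UPV with invented transit-time readings over a known path length; the numbers are illustrative, not study data.

```python
# Illustrative UPV calculation and testing-variability check.
# Path length and transit times are made-up numbers, not study data.
import numpy as np

path_length_m = 0.150                                            # 150 mm specimen
transit_times_us = np.array([34.2, 34.6, 33.9, 35.1, 34.4])      # repeated readings, microseconds

velocities = path_length_m / (transit_times_us * 1e-6)           # pulse velocities, m/s
cov = velocities.std(ddof=1) / velocities.mean() * 100           # coefficient of variation, %

print(f"mean UPV: {velocities.mean():.0f} m/s, CoV: {cov:.1f}%")
# A local drop in UPV relative to sound concrete, together with a low CoV of
# the method itself, is what flags defects such as honeycombing or segregation.
```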
Objective:
Outcome research has documented worsening among a minority of the patient population (5% to 10%). In this study, we conducted a meta-analytic and mega-analytic review of a psychotherapy quality assurance system intended to enhance outcomes in patients at risk of treatment failure.
Method:
Original data from six major studies conducted at a large university counseling center and a hospital outpatient setting (N = 6,151, mean age = 23.3 years, female = 63.2%, Caucasian = 85%) were reanalyzed to examine the effects of progress feedback on patient outcome. In this quality assurance system, the Outcome Questionnaire-45 was routinely administered to patients to monitor their therapeutic progress and was utilized as part of an early alert system to identify patients at risk of treatment failure. Patient progress feedback based on this alert system was provided to clinicians so that they could intervene before treatment failure occurred. Meta-analytic and mega-analytic approaches were applied in intent-to-treat and efficacy analyses of the effects of feedback interventions.
Results:
Three forms of feedback interventions, each an integral element of this quality assurance system, were effective in enhancing treatment outcome, especially for signal-alarm patients. Two of the three feedback interventions were also effective in preventing treatment failure (clinical support tools and the provision of patient progress feedback to therapists).
Conclusions:
The current state of evidence appears to support the efficacy and effectiveness of feedback interventions in enhancing treatment outcome.
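As a minimal illustration of the meta-analytic step described in the Method section above, the sketch below pools invented per-study standardized effect sizes with fixed-effect, inverse-variance weighting. The actual review also reanalyzed pooled raw data (mega-analysis), and the numbers here are not the study's results.

```python
# Toy fixed-effect (inverse-variance) pooling of per-study effect sizes.
# Effect sizes d and variances v are invented for illustration only.
import numpy as np

d = np.array([0.28, 0.41, 0.35, 0.22, 0.50, 0.31])        # per-study standardized differences
v = np.array([0.010, 0.015, 0.012, 0.020, 0.018, 0.011])  # their sampling variances

w = 1.0 / v                                   # inverse-variance weights
d_pooled = np.sum(w * d) / np.sum(w)          # weighted mean effect
se_pooled = np.sqrt(1.0 / np.sum(w))          # standard error of the pooled effect
ci = (d_pooled - 1.96 * se_pooled, d_pooled + 1.96 * se_pooled)

print(f"pooled d = {d_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```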
For years I have promoted certification as a major contributor to higher quality within engineering organizations, and I still maintain that it is essential for ensuring an unbiased view of the knowledge and experience of all team members. What about overall lab quality, though? How can we assure overall lab quality and confirm that proper processes are being applied and followed? The simple answer is regular audits.
The use of radiochromic film (RCF) dosimetry in radiation therapy is extensive due to its high level of achievable accuracy over a wide range of dose values and its suitability under a variety of measurement conditions. However, since the publication of the 1998 AAPM Task Group 55 report (Report No. 63) on RCF dosimetry, the chemistry, composition, and readout systems for RCFs have evolved steadily. There are several challenges in using the new RCFs and readout systems, and in validating the results, depending on the application. Accurate RCF dosimetry requires an understanding of RCF selection, handling and calibration methods, calibration curves, dose conversion methods, and correction methodologies, as well as the selection, operation, and quality assurance (QA) programs of the readout systems. Acquiring this level of knowledge is not straightforward, even for some experienced users. This Task Group report addresses these issues and provides a basic understanding of available RCF models, dosimetric characteristics and properties, advantages and limitations, configurations, and overall elemental compositions of the RCFs that have changed over the past 20 years. In addition, this report provides specific guidelines for data processing and analysis schemes and correction methodologies for clinical applications in radiation therapy.
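As an illustration of the calibration-curve step mentioned above, the sketch below fits invented net-optical-density readings to known delivered doses using a simple empirical form (dose = a·netOD + b·netOD^n) that is commonly used for RCF calibration. The functional form and data points are assumptions for illustration, not the report's prescribed protocol.

```python
# Illustrative radiochromic-film calibration: fit dose vs. net optical density.
# The functional form and data points are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

net_od = np.array([0.05, 0.10, 0.18, 0.27, 0.35, 0.46])   # measured net optical densities
dose_gy = np.array([0.5, 1.0, 2.0, 3.5, 5.0, 7.5])        # known delivered doses (Gy)

def calib(net_od, a, b, n):
    # Common empirical RCF calibration form: dose = a*netOD + b*netOD^n
    return a * net_od + b * net_od ** n

params, _ = curve_fit(calib, net_od, dose_gy, p0=[10.0, 30.0, 2.5])
a, b, n = params
print(f"fit: dose = {a:.2f}*netOD + {b:.2f}*netOD^{n:.2f}")

# Convert a new film reading to dose with the fitted curve.
print(f"estimated dose for netOD = 0.22: {calib(0.22, a, b, n):.2f} Gy")
```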
Background Full implementation of safety checklists in surgery has been linked to improved outcomes and team effectiveness; however, reliable and standardized tools are required for assessing the quality of their use, which is likely to moderate their impact. Study Design This was a multicenter prospective study. A standardized observational instrument, the “Checklist Usability Tool” (CUT), was developed to record precise characteristics of the use of the WHO's surgical safety checklist (SSC) at “time-out” and “sign-out” in a representative sample of 5 English hospitals. The CUT was used in real time by trained assessors across general surgery, urology, and orthopaedic cases, including elective and emergency procedures. Results We conducted 565 and 309 observations of the time-out and sign-out, respectively. On average, two-thirds of the items were checked, team members were absent in more than 40% of cases, and teams failed to pause or focus on the checks in more than 70% of cases. Information sharing could be improved across the entire operating room (OR) team. Sign-out was not completed in 39% of cases, largely due to uncertainty about when to conduct it. Large variation in checklist use existed between hospitals, but not between surgical specialties or between elective and emergency procedures. Surgical safety checklist performance was better when surgeons led the checks and when all team members were present and paused. Conclusions We found large variation in WHO checklist use in a representative sample of English ORs. Measures sensitive to the quality of checklist practice, such as the CUT, will help identify areas for improvement in implementation and enable provision of comprehensive feedback to OR teams.