Hospital mortality, readmission and length of stay (LOS) are commonly used measures of the quality of hospital care. We aimed to disentangle the correlations between these interrelated measures and propose a new way of combining them to evaluate the quality of hospital care.
We analyzed administrative data from the Global Comparators Project from 26 hospitals on patients discharged between 2007 and 2012. We correlated standardized and risk-adjusted hospital outcomes on mortality, readmission and long LOS. We constructed a composite measure with 5 levels, based on literature review and expert advice, from survival without readmission and normal LOS (best) to mortality (worst outcome). This composite measure was analyzed using ordinal regression, to obtain a standardized outcome measure to compare hospitals.
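The 5-level composite described above can be sketched as a simple mapping from the three component outcomes to an ordinal level. The ordering of the intermediate levels shown here (readmission ranked worse than long LOS) is an illustrative assumption; the paper derived its ranking from literature review and expert advice, and the function name is hypothetical.

```python
def composite_level(died: bool, readmitted: bool, long_los: bool) -> int:
    """Map the three outcomes to a 5-level ordinal scale (1 = best, 5 = worst).

    Assumed ordering for illustration only:
      1: survival, no readmission, normal LOS (best)
      2: survival, no readmission, long LOS
      3: survival, readmission, normal LOS
      4: survival, readmission, long LOS
      5: in-hospital mortality (worst)
    """
    if died:
        return 5
    if readmitted and long_los:
        return 4
    if readmitted:
        return 3
    if long_los:
        return 2
    return 1
```

In the paper's approach, this ordinal outcome would then be modeled with ordinal (e.g., proportional-odds) regression, adjusting for case mix, to produce a standardized composite per hospital.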
Overall, we observed a 3.1% mortality rate, a 7.8% readmission rate (among survivors) and a 20.8% long LOS rate among 4,327,105 admissions. Mortality and LOS were correlated at both the patient and the hospital level. Patients in the upper LOS quartile had higher odds of mortality (odds ratio = 1.45, 95% confidence interval 1.43-1.47) than those in the lowest quartile. Hospitals with a high standardized mortality had higher proportions of long LOS (r = 0.79, p < 0.01). Readmission rates did not correlate with either mortality or long LOS rates. The interquartile range of the standardized ordinal composite outcome was 74-117. The composite outcome ranked hospitals with similar or better reliability than the individual outcomes.
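As a reminder of how an odds ratio with a Wald-type confidence interval, like the one reported above, is computed from a 2x2 table, here is a minimal sketch (the function name and example counts are illustrative, not data from the study):

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and Wald confidence interval from a 2x2 table.

    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns (odds_ratio, lower_limit, upper_limit).
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts: 20/80 deaths among long-LOS patients vs 10/90 otherwise
print(odds_ratio_ci(20, 80, 10, 90))
```

The very narrow interval reported in the paper (1.43-1.47) reflects the large sample of over four million admissions, which shrinks the standard error term.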
Correlations between different outcome measures are complex and differ between the hospital and patient levels. The proposed composite measure combines three outcomes in an ordinal fashion, giving a more comprehensive and reliable view of hospital performance than its component indicators.
Quality improvement (QI) projects often employ statistical process control (SPC) charts to monitor process or outcome measures as part of ongoing feedback, to inform successive Plan-Do-Study-Act cycles and refine the intervention (formative evaluation). SPC charts can also be used to draw inferences on the effectiveness and generalisability of improvement efforts (summative evaluation), but only if they are appropriately designed and meet specific methodological requirements for generalisability. Inadequate design decreases the validity of results, which not only reduces the chance of publication but could also result in patient harm and wasted resources if incorrect conclusions are drawn. This paper aims to bring together much of what has been written in various tutorials, to suggest a process for using SPC in QI projects. We highlight four critical decision points that are often missed, how these are inter-related and how they affect the inferences that can be drawn regarding effectiveness of the intervention: (1) the need for a stable baseline to enable drawing inferences on effectiveness; (2) choice of outcome measures to assess effectiveness, safety and intervention fidelity; (3) design features to improve the quality of QI projects; (4) choice of SPC analysis aligned with the type of outcome, and reporting on the potential influence of other interventions or secular trends. These decision points should be explicitly reported for readers to interpret and judge the results, and can be seen as supplementing the Standards for Quality Improvement Reporting Excellence guidelines. Thinking in advance about both formative and summative evaluation will inform more deliberate choices and strengthen the evidence produced by QI projects.
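To make decision point (4) concrete: for a proportion-type outcome (e.g., monthly readmission rate), the matching SPC analysis is a p-chart. A minimal sketch of the standard 3-sigma limit calculation follows; the function name is illustrative and not from the paper.

```python
import math

def p_chart_limits(pbar: float, n: int):
    """3-sigma control limits for a p-chart.

    pbar: overall (baseline) proportion of events
    n:    subgroup size (e.g., admissions per month)
    Returns (lower_limit, upper_limit), truncated to [0, 1].
    """
    se = math.sqrt(pbar * (1 - pbar) / n)
    return max(0.0, pbar - 3 * se), min(1.0, pbar + 3 * se)

# Baseline proportion of 10% with 100 admissions per month
lcl, ucl = p_chart_limits(0.10, 100)
```

A monthly point outside these limits (or a run of points on one side of the center line, per the usual run rules) would signal special-cause variation, which is only interpretable as an intervention effect if the baseline was stable, per decision point (1).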
Hospital readmission rates are increasingly used for both quality improvement and cost control. However, the validity of readmission rates as a measure of quality of hospital care is not self-evident. We aimed to give an overview of the different methodological aspects of the definition and measurement of readmission rates that need to be considered when interpreting readmission rates as a reflection of quality of care.
We conducted a systematic literature review using the bibliographic databases Embase, Medline OvidSP, Web of Science, Cochrane Central and PubMed for the period January 2001 to May 2013.
The search resulted in 102 included papers. We found that defining the context in which readmissions are used as a quality indicator is crucial. This context includes the patient group and the specific aspects of care whose quality is to be assessed. Methodological flaws such as unreliable data and insufficient case-mix correction may confound comparisons of readmission rates between hospitals. A further problem arises when the basic distinction between planned and unplanned readmissions cannot be made. Finally, the multi-faceted nature of quality of care and the correlation between readmissions and other outcomes limit the indicator's validity.
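Case-mix correction of the kind mentioned above is typically done by indirect standardization: a patient-level risk model predicts each admission's readmission probability, and the hospital's standardized ratio is observed events over the sum of those predictions. A minimal sketch, with an illustrative function name:

```python
def standardized_ratio(observed_events: int, predicted_probs: list[float]) -> float:
    """Indirectly standardized outcome ratio for one hospital.

    observed_events: number of readmissions actually observed
    predicted_probs: per-admission readmission probabilities from a
                     case-mix (risk-adjustment) model
    A ratio > 1 means more events than expected given the case mix.
    """
    expected = sum(predicted_probs)
    return observed_events / expected

# 12 observed readmissions against 10 expected from the risk model
ratio = standardized_ratio(12, [0.5] * 20)
```

The validity concerns raised in the review apply directly here: if the data feeding `predicted_probs` are unreliable, or the model omits important case-mix factors, the ratio confounds quality with patient population.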
Although readmission rates are a promising quality indicator, several methodological concerns identified in this study need to be addressed, especially when the indicator is intended for accountability or pay for performance. We recommend investing resources in accurate data registration, improved indicator description, and bundling outcome measures to provide a more complete picture of hospital care.
Avoiding low-value care has received increasing attention in many countries, as with the Choosing Wisely campaign and other initiatives to abandon care that wastes resources or delivers no benefit to patients. While an extensive literature characterises approaches to implementing evidence-based care, we have limited understanding of the process of de-implementation, such as abandoning existing low-value practices. To learn more about the differences between implementation and de-implementation, we explored the literature and analysed data from two published studies (one implementation and one de-implementation) by the same orthopaedic surgeons. We defined 'leaders' as those orthopaedic surgeons who implemented, or de-implemented, the target processes of care and 'laggards' as those who did not. Our findings suggest that leaders in implementation share some characteristics with leaders in de-implementation when compared with laggards, such as being more open to new evidence, being younger and having spent less time in clinical practice. However, leaders in de-implementation and implementation differed in some other characteristics and were not the same persons. Thus, leading in implementation or de-implementation may depend to some degree on the type of intervention rather than entirely reflecting personal characteristics. De-implementation seemed to be hampered by motivational factors such as department priorities, and by economic and political factors such as cost-benefit considerations in care delivery, whereas organisational factors were associated only with implementation. The only barrier or facilitator common to both implementation and de-implementation was outcome expectancy (ie, the perceived net benefit to patients). Future studies need to test the hypotheses generated from this study and improve our understanding of differences between the processes of implementation and de-implementation in the people who are most likely to lead (or resist) these efforts.