Peer reviewed · Open access
  • Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal
    Wynants, Laure; Van Calster, Ben; Collins, Gary S; Riley, Richard D; Heinze, Georg; Schuit, Ewoud; Albu, Elena; Arshi, Banafsheh; Bellou, Vanesa; Bonten, Marc M J; Dahly, Darren L; Damen, Johanna A; Debray, Thomas P A; de Jong, Valentijn M T; De Vos, Maarten; Dhiman, Paula; Ensor, Joie; Gao, Shan; Haller, Maria C; Harhay, Michael O; Henckaerts, Liesbet; Heus, Pauline; Hoogland, Jeroen; Hudda, Mohammed; Jenniskens, Kevin; Kammer, Michael; Kreuzberger, Nina; Lohmann, Anna; Levis, Brooke; Luijken, Kim; Ma, Jie; Martin, Glen P; McLernon, David J; Navarro, Constanza L Andaur; Reitsma, Johannes B; Sergeant, Jamie C; Shi, Chunhu; Skoetz, Nicole; Smits, Luc J M; Snell, Kym I E; Sperrin, Matthew; Spijker, René; Steyerberg, Ewout W; Takada, Toshihiko; Tzoulaki, Ioanna; van Kuijk, Sander M J; van Bussel, Bas C T; van der Horst, Iwan C C; Reeve, Kelly; van Royen, Florien S; Verbakel, Jan Y; Wallisch, Christine; Wilkinson, Jack; Wolff, Robert; Hooft, Lotty; Moons, Karel G M; van Smeden, Maarten

    BMJ (Online), 04/2020, Volume: 369
    Journal Article

    Abstract
    Objective: To review and appraise the validity and usefulness of published and preprint reports of prediction models for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital or dying with the disease.
    Design: Living systematic review and critical appraisal by the covid-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.
    Data sources: PubMed and Embase through Ovid, up to 17 February 2021, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.
    Study selection: Studies that developed or validated a multivariable covid-19 related prediction model.
    Data extraction: At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).
    Results: 126 978 titles were screened, and 412 studies describing 731 new prediction models or validations were included. Of these 731, 125 were diagnostic models (including 75 based on medical imaging) and the remaining 606 were prognostic models for either identifying those at risk of covid-19 in the general population (13 models) or predicting diverse outcomes in those individuals with confirmed covid-19 (593 models). Owing to the widespread availability of diagnostic testing capacity after the summer of 2020, this living review has now focused on the prognostic models. Of these, 29 had low risk of bias, 32 had unclear risk of bias, and 545 had high risk of bias. The most common causes for high risk of bias were inadequate sample sizes (n=408, 67%) and inappropriate or incomplete evaluation of model performance (n=338, 56%). 381 models were newly developed, and 225 were external validations of existing models. The reported C indexes varied between 0.77 and 0.93 in development studies with low risk of bias, and between 0.56 and 0.78 in external validations with low risk of bias. The Qcovid models, the PRIEST score, Carr’s model, the ISARIC4C Deterioration model, and the Xie model showed adequate predictive performance in studies at low risk of bias. Details on all reviewed models are publicly available at https://www.covprecise.org/.
    Conclusion: Prediction models for covid-19 entered the academic literature to support medical decision making at unprecedented speed and in large numbers. Most published prediction model studies were poorly reported and at high risk of bias such that their reported predictive performances are probably optimistic. Models with low risk of bias should be validated before clinical implementation, preferably through collaborative efforts to also allow an investigation of the heterogeneity in their performance across various populations and settings. Methodological guidance, as provided in this paper, should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction modellers should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.
    Systematic review registration: Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.
    Readers’ note: This article is the final version of a living systematic review that has been updated over the past two years to reflect emerging evidence. This version is update 4 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity.
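
    The Results section above reports model discrimination as C indexes. As an illustrative aside only, not material from the article or this record, the sketch below computes the C index for a binary outcome as the proportion of case–control pairs in which the case receives the higher predicted risk (equivalent to the AUC); all predicted risks and outcomes in the example are hypothetical.

```python
# Minimal sketch of the concordance (C) index for a binary outcome.
# Illustrative only: the risks and outcomes below are invented, not data
# from the review.
from itertools import product

def c_index(predicted_risk, outcome):
    """Probability that a randomly chosen case with the outcome (1) gets a
    higher predicted risk than a randomly chosen case without it (0);
    ties count as 0.5. For binary outcomes this equals the AUC."""
    cases = [p for p, y in zip(predicted_risk, outcome) if y == 1]
    controls = [p for p, y in zip(predicted_risk, outcome) if y == 0]
    if not cases or not controls:
        raise ValueError("need at least one case and one control")
    concordant = sum(
        1.0 if c > k else 0.5 if c == k else 0.0
        for c, k in product(cases, controls)
    )
    return concordant / (len(cases) * len(controls))

# Hypothetical predicted risks of an outcome such as in-hospital
# deterioration, paired with observed outcomes (1 = event, 0 = no event).
risks = [0.92, 0.80, 0.65, 0.40, 0.30, 0.15, 0.10]
outcomes = [1, 1, 0, 1, 0, 0, 0]
print(f"C index: {c_index(risks, outcomes):.2f}")  # prints 0.92
```

    A C index of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination, which is the scale on which the reported ranges (0.77–0.93 in low risk of bias development studies, 0.56–0.78 in low risk of bias external validations) should be read.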