The clinical implications of SARS-CoV-2 infection are highly variable. Some people with SARS-CoV-2 infection remain asymptomatic, whilst in others the infection can cause mild to moderate COVID-19 or COVID-19 pneumonia. This can lead to some people requiring intensive care support and, in some cases, to death, especially in older adults. Symptoms such as fever, cough, or loss of smell or taste, and signs such as oxygen saturation are the first and most readily available diagnostic information. Such information could be used either to rule out COVID-19 or to select patients for further testing. This is an update of this review, the first version of which was published in July 2020.
To assess the diagnostic accuracy of signs and symptoms to determine if a person presenting in primary care or to hospital outpatient settings, such as the emergency department or dedicated COVID-19 clinics, has COVID-19.
For this review iteration we undertook electronic searches up to 15 July 2020 in the Cochrane COVID-19 Study Register and the University of Bern living search database. In addition, we checked repositories of COVID-19 publications. We did not apply any language restrictions.
Studies were eligible if they included patients with clinically suspected COVID-19, or if they recruited known cases with COVID-19 and controls without COVID-19. Studies were eligible when they recruited patients presenting to primary care or hospital outpatient settings. Studies in hospitalised patients were only included if symptoms and signs were recorded on admission or at presentation. Studies including patients who contracted SARS-CoV-2 infection while admitted to hospital were not eligible. The minimum eligible sample size of studies was 10 participants. All signs and symptoms were eligible for this review, including individual signs and symptoms or combinations. We accepted a range of reference standards.
Pairs of review authors independently selected all studies, at both title and abstract stage and full-text stage. They resolved any disagreements by discussion with a third review author. Two review authors independently extracted data and resolved disagreements by discussion with a third review author. Two review authors independently assessed risk of bias using the Quality Assessment tool for Diagnostic Accuracy Studies (QUADAS-2) checklist. We presented sensitivity and specificity in paired forest plots, in receiver operating characteristic space and in dumbbell plots. We estimated summary parameters using a bivariate random-effects meta-analysis whenever five or more primary studies were available, and whenever heterogeneity across studies was deemed acceptable.
We identified 44 studies including 26,884 participants in total. Prevalence of COVID-19 varied from 3% to 71% with a median of 21%. There were three studies from primary care settings (1824 participants), nine studies from outpatient testing centres (10,717 participants), 12 studies performed in hospital outpatient wards (5061 participants), seven studies in hospitalised patients (1048 participants), 10 studies in the emergency department (3173 participants), and three studies in which the setting was not specified (5061 participants). The studies did not clearly distinguish mild from severe COVID-19, so we present the results for all disease severities together. Fifteen studies had a high risk of bias for selection of participants because inclusion in the studies depended on the applicable testing and referral protocols, which included many of the signs and symptoms under study in this review. This may have especially influenced the sensitivity of those features used in referral protocols, such as fever and cough. Five studies only included participants with pneumonia on imaging, suggesting that this is a highly selected population. In an additional 12 studies, we were unable to assess the risk for selection bias. This makes it very difficult to judge the validity of the diagnostic accuracy of the signs and symptoms from these included studies. The applicability of the results of this review update improved in comparison with the original review. A greater proportion of studies included participants who presented to outpatient settings, which is where the majority of clinical assessments for COVID-19 take place. However, none of the studies presented data on children separately, and only one focused specifically on older adults. We found data on 84 signs and symptoms. Results were highly variable across studies. Most had very low sensitivity and high specificity.
Only cough (25 studies) and fever (7 studies) had a pooled sensitivity of at least 50% but specificities were moderate to low. Cough had a sensitivity of 67.4% (95% confidence interval (CI) 59.8% to 74.1%) and specificity of 35.0% (95% CI 28.7% to 41.9%). Fever had a sensitivity of 53.8% (95% CI 35.0% to 71.7%) and a specificity of 67.4% (95% CI 53.3% to 78.9%). The pooled positive likelihood ratio of cough was only 1.04 (95% CI 0.97 to 1.11) and that of fever 1.65 (95% CI 1.41 to 1.93). Anosmia alone (11 studies), ageusia alone (6 studies), and anosmia or ageusia (6 studies) had sensitivities below 50% but specificities over 90%. Anosmia had a pooled sensitivity of 28.0% (95% CI 17.7% to 41.3%) and a specificity of 93.4% (95% CI 88.3% to 96.4%). Ageusia had a pooled sensitivity of 24.8% (95% CI 12.4% to 43.5%) and a specificity of 91.4% (95% CI 81.3% to 96.3%). Anosmia or ageusia had a pooled sensitivity of 41.0% (95% CI 27.0% to 56.6%) and a specificity of 90.5% (95% CI 81.2% to 95.4%). The pooled positive likelihood ratios of anosmia alone and anosmia or ageusia were 4.25 (95% CI 3.17 to 5.71) and 4.31 (95% CI 3.00 to 6.18) respectively, which is just below our arbitrary definition of a 'red flag', that is, a positive likelihood ratio of at least 5. The pooled positive likelihood ratio of ageusia alone was only 2.88 (95% CI 2.02 to 4.09). Only two studies assessed combinations of different signs and symptoms, mostly combining fever and cough with other symptoms. These combinations had a specificity above 80%, but at the cost of very low sensitivity (< 30%).
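The likelihood ratios quoted above follow directly from the pooled sensitivity and specificity (LR+ = sensitivity / (1 − specificity)). As an illustrative sanity check, a short Python sketch (not from the review itself) reproduces the point estimates for cough and anosmia; small discrepancies with the quoted values (e.g. 4.24 versus 4.25 for anosmia) are expected, because the review derives pooled likelihood ratios from the bivariate model rather than from the rounded summary points.

```python
def positive_lr(sensitivity: float, specificity: float) -> float:
    """Positive likelihood ratio: factor by which a positive finding raises the odds of disease."""
    return sensitivity / (1 - specificity)

def negative_lr(sensitivity: float, specificity: float) -> float:
    """Negative likelihood ratio: factor applied to the odds when the finding is absent."""
    return (1 - sensitivity) / specificity

# Pooled estimates reported in the review
print(round(positive_lr(0.674, 0.350), 2))  # cough: ~1.04, barely informative
print(round(positive_lr(0.280, 0.934), 2))  # anosmia: ~4.24, just below the 'red flag' threshold of 5
```

A positive likelihood ratio near 1, as for cough, means a positive finding barely shifts the pre-test probability; values of 5 or more (the review's "red flag" threshold) substantially increase it.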
The majority of individual signs and symptoms included in this review appear to have very poor diagnostic accuracy, although this should be interpreted in the context of selection bias and heterogeneity between studies. Based on currently available data, neither absence nor presence of signs or symptoms are accurate enough to rule in or rule out COVID-19. The presence of anosmia or ageusia may be useful as a red flag for COVID-19. The presence of fever or cough, given their high sensitivities, may also be useful to identify people for further testing. Prospective studies in an unselected population presenting to primary care or hospital outpatient settings, examining combinations of signs and symptoms to evaluate the syndromic presentation of COVID-19, are still urgently needed. Results from such studies could inform subsequent management decisions.
Some people with SARS-CoV-2 infection remain asymptomatic, whilst in others the infection can cause mild to moderate COVID-19 disease and COVID-19 pneumonia, leading some patients to require intensive care support and, in some cases, resulting in death, especially in older adults. Symptoms such as fever or cough, and signs such as oxygen saturation or lung auscultation findings, are the first and most readily available diagnostic information. Such information could be used either to rule out COVID-19 disease or to select patients for further diagnostic testing.
To assess the diagnostic accuracy of signs and symptoms to determine if a person presenting in primary care or to hospital outpatient settings, such as the emergency department or dedicated COVID-19 clinics, has COVID-19 disease or COVID-19 pneumonia.
On 27 April 2020, we undertook electronic searches in the Cochrane COVID-19 Study Register and the University of Bern living search database, which is updated daily with published articles from PubMed and Embase and with preprints from medRxiv and bioRxiv. In addition, we checked repositories of COVID-19 publications. We did not apply any language restrictions.
Studies were eligible if they included patients with suspected COVID-19 disease, or if they recruited known cases with COVID-19 disease and controls without COVID-19. Studies were eligible when they recruited patients presenting to primary care or hospital outpatient settings. Studies including patients who contracted SARS-CoV-2 infection while admitted to hospital were not eligible. The minimum eligible sample size of studies was 10 participants. All signs and symptoms were eligible for this review, including individual signs and symptoms or combinations. We accepted a range of reference standards including reverse transcription polymerase chain reaction (RT-PCR), clinical expertise, imaging, serology tests and World Health Organization (WHO) or other definitions of COVID-19.
Pairs of review authors independently selected all studies, at both title and abstract stage and full-text stage. They resolved any disagreements by discussion with a third review author. Two review authors independently extracted data and resolved disagreements by discussion with a third review author. Two review authors independently assessed risk of bias using the QUADAS-2 checklist. Analyses were descriptive, presenting sensitivity and specificity in paired forest plots, in ROC (receiver operating characteristic) space and in dumbbell plots. We did not attempt meta-analysis due to the small number of studies, heterogeneity across studies and the high risk of bias.
We identified 16 studies including 7706 participants in total. Prevalence of COVID-19 disease varied from 5% to 38% with a median of 17%. There were no studies from primary care settings, although we did find seven studies in outpatient clinics (2172 participants), and four studies in the emergency department (1401 participants). We found data on 27 signs and symptoms, which fall into four different categories: systemic, respiratory, gastrointestinal and cardiovascular. No studies assessed combinations of different signs and symptoms and results were highly variable across studies. Most had very low sensitivity and high specificity; only six symptoms had a sensitivity of at least 50% in at least one study: cough, sore throat, fever, myalgia or arthralgia, fatigue, and headache. Of these, fever, myalgia or arthralgia, fatigue, and headache could be considered red flags (defined as having a positive likelihood ratio of at least 5) for COVID-19 as their specificity was above 90%, meaning that they substantially increase the likelihood of COVID-19 disease when present. Seven studies carried a high risk of bias for selection of participants because inclusion in the studies depended on the applicable testing and referral protocols, which included many of the signs and symptoms under study in this review. Five studies only included participants with pneumonia on imaging, suggesting that this is a highly selected population. In an additional four studies, we were unable to assess the risk for selection bias. These factors make it very difficult to determine the diagnostic properties of these signs and symptoms from the included studies. We also had concerns about the applicability of these results, since most studies included participants who were already admitted to hospital or presenting to hospital settings. This makes these findings less applicable to people presenting to primary care, who may have less severe illness and a lower prevalence of COVID-19 disease. 
None of the studies included any data on children, and only one focused specifically on older adults. We hope that future updates of this review will be able to provide more information about the diagnostic properties of signs and symptoms in different settings and age groups.
The individual signs and symptoms included in this review appear to have very poor diagnostic properties, although this should be interpreted in the context of selection bias and heterogeneity between studies. Based on currently available data, neither absence nor presence of signs or symptoms are accurate enough to rule in or rule out disease. Prospective studies in an unselected population presenting to primary care or hospital outpatient settings, examining combinations of signs and symptoms to evaluate the syndromic presentation of COVID-19 disease, are urgently needed. Results from such studies could inform subsequent management decisions such as self-isolation or selecting patients for further diagnostic testing. We also need data on potentially more specific symptoms such as loss of sense of smell. Studies in older adults are especially important.
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and the resulting COVID-19 pandemic present important diagnostic challenges. Several diagnostic strategies are available to identify or rule out current infection, identify people in need of care escalation, or to test for past infection and immune response. Point-of-care antigen and molecular tests to detect current SARS-CoV-2 infection have the potential to allow earlier detection and isolation of confirmed cases compared to laboratory-based diagnostic methods, with the aim of reducing household and community transmission.
To assess the diagnostic accuracy of point-of-care antigen and molecular-based tests to determine if a person presenting in the community or in primary or secondary care has current SARS-CoV-2 infection.
On 25 May 2020 we undertook electronic searches in the Cochrane COVID-19 Study Register and the COVID-19 Living Evidence Database from the University of Bern, which is updated daily with published articles from PubMed and Embase and with preprints from medRxiv and bioRxiv. In addition, we checked repositories of COVID-19 publications. We did not apply any language restrictions.
We included studies of people with suspected current SARS-CoV-2 infection, known to have, or not to have SARS-CoV-2 infection, or where tests were used to screen for infection. We included test accuracy studies of any design that evaluated antigen or molecular tests suitable for a point-of-care setting (minimal equipment, sample preparation, and biosafety requirements, with results available within two hours of sample collection). We included all reference standards to define the presence or absence of SARS-CoV-2 (including reverse transcription polymerase chain reaction (RT-PCR) tests and established clinical diagnostic criteria).
Two review authors independently screened studies and resolved any disagreements by discussion with a third review author. One review author independently extracted study characteristics, which were checked by a second review author. Two review authors independently extracted 2x2 contingency table data and assessed risk of bias and applicability of the studies using the QUADAS-2 tool. We present sensitivity and specificity, with 95% confidence intervals (CIs), for each test using paired forest plots. We pooled data using the bivariate hierarchical model separately for antigen and molecular-based tests, with simplifications when few studies were available. We tabulated available data by test manufacturer.
We included 22 publications reporting on a total of 18 study cohorts with 3198 unique samples, of which 1775 had confirmed SARS-CoV-2 infection. Ten studies took place in North America, two in South America, four in Europe, one in China and one was conducted internationally. We identified data for eight commercial tests (four antigen and four molecular) and one in-house antigen test. Five of the included studies were only available as preprints. We did not find any studies at low risk of bias for all quality domains and had concerns about applicability of results across all studies. We judged patient selection to be at high risk of bias in 50% of the studies because of deliberate over-sampling of samples with confirmed COVID-19 infection, and unclear in seven out of 18 studies because of poor reporting. Sixteen (89%) studies used only a single, negative RT-PCR result to confirm the absence of COVID-19 infection, risking missed infections. There was a lack of information on blinding of the index test (n = 11) and on participant exclusions from analyses (n = 10). We did not observe differences in methodological quality between antigen and molecular test evaluations.
Antigen tests: sensitivity varied considerably across studies (from 0% to 94%); the average sensitivity was 56.2% (95% CI 29.5% to 79.8%) and average specificity was 99.5% (95% CI 98.1% to 99.9%), based on 8 evaluations in 5 studies on 943 samples. Data for individual antigen tests were limited, with no more than two studies for any test.
Rapid molecular assays: sensitivity showed less variation compared to antigen tests (from 68% to 100%); average sensitivity was 95.2% (95% CI 86.7% to 98.3%) and specificity 98.9% (95% CI 97.3% to 99.5%), based on 13 evaluations in 11 studies on 2255 samples.
Predicted values based on a hypothetical cohort of 1000 people with suspected COVID-19 infection (at a prevalence of 10%) result in 105 positive test results including 10 false positives (positive predictive value 90%), and 895 negative results including 5 false negatives (negative predictive value 99%).
Individual tests: we calculated pooled results of individual tests for ID NOW (Abbott Laboratories) (5 evaluations) and Xpert Xpress (Cepheid Inc) (6 evaluations). Summary sensitivity for the Xpert Xpress assay (99.4%, 95% CI 98.0% to 99.8%) was 22.6 (95% CI 18.8 to 26.3) percentage points higher than that of ID NOW (76.8%, 95% CI 72.9% to 80.3%), whilst the specificity of Xpert Xpress (96.8%, 95% CI 90.6% to 99.0%) was marginally lower than that of ID NOW (99.6%, 95% CI 98.4% to 99.9%; a difference of -2.8% (95% CI -6.4% to 0.8%)).
Authors' conclusions: this review identifies early-stage evaluations of point-of-care tests for detecting SARS-CoV-2 infection, largely based on remnant laboratory samples. The findings currently have limited applicability, as we are uncertain whether tests will perform in the same way in clinical practice, and according to symptoms of COVID-19, duration of symptoms, or in asymptomatic people. Rapid tests have the potential to be used to inform triage of RT-PCR use, allowing earlier detection of those testing positive, but the evidence currently is not strong enough to determine how useful they are in clinical practice. Prospective and comparative evaluations of rapid tests for COVID-19 infection in clinically relevant settings are urgently needed. Studies should recruit consecutive series of eligible participants, including both those presenting for testing due to symptoms and asymptomatic people who may have come into contact with confirmed cases. Studies should clearly describe symptomatic status and document time from symptom onset or time since exposure.
Point-of-care tests must be conducted on samples according to manufacturer instructions for use and be conducted at the point of care. Any future research study report should conform to the Standards for Reporting of Diagnostic Accuracy (STARD) guideline.
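The hypothetical-cohort arithmetic in the conclusions above (1000 people at 10% prevalence, using the molecular-test summary estimates of 95.2% sensitivity and 98.9% specificity) can be sketched as a 2x2 table in Python; the function and variable names here are illustrative, not taken from the review.

```python
def predictive_values(prevalence, sensitivity, specificity, cohort=1000):
    """Build the 2x2 table for a hypothetical cohort and return (PPV, NPV)."""
    cases = cohort * prevalence
    non_cases = cohort - cases
    tp = cases * sensitivity             # true positives
    fn = cases - tp                      # false negatives (missed cases)
    fp = non_cases * (1 - specificity)   # false positives
    tn = non_cases - fp                  # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Molecular-assay summary estimates at 10% prevalence
ppv, npv = predictive_values(0.10, 0.952, 0.989)
# ppv ~0.91 and npv ~0.99, matching the ~90% / 99% figures quoted above
```

Because predictive values depend on prevalence, the same test yields very different PPVs in high- and low-prevalence settings, which is why the review reports them for a stated hypothetical cohort.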
Introduction
The Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction model Risk Of Bias ASsessment Tool (PROBAST) were both published to improve the reporting and critical appraisal of prediction model studies for diagnosis and prognosis. This paper describes the processes and methods that will be used to develop an extension to the TRIPOD statement (TRIPOD-artificial intelligence, AI) and the PROBAST (PROBAST-AI) tool for prediction model studies that applied machine learning techniques.
Methods and analysis
TRIPOD-AI and PROBAST-AI will be developed following published guidance from the EQUATOR Network, and will comprise five stages. Stage 1 will comprise two systematic reviews (across all medical fields and specifically in oncology) to examine the quality of reporting in published machine-learning-based prediction model studies. In stage 2, we will consult a diverse group of key stakeholders using a Delphi process to identify items to be considered for inclusion in TRIPOD-AI and PROBAST-AI. Stage 3 will be virtual consensus meetings to consolidate and prioritise key items to be included in TRIPOD-AI and PROBAST-AI. Stage 4 will involve developing the TRIPOD-AI checklist and the PROBAST-AI tool, and writing the accompanying explanation and elaboration papers. In the final stage, stage 5, we will disseminate TRIPOD-AI and PROBAST-AI via journals, conferences, blogs, websites (including TRIPOD, PROBAST and EQUATOR Network) and social media. TRIPOD-AI will provide researchers working on prediction model studies based on machine learning with a reporting guideline that can help them report key details that readers need to evaluate the study quality and interpret its findings, potentially reducing research waste.
We anticipate PROBAST-AI will help researchers, clinicians, systematic reviewers and policymakers critically appraise the design, conduct and analysis of machine learning based prediction model studies, with a robust standardised tool for bias evaluation.
Ethics and dissemination
Ethical approval has been granted by the Central University Research Ethics Committee, University of Oxford, on 10 December 2020 (R73034/RE001). Findings from this study will be disseminated through peer-reviewed publications.
PROSPERO registration number
CRD42019140361 and CRD42019161764.
While the opportunities of ML and AI in healthcare are promising, the growth of complex data-driven prediction models requires careful quality and applicability assessment before they are applied and disseminated in daily practice. This scoping review aimed to identify actionable guidance for those closely involved in AI-based prediction model (AIPM) development, evaluation and implementation including software engineers, data scientists, and healthcare professionals and to identify potential gaps in this guidance. We performed a scoping review of the relevant literature providing guidance or quality criteria regarding the development, evaluation, and implementation of AIPMs using a comprehensive multi-stage screening strategy. PubMed, Web of Science, and the ACM Digital Library were searched, and AI experts were consulted. Topics were extracted from the identified literature and summarized across the six phases at the core of this review: (1) data preparation, (2) AIPM development, (3) AIPM validation, (4) software development, (5) AIPM impact assessment, and (6) AIPM implementation into daily healthcare practice. From 2683 unique hits, 72 relevant guidance documents were identified. Substantial guidance was found for data preparation, AIPM development and AIPM validation (phases 1-3), while later phases clearly have received less attention (software development, impact assessment and implementation) in the scientific literature. The six phases of the AIPM development, evaluation and implementation cycle provide a framework for responsible introduction of AI-based prediction models in healthcare. Additional domain and technology specific research may be necessary and more practical experience with implementing AIPMs is needed to support further guidance.
Systematic reviews of diagnostic test accuracy (DTA) studies are fundamental to the decision making process in evidence based medicine. Although such studies are regarded as high level evidence, these reviews are not always reported completely and transparently. Suboptimal reporting of DTA systematic reviews compromises their validity and generalisability, and subsequently their value to key stakeholders. An extension of the PRISMA (preferred reporting items for systematic review and meta-analysis) statement was recently developed to improve the reporting quality of DTA systematic reviews. The PRISMA-DTA statement has 27 items, of which eight are unmodified from the original PRISMA statement. This article provides an explanation for the 19 new and modified items, along with their meaning and rationale. Examples of complete reporting are used for each item to illustrate best practices.
Validation of prediction models is highly recommended and increasingly common in the literature. A systematic review of validation studies is therefore helpful, with meta-analysis needed to summarise the predictive performance of the model being validated across different settings and populations. This article provides guidance for researchers systematically reviewing and meta-analysing the existing evidence on a specific prediction model, discusses good practice when quantitatively summarising the predictive performance of the model across studies, and provides recommendations for interpreting meta-analysis estimates of model performance. We present key steps of the meta-analysis and illustrate each step in an example review, by summarising the discrimination and calibration performance of the EuroSCORE for predicting operative mortality in patients undergoing coronary artery bypass grafting.
Diagnostic accuracy studies are, like other clinical studies, at risk of bias due to shortcomings in design and conduct, and the results of a diagnostic accuracy study may not apply to other patient groups and settings. Readers of study reports need to be informed about study design and conduct, in sufficient detail to judge the trustworthiness and applicability of the study findings. The STARD statement (Standards for Reporting of Diagnostic Accuracy Studies) was developed to improve the completeness and transparency of reports of diagnostic accuracy studies. STARD contains a list of essential items that can be used as a checklist, by authors, reviewers and other readers, to ensure that a report of a diagnostic accuracy study contains the necessary information. STARD was recently updated. All updated STARD materials, including the checklist, are available at http://www.equator-network.org/reporting-guidelines/stard. Here, we present the STARD 2015 explanation and elaboration document. Through commented examples of appropriate reporting, we clarify the rationale for each of the 30 items on the STARD 2015 checklist, and describe what is expected from authors in developing sufficiently informative study reports.
Accurate rapid diagnostic tests for SARS-CoV-2 infection would be a useful tool to help manage the COVID-19 pandemic. Testing strategies that use rapid antigen tests to detect current infection have the potential to increase access to testing, speed detection of infection, and inform clinical and public health management decisions to reduce transmission. This is the second update of this review, which was first published in 2020.
To assess the diagnostic accuracy of rapid, point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. We consider accuracy separately in symptomatic and asymptomatic population groups. Sources of heterogeneity investigated included setting and indication for testing, assay format, sample site, viral load, age, timing of test, and study design.
We searched the COVID-19 Open Access Project living evidence database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) on 08 March 2021. We included independent evaluations from national reference laboratories, FIND and the Diagnostics Global Health website. We did not apply language restrictions.
We included studies of people with either suspected SARS-CoV-2 infection, known SARS-CoV-2 infection or known absence of infection, or those who were being screened for infection. We included test accuracy studies of any design that evaluated commercially produced, rapid antigen tests. We included evaluations of single applications of a test (one test result reported per person) and evaluations of serial testing (repeated antigen testing over time). Reference standards for presence or absence of infection were any laboratory-based molecular test (primarily reverse transcription polymerase chain reaction (RT-PCR)) or pre-pandemic respiratory sample.
We used standard screening procedures with three people. Two people independently carried out quality assessment (using the QUADAS-2 tool) and extracted study results. Other study characteristics were extracted by one review author and checked by a second. We present sensitivity and specificity with 95% confidence intervals (CIs) for each test, and pooled data using the bivariate model. We investigated heterogeneity by including indicator variables in the random-effects logistic regression models. We tabulated results by test manufacturer and compliance with manufacturer instructions for use and according to symptom status.
We included 155 study cohorts (described in 166 study reports, with 24 as preprints). The main results relate to 152 evaluations of single test applications including 100,462 unique samples (16,822 with confirmed SARS-CoV-2). Studies were mainly conducted in Europe (101/152, 66%), and evaluated 49 different commercial antigen assays. Only 23 studies compared two or more brands of test. Risk of bias was high because of participant selection (40, 26%); interpretation of the index test (6, 4%); weaknesses in the reference standard for absence of infection (119, 78%); and participant flow and timing (41, 27%). Characteristics of participants (45, 30%) and index test delivery (47, 31%) differed from the way in which and in whom the test was intended to be used. Nearly all studies (91%) used a single RT-PCR result to define presence or absence of infection. The 152 studies of single test applications reported 228 evaluations of antigen tests. Estimates of sensitivity varied considerably between studies, with consistently high specificities. Average sensitivity was higher in symptomatic (73.0%, 95% CI 69.3% to 76.4%; 109 evaluations; 50,574 samples, 11,662 cases) compared to asymptomatic participants (54.7%, 95% CI 47.7% to 61.6%; 50 evaluations; 40,956 samples, 2641 cases). Average sensitivity was higher in the first week after symptom onset (80.9%, 95% CI 76.9% to 84.4%; 30 evaluations, 2408 cases) than in the second week of symptoms (53.8%, 95% CI 48.0% to 59.6%; 40 evaluations, 1119 cases). For those who were asymptomatic at the time of testing, sensitivity was higher when an epidemiological exposure to SARS-CoV-2 was suspected (64.3%, 95% CI 54.6% to 73.0%; 16 evaluations; 7677 samples, 703 cases) compared to where COVID-19 testing was reported to be widely available to anyone on presentation for testing (49.6%, 95% CI 42.1% to 57.1%; 26 evaluations; 31,904 samples, 1758 cases).
Average specificity was similarly high for symptomatic (99.1%) or asymptomatic (99.7%) participants. We observed a steady decline in summary sensitivities as measures of sample viral load decreased. Sensitivity varied between brands. When tests were used according to manufacturer instructions, average sensitivities by brand ranged from 34.3% to 91.3% in symptomatic participants (20 assays with eligible data) and from 28.6% to 77.8% for asymptomatic participants (12 assays). For symptomatic participants, summary sensitivities for seven assays were 80% or more (meeting acceptable criteria set by the World Health Organization (WHO)). The WHO acceptable performance criterion of 97% specificity was met by 17 of 20 assays when tests were used according to manufacturer instructions, 12 of which demonstrated specificities above 99%. For asymptomatic participants the sensitivities of only two assays approached but did not meet WHO acceptable performance standards in one study each; specificities for asymptomatic participants were in a similar range to those observed for symptomatic people. At 5% prevalence using summary data in symptomatic people during the first week after symptom onset, the positive predictive value (PPV) of 89% means that 1 in 10 positive results will be a false positive, and around 1 in 5 cases will be missed. At 0.5% prevalence using summary data for asymptomatic people, where testing was widely available and where epidemiological exposure to COVID-19 was suspected, resulting PPVs would be 38% to 52%, meaning that between 2 in 5 and 1 in 2 positive results will be false positives, and between 1 in 2 and 1 in 3 cases will be missed.
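The prevalence-dependent positive predictive values quoted above follow from the same 2x2 arithmetic used throughout these reviews. The sketch below reproduces the asymptomatic figures at 0.5% prevalence; note that the specificity inputs (99.7% and 99.6%) are assumptions chosen near the review's summary estimates, since the exact per-group values are not restated here.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value at a given pre-test prevalence."""
    tp = prevalence * sensitivity           # expected true-positive fraction
    fp = (1 - prevalence) * (1 - specificity)  # expected false-positive fraction
    return tp / (tp + fp)

# 0.5% prevalence, asymptomatic testing (specificities are assumed values)
print(round(ppv(0.005, 0.643, 0.997), 2))  # suspected exposure: ~0.52
print(round(ppv(0.005, 0.496, 0.996), 2))  # testing widely available: ~0.38
```

This illustrates why, at very low prevalence, roughly half or more of positive antigen test results can be false positives even when specificity exceeds 99%.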
Antigen tests vary in sensitivity. In people with signs and symptoms of COVID-19, sensitivities are highest in the first week of illness, when viral loads are higher. Assays that meet appropriate performance standards, such as those set by WHO, could replace laboratory-based RT-PCR when immediate decisions about patient care must be made, or where RT-PCR cannot be delivered in a timely manner; however, they are more suitable for use as triage for RT-PCR testing. The variable sensitivity of antigen tests means that people who test negative may still be infected. Many commercially available rapid antigen tests have not been evaluated in independent validation studies. Evidence for testing in asymptomatic cohorts has increased; however, sensitivity is lower and there is a paucity of evidence for testing in different settings. Questions remain about the use of antigen test-based repeat testing strategies. Further research is needed to evaluate the effectiveness of screening programmes at reducing transmission of infection, whether through mass screening or targeted approaches including schools, healthcare settings and traveller screening.