Background
Reducing the transmission of severe acute respiratory syndrome coronavirus 2 (SARS‐CoV‐2) is a global priority. Contact tracing identifies people who were recently in contact with an infected individual, in order to isolate them and reduce further transmission. Digital technology could be implemented to augment and accelerate manual contact tracing. Digital tools for contact tracing may be grouped into three areas: 1) outbreak response; 2) proximity tracing; and 3) symptom tracking. We conducted a rapid review on the effectiveness of digital solutions to contact tracing during infectious disease outbreaks.
Objectives
To assess the benefits, harms, and acceptability of personal digital contact tracing solutions for identifying contacts of an identified positive case of an infectious disease.
Search methods
An information specialist searched the literature from 1 January 2000 to 5 May 2020 in CENTRAL, MEDLINE, and Embase. Additionally, we screened the Cochrane COVID‐19 Study Register.
Selection criteria
We included randomised controlled trials (RCTs), cluster‐RCTs, quasi‐RCTs, cohort studies, cross‐sectional studies and modelling studies, in general populations. We preferentially included studies of contact tracing during infectious disease outbreaks (including COVID‐19, Ebola, tuberculosis, severe acute respiratory syndrome virus, and Middle East respiratory syndrome) as direct evidence, but considered comparative studies of contact tracing outside an outbreak as indirect evidence.
The digital solutions varied but typically included software (or firmware) for users to install on their devices or to be uploaded to devices provided by governments or third parties. Control measures included traditional or manual contact tracing, self‐reported diaries and surveys, interviews, other standard methods for determining close contacts, and other technologies compared to digital solutions (e.g. electronic medical records).
Data collection and analysis
Two review authors independently screened records and all potentially relevant full‐text publications. One review author extracted data for 50% of the included studies, another extracted data for the remaining 50%; the second review author checked all the extracted data. One review author assessed quality of included studies and a second checked the assessments. Our outcomes were identification of secondary cases and close contacts, time to complete contact tracing, acceptability and accessibility issues, privacy and safety concerns, and any other ethical issue identified. Modelling studies predict estimates of the effects of different contact tracing solutions on outcomes of interest, whereas cohort studies provide empirically measured estimates of those effects. We used GRADE‐CERQual to describe certainty of evidence from qualitative data and GRADE for modelling and cohort studies.
Main results
We identified six cohort studies reporting quantitative data and six modelling studies reporting simulations of digital solutions for contact tracing. Two cohort studies also provided qualitative data. Three cohort studies looked at contact tracing during an outbreak, whilst three emulated an outbreak in non‐outbreak settings (schools). Of the six modelling studies, four evaluated digital solutions for contact tracing in simulated COVID‐19 scenarios, while two simulated close contacts in non‐specific outbreak settings.
Modelling studies
Two modelling studies provided low‐certainty evidence of a reduction in secondary cases using digital contact tracing (measured as the average number of secondary cases per index case, i.e. the effective reproduction number, Reff). One study estimated an 18% reduction in Reff with digital contact tracing compared to self‐isolation alone, and a 35% reduction with manual contact tracing. Another found a 26% reduction in Reff for digital contact tracing compared to self‐isolation alone, and a 53% reduction for manual contact tracing compared to self‐isolation alone. However, the certainty of evidence was reduced by unclear specifications of the models, by assumptions about the effectiveness of manual contact tracing (95% to 100% of contacts assumed traced), and by assumptions about the proportion of the population who would have the app (53%).
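As a hedged illustration of what these percentages mean in absolute terms, the sketch below converts the reported reductions into Reff values under a hypothetical baseline; the baseline Reff of 1.5 under self‐isolation alone is an invented assumption for illustration, not a figure from either modelling study.

```python
# Illustrative only: the baseline Reff is assumed, not taken from the review.

def reduced_reff(baseline_reff, percent_reduction):
    """Return Reff after applying a percentage reduction to a baseline."""
    return baseline_reff * (1 - percent_reduction / 100)

baseline = 1.5  # hypothetical Reff with self-isolation alone

# First modelling study: 18% reduction (digital), 35% (manual)
digital_1 = reduced_reff(baseline, 18)
manual_1 = reduced_reff(baseline, 35)

# Second modelling study: 26% reduction (digital), 53% (manual)
digital_2 = reduced_reff(baseline, 26)
manual_2 = reduced_reff(baseline, 53)

print(digital_1, manual_1, digital_2, manual_2)
```

Under this assumed baseline, only the manual-tracing scenarios bring Reff below 1, which is why both studies frame digital tracing as a complement to, not a replacement for, other measures.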
Cohort studies
Two cohort studies provided very low‐certainty evidence of a benefit of digital over manual contact tracing. During an Ebola outbreak, contact tracers using an app found twice as many close contacts per case on average than those using paper forms. Similarly, after a pertussis outbreak in a US hospital, researchers found that radio‐frequency identification identified 45 close contacts but searches of electronic medical records found 13. The certainty of evidence was reduced by concerns about imprecision, and serious risk of bias due to the inability of contact tracing study designs to identify the true number of close contacts.
One cohort study provided very low‐certainty evidence that an app could reduce the time to complete a set of close contacts. The certainty of evidence for this outcome was affected by imprecision and serious risk of bias. Contact tracing teams reported that digital data entry and management systems were faster to use than paper systems and possibly less prone to data loss.
Two studies from lower‐ or middle‐income countries reported that contact tracing teams found digital systems simpler to use and generally preferred them over paper systems; they saved personnel time, reportedly improved accuracy with large data sets, and were easier to transport compared with paper forms. However, personnel faced increased costs and internet access problems with digital compared to paper systems.
Devices in the cohort studies appeared to protect the privacy of exposed or diagnosed users from their contacts. However, there were risks of privacy breaches from snoopers if linkage attacks occurred, particularly for wearable devices.
Authors' conclusions
The effectiveness of digital solutions is largely unproven as there are very few published data in real‐world outbreak settings. Modelling studies provide low‐certainty evidence of a reduction in secondary cases if digital contact tracing is used together with other public health measures such as self‐isolation. Cohort studies provide very low‐certainty evidence that digital contact tracing may produce more reliable counts of contacts and reduce time to complete contact tracing. Digital solutions may have equity implications for at‐risk populations with poor internet access and poor access to digital technology.
Stronger primary research on the effectiveness of contact tracing technologies is needed, including research into use of digital solutions in conjunction with manual systems, as digital solutions are unlikely to be used alone in real‐world settings. Future studies should consider access to and acceptability of digital solutions, and the resultant impact on equity. Studies should also make acceptability and uptake a primary research question, as privacy concerns can prevent uptake and effectiveness of these technologies.
Abstract
This manuscript explores the question of the seasonality of severe acute respiratory syndrome coronavirus 2 by reviewing four lines of evidence related to viral viability, transmission, ecological patterns, and observed epidemiology of coronavirus disease 2019 in the Southern Hemisphere's summer and early fall.
CLINICAL QUESTION Does treating the HIV-infected partner in a serodiscordant couple reduce the risk of HIV transmission to the uninfected partner? BOTTOM LINE Compared with serodiscordant couples without treatment, couples in which the infected partner is treated with antiretroviral therapy have a lower risk of HIV transmission.
Industry-sponsored clinical drug studies are associated with publication of outcomes that favor the sponsor, even when controlling for potential bias in the methods used. However, the influence of sponsorship bias has not been examined in preclinical animal studies. We performed a meta-analysis of preclinical statin studies to determine whether industry sponsorship is associated with increased effect sizes of efficacy outcomes and/or increased risks of bias in a cohort of published preclinical statin studies. We searched Medline (January 1966-April 2012) and identified 63 studies evaluating the effects of statins on atherosclerosis outcomes in animals. Two coders independently extracted study design criteria aimed at reducing bias, results for all relevant outcomes, sponsorship source, and investigator financial ties. The I² statistic was used to examine heterogeneity. We calculated the standardized mean difference (SMD) for each outcome and pooled data across studies to estimate the pooled average SMD using random-effects models. In a priori subgroup analyses, we assessed statin efficacy by outcome measured, sponsorship source, presence or absence of financial conflict information, use of an optimal time window for outcome assessment, accounting for all animals, inclusion criteria, blinding, and randomization. The effect of statins was significantly larger in studies sponsored by nonindustry sources (-1.99; 95% CI -2.68, -1.31) than in studies sponsored by industry (-0.73; 95% CI -1.00, -0.47) (p < 0.001). Statin efficacy did not differ by disclosure of financial conflict information, use of an optimal time window for outcome assessment, accounting for all animals, inclusion criteria, blinding, or randomization. Possible reasons for the differences between nonindustry- and industry-sponsored studies, such as selective reporting of outcomes, require further study.
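The pooling step described above can be sketched as a DerSimonian-Laird random-effects meta-analysis of SMDs. The per-study effect sizes and variances below are invented for illustration; they are not data from the study.

```python
# Sketch of random-effects pooling of standardized mean differences (SMDs)
# with the DerSimonian-Laird between-study variance estimator.
# Input data are hypothetical, not from the statin meta-analysis.
import math

def dersimonian_laird(effects, variances):
    """Pool per-study effects; return (pooled SMD, 95% CI, I-squared %)."""
    w = [1 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    # Cochran's Q and between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

smds = [-1.2, -0.8, -1.6, -0.5]    # hypothetical per-study SMDs
vars_ = [0.10, 0.08, 0.15, 0.12]   # hypothetical SMD variances
pooled, ci, i2 = dersimonian_laird(smds, vars_)
```

Subgroup comparisons like the industry versus nonindustry contrast amount to running this pooling separately within each stratum and testing whether the pooled estimates differ.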
The effect that sponsorship has on publication rates or overall effect estimates in animal studies is unclear, though methodological biases are prevalent in animal studies of statins and there may be differences in efficacy estimates between industry and non-industry sponsored studies. In the present analysis, we evaluated the impact of funding source on publication bias in animal studies estimating the effect of statins on atherosclerosis and bone outcomes.
We conducted two independent systematic reviews and meta-analyses identifying animal studies evaluating the effect of statins on reducing the risk of atherosclerosis outcomes (n = 49) and increasing the likelihood of beneficial bone outcomes (n = 45). After stratifying the included studies within each systematic review by funding source, three separate analyses were employed to assess publication bias in these meta-analyses—funnel plots, Egger's Linear Regression, and the Trim and Fill methods.
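Of the three publication-bias assessments listed above, Egger's linear regression is the most readily sketched: the standardized effect (effect divided by its standard error) is regressed on precision (the reciprocal of the standard error), and an intercept far from zero suggests funnel-plot asymmetry. The effects and standard errors below are invented for illustration.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry.
# Study effects and standard errors are hypothetical.

def egger_intercept(effects, ses):
    """Return (intercept, slope) of the Egger regression by simple OLS."""
    y = [e / s for e, s in zip(effects, ses)]  # standardized effects
    x = [1 / s for s in ses]                   # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope

# Hypothetical studies: smaller studies (larger SE) show larger effects,
# the pattern that produces an asymmetric funnel plot.
effects = [-1.4, -1.1, -0.9, -0.7, -0.5]
ses = [0.50, 0.40, 0.30, 0.20, 0.10]
intercept, slope = egger_intercept(effects, ses)
# A nonzero intercept (formally judged with a t-test on the intercept)
# is taken as evidence of small-study effects.
```

Funnel plots display the same x/y relationship graphically, and trim-and-fill goes a step further by imputing the studies presumed missing from the asymmetric side.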
We found evidence of publication bias primarily in non-industry sponsored studies: all three assessments indicated publication bias in non-industry sponsored studies, whereas in industry-sponsored studies publication bias was not evident in the funnel plots or Egger's regression tests. We also found that inadequate reporting of sponsorship in animal studies is still exceedingly common.
In meta-analyses assessing the effects of statins on atherosclerosis and bone outcomes in animal studies, we found evidence of publication bias, though small numbers of industry-sponsored studies limit the interpretation of the trim-and-fill results. This publication bias is more prominent in non-industry sponsored studies. Industry and non-industry funded researchers may have different incentives for publication. Industry may have a financial interest to publish all preclinical animal studies to maximize the success of subsequent trials in humans, whereas non-industry funded academics may prefer to publish high impact statistically significant results only. Differences in previously published effect estimates between industry- and non-industry sponsored animal studies may be partially explained by publication bias.
Background
Cervical cancer deaths are disproportionately higher in developing countries, reflecting one of the most profound health disparities existing today, and cervical cancer is ranked as the second most frequent cancer among women in Nigeria. The human papillomavirus (HPV) vaccine as a primary prevention strategy is not widely used in Nigeria. This study investigated perceived barriers to HPV vaccination in a Nigerian community, targeting health workers' perceptions.
Methods
This descriptive study captured responses from a cross-sectional, convenience sample of adult health workers within Anambra State, Nigeria. An anonymous 42-item survey with multiple validated scales was developed based on the Theory of Planned Behavior model and previous studies. The self-administered survey was distributed by research assistants at study sites within Anambra State, identified through local constituents in the regional zones of Adazi-Ani, Onitsha, and Awka. Data analyses were performed using Microsoft Excel for descriptive statistics and R software for the logistic regression, with a statistical significance level of 5%. Subgroup analysis was performed for the baseline knowledge questionnaire to determine whether there were any differences in correct responses based on demographics such as institution type, profession, age, sex, religion, and parental status.
Results
Responses were collected from 137 Nigerian health workers: 44% nurses, 14% physicians, 6% pharmacists, and 31% other health workers. The majority of respondents were female (69%), between 18 and 39 years of age (78%), from urban settings (82%), and identified as having Christian religious beliefs (97%). The most significant barriers identified were lack of awareness (39%), vaccine availability (39%), and cost (13%). When asked baseline knowledge questions regarding HPV, females were more likely than males to answer incorrectly. Significant differences were found for the statements: (1) HPV is sexually transmitted (p = 0.008) and (2) HPV is an infection that only affects women (p = 0.004).
Conclusions
Perceived barriers to HPV vaccination identified by Nigerian health workers include lack of awareness, vaccine availability/accessibility, cost, and concerns about acceptability. Ongoing efforts to subsidize vaccine costs, campaigns to increase awareness of the HPV vaccine, and interventions to improve attainability could advance administration rates in Nigeria, and ultimately reduce death rates due to cervical cancer in this population.
Cancer health disparities persist across the cancer care continuum despite decades of effort to eliminate them. Among the strategies currently used to address these disparities are multi-institution research initiatives that engage multiple stakeholders in change efforts. Central to the theory of change of such programs is the idea that collaboration—across institutions, research disciplines, and academic ranks—is necessary to improve outcomes. Despite this emphasis on collaboration, however, it is not often a focus of evaluation for these programs and others like them. In this paper we describe a method for evaluating collaboration within the Meharry-Vanderbilt-Tennessee State University Cancer Partnership using network analysis. Specifically, we used network analysis of co-authorship on academic publications to visualize the growth and patterns of scientific collaboration across partnership institutions, research disciplines, and academic ranks over time. We presented the results of the network analysis to internal and external advisory groups, creating the opportunity to discuss partnership collaboration, celebrate successes, and identify opportunities for improvement. We propose that basic network analysis of existing data along with network visualizations can foster conversation and feedback and are simple and effective ways to evaluate collaboration initiatives.
Objective
To evaluate whether the administration of hypotonic fluids compared with isotonic fluids is associated with a greater risk for hyponatremia in hospitalized children.
Study design
Informatics-enabled cohort study of all hospitalizations at Lucile Packard Children's Hospital between April 2009 and March 2011. Extraction and analysis of electronic medical record data identified normonatremic hospitalized children who received either hypotonic or isotonic intravenous maintenance fluids upon admission. The primary exposure was the administration of hypotonic maintenance fluids, and the primary outcome was the development of hyponatremia (serum sodium <135 mEq/L).
Results
A total of 1048 normonatremic children received either hypotonic (n = 674) or isotonic (n = 374) maintenance fluids upon admission. Hyponatremia developed in 260 (38.6%) children who received hypotonic fluids and 104 (27.8%) of those who received isotonic fluids (unadjusted OR 1.63; 95% CI 1.24-2.15, P < .001). After we controlled for intergroup differences and potential confounders, patients receiving hypotonic fluids remained more likely to develop hyponatremia (aOR 1.37, 95% CI 1.03-1.84). Multivariable analysis identified additional factors associated with the development of hyponatremia, including surgical admission (aOR 1.44, 95% CI 1.09-1.91), cardiac admitting diagnosis (aOR 2.08, 95% CI 1.34-3.20), and hematology/oncology admitting diagnosis (aOR 2.37, 95% CI 1.74-3.25).
Conclusions
Hyponatremia was common regardless of maintenance fluid tonicity; however, the administration of hypotonic maintenance fluids compared with isotonic fluids was associated with a greater risk of developing hospital-acquired hyponatremia. Additional clinical characteristics modified the hyponatremic effect of hypotonic fluid, and it is possible that optimal maintenance fluid therapy now requires a more individualized approach.
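The unadjusted odds ratio above follows directly from the published 2x2 counts (260 of 674 hypotonic-fluid children versus 104 of 374 isotonic-fluid children developing hyponatremia). A minimal sketch, using a standard Wald confidence interval on the log odds ratio:

```python
# Reconstructing the unadjusted OR from the 2x2 table reported in the
# abstract: hypotonic 260/674 vs isotonic 104/374 hyponatremic.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# a = hyponatremic on hypotonic, b = not; c/d likewise for isotonic
or_, lo, hi = odds_ratio_ci(260, 674 - 260, 104, 374 - 104)
```

This recovers the reported unadjusted OR of 1.63 with a 95% CI of about 1.24 to 2.15; the adjusted OR of 1.37 additionally requires the multivariable model, which cannot be reproduced from the abstract alone.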