There is broad recognition of the importance of evidence in informing clinical decisions. When information from all studies included in a systematic review ("review") does not contribute to a meta-analysis, decision-makers can be frustrated. Our objectives were to use the field of eyes and vision as a case study and examine the extent to which authors of Cochrane reviews conducted meta-analyses for their review's pre-specified main outcome domain and the reasons that some otherwise eligible studies were not incorporated into meta-analyses.
We examined all completed systematic reviews published by Cochrane Eyes and Vision, as of August 11, 2017. We extracted information about each review's outcomes and, using an algorithm, categorized one outcome as its "main" outcome. We calculated the percentage of included studies incorporated into meta-analyses for any outcome and for the main outcome. We examined reasons for non-inclusion of studies into the meta-analysis for the main outcome.
We identified 175 completed reviews, of which 125 included two or more studies. Across these 125 reviews, the median proportion of studies incorporated into at least one meta-analysis was 74% (interquartile range [IQR] 0-100%) for any outcome and 28% (IQR 0-71%) for the main outcome. For 51 reviews (41%), no meta-analysis could be conducted for the main outcome, mostly because fewer than two included studies measured the outcome (21/51 reviews) or the specific measurements for the outcome were inconsistent (16/51 reviews).
Outcome choice during systematic reviews can lead to few eligible studies included in meta-analyses. Core outcome sets and improved reporting of outcomes can help solve some of these problems.
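As a minimal sketch of the summary statistics reported above (median and interquartile range of per-review proportions), the following uses invented example values, not the study's data:

```python
# A sketch (not the review authors' code) of computing the median and
# interquartile range (IQR) of per-review proportions.
import statistics

# Hypothetical per-review percentages of included studies that were
# incorporated into at least one meta-analysis (example values only).
proportions = [0, 25, 50, 74, 80, 100, 100]

med = statistics.median(proportions)
# "inclusive" quartiles treat the data as the whole population of reviews
q1, _, q3 = statistics.quantiles(proportions, n=4, method="inclusive")
print(f"median {med}% (IQR {q1}-{q3}%)")
```

With real data, each element of `proportions` would be one review's percentage of included studies contributing to a meta-analysis.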
Suboptimal overlap in outcomes reported in clinical trials and systematic reviews compromises efforts to compare and summarize results across these studies.
To examine the most frequent outcomes used in trials and reviews of the 4 most prevalent eye diseases (age-related macular degeneration [AMD], cataract, diabetic retinopathy [DR], and glaucoma) and the overlap between outcomes in the reviews and the trials included in the reviews.
This cross-sectional study examined all Cochrane reviews that addressed AMD, cataract, DR, and glaucoma; were published as of July 20, 2016; and included at least 1 trial. It also examined the trials included in those reviews. For each disease, a pair of clinical experts independently classified all outcomes and resolved discrepancies. Outcomes (outcome domains) were then compared separately for each disease.
Proportion of review outcomes also reported in trials and vice versa.
This study included 56 reviews that comprised 414 trials. Although the median number of outcomes per trial and per review was the same (n = 5) for each disease, the trials included a greater number of outcomes overall than did the reviews, ranging from 2.9 times greater (89 vs 30 outcomes for glaucoma) to 4.9 times greater (107 vs 22 outcomes for AMD). Most review outcomes, ranging from 14 of 19 outcomes (73.7%) (for DR) to 27 of 29 outcomes (93.1%) (for cataract), were also reported in the trials. For trial outcomes, however, the proportion also named in reviews was low, ranging from 19 of 107 outcomes (17.8%) (for AMD) to 24 of 89 outcomes (27.0%) (for glaucoma). Only 1 outcome (visual acuity) was consistently reported in greater than half the trials and greater than half the reviews.
Although most review outcomes were reported in the trials, most trial outcomes were not reported in the reviews. The current analysis focused on outcome domains, which might underestimate the problem of inconsistent outcomes. Other important elements of an outcome (ie, specific measurement, specific metric, method of aggregation, and time points) might have differed even though the domains overlapped. Inconsistency in trial outcomes may impede research synthesis and indicates the need for disease-specific core outcome sets in ophthalmology.
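The overlap proportions reported above (review outcomes appearing in trials, and vice versa) amount to set intersections. A minimal sketch with hypothetical outcome names, not the study data:

```python
# Sketch of the overlap computation: what fraction of review outcomes are
# also named in trials, and vice versa. Outcome names are invented examples.

review_outcomes = {"visual acuity", "intraocular pressure",
                   "quality of life", "adverse events"}
trial_outcomes = {"visual acuity", "intraocular pressure",
                  "contrast sensitivity", "visual field",
                  "adverse events", "retinal thickness"}

shared = review_outcomes & trial_outcomes  # outcomes named in both
review_in_trials = len(shared) / len(review_outcomes)
trials_in_reviews = len(shared) / len(trial_outcomes)
print(f"{review_in_trials:.0%} of review outcomes appear in trials")
print(f"{trials_in_reviews:.0%} of trial outcomes appear in reviews")
```

The asymmetry in the example (75% vs 50%) mirrors the pattern reported above: trials name many more outcomes than reviews do, so the trial-to-review overlap is the smaller proportion.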
Clinical trials and systematic reviews of clinical trials inform healthcare decisions. There is growing concern, however, about results from clinical trials that cannot be reproduced. Reasons for nonreproducibility include that outcomes are defined in multiple ways, results can be obtained using multiple methods of analysis, and trial findings are reported in multiple sources ("multiplicity"). Multiplicity combined with selective reporting can influence dissemination of trial findings and decision-making. In particular, users of evidence might be misled by exposure to selected sources and overly optimistic representations of intervention effects. In this commentary, drawing from our experience in the Multiple Data Sources in Systematic Reviews (MUDS) study and evidence from previous research, we offer practical recommendations to enhance the reproducibility of clinical trials and systematic reviews.
Evidence-based healthcare (EBHC) principles are essential knowledge for patient and consumer ("consumer") engagement as research and research implementation stakeholders. The aim of this study was to assess whether participation in a free, self-paced online course affects confidence in explaining EBHC topics. The course comprises six modules and evaluations, which together take about 6 h to complete.
Consumers United for Evidence-based Healthcare (CUE) designed, tested and implemented a free, online course for consumers, Understanding Evidence-based Healthcare: A Foundation for Action ("Understanding EBHC"). The course is offered through the Johns Hopkins Bloomberg School of Public Health. Participants rated their confidence in explaining EBHC topics on a scale of 1 (lowest) to 5 (highest), using an online evaluation provided before accessing the course ("Before") and after ("After") completing all six course modules. We analyzed data from those who registered for the course from May 31, 2007 to December 31, 2018 (n = 15,606), and among those persons, the 11,522 who completed the "Before" evaluation and 4899 who completed the "After" evaluation. Our primary outcome was the overall mean of within-person change ("overall mean change") in self-reported confidence levels on EBHC-related topics between "Before" and "After" evaluations among course completers. Our secondary outcomes were the mean within-person change for each of the 11 topics (mean change by topic).
From May 31, 2007 to December 31, 2018, 15,606 individuals registered for the course: 11,522 completed the "Before" evaluation, and 4899 of these completed the "After" evaluation (i.e., completed the course). The overall mean change in self-reported confidence levels (ranging from 1 to 5) from the "Before" to "After" evaluation was 1.27 (95% CI, 1.24-1.30). The mean change by topic ranged from 1.00 (95% CI, 0.96-1.03) to 1.90 (95% CI, 1.87-1.94).
Those who seek to involve consumer stakeholders can offer Understanding EBHC as a step toward meaningful consumer engagement. Future research should focus on long-term impact assessment of online courses such as ours, to understand whether confidence is retained post-course and applied appropriately.
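The primary outcome above, the mean within-person change in confidence with a 95% confidence interval, can be sketched as follows with made-up ratings (not the course data) and a normal-approximation interval:

```python
# Sketch of mean within-person change and a normal-approximation 95% CI.
# Ratings are invented examples on the course's 1-5 confidence scale.
import math
import statistics

before = [2, 3, 1, 2, 4, 2, 3, 1]  # "Before" evaluation ratings
after  = [4, 4, 3, 3, 5, 4, 4, 3]  # "After" evaluation ratings

changes = [a - b for b, a in zip(before, after)]  # within-person change
mean_change = statistics.mean(changes)
se = statistics.stdev(changes) / math.sqrt(len(changes))
ci = (mean_change - 1.96 * se, mean_change + 1.96 * se)
print(f"mean change = {mean_change:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```

Pairing each participant's "Before" and "After" ratings before averaging is what makes this a within-person change rather than a simple difference of group means.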
Including results from unpublished randomized controlled trials (RCTs) in a systematic review may ameliorate the effect of publication bias in systematic review results. Unpublished RCTs are sometimes described in abstracts presented at conferences, included in trials registers, or both. Trial results may not be available in a trials register, and abstracts describing RCT results often lack study design information. Complementary information from a trials register record may be sufficient to allow reliable inclusion of an unpublished RCT only available as an abstract in a systematic review.
We identified 496 abstracts describing RCTs presented at the 2007 to 2009 Association for Research in Vision and Ophthalmology (ARVO) meetings; 154 RCTs were registered in ClinicalTrials.gov. Two persons extracted verbatim primary and non-primary outcomes reported in the abstract and ClinicalTrials.gov record. We compared each abstract outcome with all ClinicalTrials.gov outcomes and coded matches as complete, partial, or no match.
We identified 800 outcomes in 152 abstracts (95 primary outcomes in 51 abstracts and 705 non-primary outcomes in 141 abstracts). No outcomes were reported in 2 abstracts. Of 95 primary outcomes, 17 (18%) agreed completely, 53 (56%) partially, and 25 (26%) had no match with a ClinicalTrials.gov primary or non-primary outcome. Among 705 non-primary outcomes, 56 (8%) agreed completely, 205 (29%) agreed partially, and 444 (63%) had no match with a ClinicalTrials.gov primary or non-primary outcome. Among the 258 outcomes partially agreeing, we found additional information on the time when the outcome was measured more often in ClinicalTrials.gov than in the abstract (141/258 (55%) versus 55/258 (21%)). We found no association between the presence of non-matching "new" outcomes and year of registration, time to registry update, industry sponsorship, or multi-center status.
Conference abstracts may be a valuable source of information about results for outcomes of unpublished RCTs that have been registered in ClinicalTrials.gov. Complementary additional descriptive information may be present for outcomes reported in both sources. However, ARVO abstract authors also present outcomes not reported in ClinicalTrials.gov and these may represent analyses not originally planned.
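The complete/partial/no-match coding described above was done by human judgment; a simplified sketch of that kind of classification, using hypothetical outcome strings and a crude shared-word rule standing in for the reviewers' comparison, looks like this:

```python
# Sketch of coding an abstract outcome against registry outcomes as a
# complete, partial, or no match. The study used independent human judgment;
# here "partial" is crudely approximated as sharing at least one word.

def code_match(abstract_outcome, registry_outcomes):
    a = abstract_outcome.lower()
    # Complete match: identical wording (case-insensitive)
    if any(a == r.lower() for r in registry_outcomes):
        return "complete"
    # Partial match: at least one word in common with some registry outcome
    a_words = set(a.split())
    if any(a_words & set(r.lower().split()) for r in registry_outcomes):
        return "partial"
    return "no match"

# Hypothetical ClinicalTrials.gov outcomes for one trial
registry = ["Best-corrected visual acuity at 12 months", "Intraocular pressure"]
print(code_match("Intraocular pressure", registry))       # complete
print(code_match("Visual acuity at 6 months", registry))  # partial
print(code_match("Corneal thickness", registry))          # no match
```

An outcome coded "no match" under any such rule is a candidate "new" outcome of the kind the abstract above flags as possibly reflecting unplanned analyses.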
Publication bias is the tendency on the parts of investigators, reviewers, and editors to submit or accept manuscripts for publication based on the direction or strength of the study findings. Much of what has been learned about publication bias comes from the social sciences, less from the field of medicine. In medicine, three studies have provided direct evidence for this bias. Prevention of publication bias is important both from the scientific perspective (complete dissemination of knowledge) and from the perspective of those who combine results from a number of similar studies (meta-analysis). If treatment decisions are based on the published literature, then the literature must include all available data of acceptable quality. Currently, obtaining information regarding all studies undertaken in a given field is difficult, if not impossible. Registration of clinical trials, and perhaps other types of studies, is the direction in which the scientific community should move.
Background: As the population ages, older adults are seeking meaningful and impactful post-retirement roles. As a society, improving the health of people throughout longer lives is a major public health goal. This paper presents the design and rationale for an effectiveness trial of Experience Corps™, an intervention created to address both of these needs. This trial evaluates (1) whether senior volunteer roles within Experience Corps™ beneficially impact children's academic achievement and classroom behavior in public elementary schools and (2) the impact on the health of volunteers.
Methods: Dual evaluations of (1) an intention-to-treat trial randomizing eligible adults 60 and older to volunteer service in Experience Corps™, or to a control arm of usual volunteering opportunities, and (2) a comparison of eligible public elementary schools receiving Experience Corps™ to matched, eligible control schools in a 1:1 control:intervention school ratio.
Outcomes: For older adults, the primary outcome is decreased disability in mobility and Instrumental Activities of Daily Living (IADL). Secondary outcomes are decreased frailty, falls, and memory loss; slowed loss of strength, balance, walking speed, cortical plasticity, and executive function; objective performance of IADLs; and increased social and psychological engagement. For children, primary outcomes are improved reading achievement and classroom behavior in kindergarten through the 3rd grade; secondary outcomes are improvements in school climate, teacher morale and retention, and teacher perceptions of older adults.
Summary: This trial incorporates principles and practices of community-based participatory research and evaluates the dual benefit of a single intervention, versus usual opportunities, for two generations: older adults and children.