Summary: The methods and results of health research are documented in study protocols, full study reports (detailing all analyses), journal reports, and participant-level datasets. However, protocols, full study reports, and participant-level datasets are rarely available, and journal reports are available for only half of all studies and are plagued by selective reporting of methods and results. Furthermore, information provided in study protocols and reports varies in quality and is often incomplete. When full information about studies is inaccessible, billions of dollars in investment are wasted, bias is introduced, and research and care of patients are detrimentally affected. To help improve this situation at a systemic level, three main actions are warranted. First, academic institutions and funders should reward investigators who fully disseminate their research protocols, reports, and participant-level datasets. Second, standards for the content of protocols and full study reports and for data sharing practices should be rigorously developed and adopted for all types of health research. Finally, journals, funders, sponsors, research ethics committees, regulators, and legislators should endorse and enforce policies supporting study registration and wide availability of journal reports, full study reports, and participant-level datasets.
Network meta-analysis, in the context of a systematic review, is a meta-analysis in which multiple treatments (that is, three or more) are being compared using both direct comparisons of interventions within randomized controlled trials and indirect comparisons across trials based on a common comparator. To ensure validity of findings from network meta-analyses, the systematic review must be designed rigorously and conducted carefully. Aspects of designing and conducting a systematic review for network meta-analysis include defining the review question, specifying eligibility criteria, searching for and selecting studies, assessing risk of bias and quality of evidence, conducting a network meta-analysis, and interpreting and reporting findings. This commentary summarizes the methodologic challenges and research opportunities for network meta-analysis relevant to each aspect of the systematic review process, based on discussions at a network meta-analysis methodology meeting we hosted in May 2010 at the Johns Hopkins Bloomberg School of Public Health. Because this commentary reflects the discussion at that meeting, it is not intended to provide an overview of the field.
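The indirect comparisons described above can be illustrated with the standard anchored (Bucher) approach: when trials compare A versus C and B versus C, an indirect estimate of A versus B is the difference of the two direct effects, with variances summed. A minimal sketch in Python; all effect estimates below are hypothetical, not from any study discussed here:

```python
import math

def bucher_indirect(d_ac, se_ac, d_bc, se_bc):
    """Indirect estimate of A vs B from direct A-vs-C and B-vs-C effects.

    Effects are on an additive scale (e.g., mean difference or log odds
    ratio). The indirect point estimate is the difference of the two
    direct estimates; its variance is the sum of their variances.
    """
    d_ab = d_ac - d_bc
    se_ab = math.sqrt(se_ac**2 + se_bc**2)
    ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)
    return d_ab, se_ab, ci

# Hypothetical direct estimates (mean differences vs a common comparator C):
effect, se, (lo, hi) = bucher_indirect(d_ac=-4.0, se_ac=0.8, d_bc=-2.5, se_bc=0.6)
print(f"A vs B (indirect): {effect:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note the wider interval than either direct comparison: indirect evidence is less precise, which is one reason the validity checks described above matter.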
The tendency for authors to submit, and of journals to accept, manuscripts for publication based on the direction or strength of the study findings has been termed publication bias.
To assess the extent to which publication of a cohort of clinical trials is influenced by the statistical significance, perceived importance, or direction of their results.
We searched the Cochrane Methodology Register (The Cochrane Library Online Issue 2, 2007), MEDLINE (1950 to March Week 2 2007), EMBASE (1980 to Week 11 2007) and Ovid MEDLINE In-Process & Other Non-Indexed Citations (March 21 2007). We also searched the Science Citation Index (April 2007), checked reference lists of relevant articles and contacted researchers to identify additional studies.
Studies containing analyses of the association between publication and the statistical significance or direction of the results (trial findings), for a cohort of registered clinical trials.
Two authors independently extracted data. We classified findings as either positive (defined as results classified by the investigators as statistically significant (P < 0.05), or perceived as striking or important, or showing a positive direction of effect) or negative (findings that were not statistically significant (P ≥ 0.05), or perceived as unimportant, or showing a negative or null direction of effect). We extracted information on other potential risk factors for failure to publish, when these data were available.
Five studies were included. Trials with positive findings were more likely to be published than trials with negative or null findings (odds ratio 3.90; 95% confidence interval 2.68 to 5.68). This corresponds to a risk ratio of 1.78 (95% CI 1.58 to 1.95), assuming that 41% of negative trials are published (the median among the included studies, range = 11% to 85%). In absolute terms, this means that if 41% of negative trials are published, we would expect that 73% of positive trials would be published. Two studies assessed time to publication and showed that trials with positive findings tended to be published after four to five years, compared with trials with negative findings, which were published after six to eight years. Three studies found no statistically significant association between sample size and publication. One study found no significant association between funding mechanism, investigator rank, or sex and publication.
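The odds-ratio-to-risk-ratio conversion above follows a standard formula for converting an odds ratio given a baseline risk (Zhang and Yu's approximation). A short sketch reproducing the arithmetic with the numbers reported above:

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a risk ratio, given baseline risk p0
    (the probability of the outcome in the reference group)."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

# OR 3.90; assume 41% of negative trials are published (baseline risk)
rr = or_to_rr(3.90, 0.41)
p1 = rr * 0.41  # implied publication probability for positive trials
print(round(rr, 2), round(p1 * 100))  # -> 1.78 73
```

This recovers the reported risk ratio of 1.78 and the 73% expected publication rate for positive trials (the confidence interval requires the full meta-analytic data and is not reproduced here).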
Trials with positive findings are published more often, and more quickly, than trials with negative findings.
To evaluate the characteristics of the design, analysis, and reporting of crossover trials for inclusion in a meta-analysis of treatment for primary open-angle glaucoma and to provide empirical evidence to inform the development of tools to assess the validity of the results from crossover trials and reporting guidelines.
We searched MEDLINE, EMBASE, and Cochrane's CENTRAL register for randomized crossover trials for a systematic review and network meta-analysis we are conducting. Two individuals independently screened the search results for eligibility and abstracted data from each included report.
We identified 83 crossover trials eligible for inclusion. Issues affecting the risk of bias in crossover trials, such as carryover, period effects, and missing data, were often ignored. Some trials failed to account for the within-individual differences in the analysis. For a large proportion of the trials, the authors tabulated the results as if they arose from a parallel design. Precision estimates that properly accounted for the paired nature of the design were often unavailable from the study reports; consequently, including trial findings in a meta-analysis would require further manipulation and assumptions.
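The point about paired precision estimates can be illustrated with a minimal sketch (the outcome values below are hypothetical, not data from the included trials). In a crossover design, the standard error of the treatment effect should be computed from within-individual differences; treating the two periods as independent arms, as in a parallel design, ignores the within-person correlation and typically overstates the uncertainty:

```python
import math
import statistics

# Hypothetical within-individual outcomes under treatments A and B
a = [18.2, 20.1, 17.5, 19.0, 21.3, 18.8]
b = [16.0, 18.5, 15.9, 17.2, 19.8, 17.1]
n = len(a)

# Correct crossover analysis: SE from within-individual differences
diffs = [x - y for x, y in zip(a, b)]
se_paired = statistics.stdev(diffs) / math.sqrt(n)

# Incorrect "parallel" analysis: arms treated as independent groups
se_parallel = math.sqrt(statistics.variance(a) / n + statistics.variance(b) / n)

print(f"paired SE = {se_paired:.3f}, parallel-style SE = {se_parallel:.3f}")
```

When outcomes are positively correlated within individuals, as here, the paired standard error is markedly smaller; a meta-analyst given only parallel-style tabulations cannot recover it without the within-individual correlation, which is the "further manipulation and assumptions" noted above.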
The high proportion of poorly reported analyses and results has the potential to affect whether crossover data should or can be included in a meta-analysis. There is a pressing need for reporting guidelines for crossover trials.
Choice of outcomes is critical for clinical trialists and systematic reviewers. It is currently unclear how systematic reviewers choose and pre-specify outcomes for systematic reviews. Our objective was to assess the completeness of pre-specification and comparability of outcomes in all Cochrane reviews addressing four common eye conditions.
We examined protocols for all Cochrane reviews as of June 2013 that addressed glaucoma, cataract, age-related macular degeneration (AMD), and diabetic retinopathy (DR). We assessed completeness and comparability for each outcome that was named in ≥ 25% of protocols on those topics. We defined a completely-specified outcome as including information about five elements: domain, specific measurement, specific metric, method of aggregation, and time-points. For each domain, we assessed comparability in how individual elements were specified across protocols.
We identified 57 protocols addressing glaucoma (22), cataract (16), AMD (15), and DR (4). We assessed completeness and comparability for five outcome domains: quality-of-life, visual acuity, intraocular pressure, disease progression, and contrast sensitivity. Overall, these five outcome domains appeared 145 times (instances). Only 15/145 instances (10.3%) included all five elements and were thus completely specified (median = three elements per outcome). Primary outcomes were more completely specified than non-primary outcomes (median = four versus two elements). Quality-of-life was the least completely specified domain (median = one element). Because outcome pre-specification was largely incomplete, conclusive assessment of comparability in outcome usage across the protocols for each condition was not possible.
Outcome pre-specification was largely incomplete; we encourage systematic reviewers to consider all five elements. Doing so would signal the importance of complete specification to clinical trialists, on whose work systematic reviewers depend, and would indirectly encourage comparable outcome choices among reviewers addressing related research questions. Complete pre-specification could improve efficiency and reduce bias in data abstraction and analysis during a systematic review. Ultimately, more completely specified and comparable outcomes could make systematic reviews more useful to decision-makers.
Details about the type of analysis (e.g., intent to treat [ITT]) and definitions (i.e., criteria for including participants in the analysis) are necessary for interpreting a clinical trial's findings. Our objective was to compare the description of types of analyses and criteria for including participants in the publication (i.e., what was reported) with descriptions in the corresponding internal company documents (i.e., what was planned and what was done). Trials were for off-label uses of gabapentin sponsored by Pfizer and Parke-Davis, and documents were obtained through litigation.
For each trial, we compared internal company documents (protocols, statistical analysis plans, and research reports, all unpublished) with publications. One author extracted data and another verified them, with a third person verifying discordant items and a sample of the rest. Extracted data included the number of participants randomized and analyzed for efficacy, and the types of analyses for efficacy and safety and their definitions (i.e., criteria for including participants in each type of analysis). We identified 21 trials, 11 of which were published randomized controlled trials that provided the documents needed for the planned comparisons. For three trials, the research report and the publication disagreed on the number of randomized participants. Seven types of efficacy analyses were described in the protocols, statistical analysis plans, and publications, including ITT and six others. The protocol or publication described ITT using six different definitions, resulting in frequent disagreements between the two documents (i.e., different numbers of participants were included in the analyses).
Descriptions of analyses conducted did not agree between internal company documents and what was publicly reported. Internal company documents provide extensive documentation of methods planned and used, and trial findings, and should be publicly accessible. Reporting standards for randomized controlled trials should recommend transparent descriptions and definitions of analyses performed and which study participants are excluded.
Network meta-analysis compares multiple treatment options for the same condition and may be useful for developing clinical practice guidelines.
To compare treatment recommendations for first-line medical therapy for primary open-angle glaucoma (POAG) from major updates of American Academy of Ophthalmology (AAO) guidelines with the evidence available at the time, using network meta-analysis.
MEDLINE, Embase, and the Cochrane Library were searched on 11 March 2014 for randomized, controlled trials (RCTs) of glaucoma monotherapies compared with placebo, vehicle, or no treatment or other monotherapies. The AAO Web site was searched in August 2014 to identify AAO POAG guidelines.
Eligible RCTs were selected by 2 independent reviewers, and guidelines were selected by 1 person.
One person abstracted recommendations from guidelines and a second person verified. Two people independently abstracted data from included RCTs.
Guidelines were grouped together on the basis of literature search dates, and RCTs that existed as of 1991, 1995, 1999, 2004, and 2009 were analyzed. The outcome of interest was intraocular pressure (IOP) at 3 months. Only the latest guideline made a specific recommendation: prostaglandins. Network meta-analyses showed that all treatments were superior to placebo in decreasing IOP at 3 months. The mean reductions (95% credible intervals [CrIs]) for the highest-ranking class compared with placebo were as follows: 1991: β-blockers, 4.01 (CrI, 0.48 to 7.43); 1995: α2-adrenergic agonists, 5.64 (CrI, 1.73 to 9.50); 1999: prostaglandins, 5.43 (CrI, 3.38 to 7.38); 2004: prostaglandins, 4.75 (CrI, 3.11 to 6.44); 2009: prostaglandins, 4.58 (CrI, 2.94 to 6.24).
When comparisons are informed by a small number of studies, the treatment effects and rankings may not be stable.
For timely recommendations when multiple treatment options are available, guidelines developers should consider network meta-analysis.
National Eye Institute, National Institutes of Health.
Adverse events (AEs) in clinical trials may be reported in multiple sources. Different methods for reporting adverse events across trials, or across sources for a single trial, may produce inconsistent information about the adverse events associated with interventions.
We compared the methods authors used to decide which AEs to include in a particular source (i.e., "selection criteria"), including the number of different types of AEs reported (rather than the number of events). We compared sources (e.g., journal articles, clinical study reports [CSRs]) of trials for two drug-indications: gabapentin for neuropathic pain and quetiapine for bipolar depression. Electronic searches were completed in 2015. We identified selection criteria and assessed how the criteria affected AE reporting.
We identified 21 gabapentin and 7 quetiapine trials. We found 6 gabapentin CSRs and 2 quetiapine CSRs, all written by drug manufacturers. All CSRs reported all AEs without applying selection criteria; by comparison, no other source reported all AEs, and 15/68 (22%) gabapentin sources and 19/48 (40%) quetiapine sources reported using selection criteria. Selection criteria greatly affected the number of AEs reported. For example, 67/316 (21%) AEs in one quetiapine trial met the criterion "occurring in ≥2% of participants in any treatment group," while only 5/316 (2%) AEs met the criterion "occurring in ≥10% of quetiapine-treated patients and twice as frequent in the quetiapine group as the placebo group."
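The effect of selection criteria can be sketched by applying the two criteria quoted above to the same table of AE counts. All event names and counts below are hypothetical, invented for illustration only; they are not the gabapentin or quetiapine trial data:

```python
# Hypothetical AE counts per arm: {event: (n_drug, n_placebo)}
n_drug, n_placebo = 150, 150
ae_counts = {
    "somnolence": (30, 8),
    "dry mouth": (18, 5),
    "headache": (12, 11),
    "nausea": (4, 3),
    "dizziness": (22, 6),
}

# Criterion 1: occurring in >= 2% of participants in any treatment group
crit1 = [e for e, (d, p) in ae_counts.items()
         if d / n_drug >= 0.02 or p / n_placebo >= 0.02]

# Criterion 2: occurring in >= 10% of drug-treated patients AND at least
# twice as frequent in the drug group as in the placebo group
crit2 = [e for e, (d, p) in ae_counts.items()
         if d / n_drug >= 0.10 and d / n_drug >= 2 * (p / n_placebo)]

print(f"criterion 1 reports {len(crit1)} AE types; criterion 2 reports {len(crit2)}")
```

Even on this toy table, the stricter criterion reports fewer AE types from identical underlying data, which is the mechanism by which post hoc criterion choice can shade what readers see.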
Selection criteria for reporting AEs vary across trials and across sources for individual trials. If investigators do not pre-specify selection criteria, they might "cherry-pick" AEs based on results. Even if investigators pre-specify selection criteria, selective reporting will produce biased meta-analyses and clinical practice guidelines. Data about all AEs identified in clinical trials should be publicly available; however, sharing data will not solve all the problems identified in this study.