Objective To assess the influence of trial sample size on treatment effect estimates within meta-analyses.
Design Meta-epidemiological study.
Data sources 93 meta-analyses (735 randomised controlled trials) assessing therapeutic interventions with binary outcomes, published in the 10 leading journals of each medical subject category of the Journal Citation Reports or in the Cochrane Database of Systematic Reviews.
Data extraction Sample size, outcome data, and risk of bias extracted from each trial.
Data synthesis Trials within each meta-analysis were sorted by sample size in two ways: into quarters within each meta-analysis (from quarter 1, with the 25% smallest trials, to quarter 4, with the 25% largest trials), and into size groups across meta-analyses (ranging from <50 to ≥1000 patients). Treatment effects were compared within each meta-analysis between quarters or between size groups by average ratios of odds ratios (a ratio of odds ratios less than 1 indicates larger effects in smaller trials).
Results Treatment effect estimates were significantly larger in smaller trials, regardless of sample size. Compared with quarter 4 (the largest trials), treatment effects were, on average, 32% larger in trials in quarter 1 (the smallest trials; ratio of odds ratios 0.68, 95% confidence interval 0.57 to 0.82), 17% larger in trials in quarter 2 (0.83, 0.75 to 0.91), and 12% larger in trials in quarter 3 (0.88, 0.82 to 0.95). Similar results were obtained when comparing treatment effect estimates between size groups: compared with trials of 1000 patients or more, treatment effects were, on average, 48% larger in trials with fewer than 50 patients (0.52, 0.41 to 0.66) and 10% larger in trials with 500-999 patients (0.90, 0.82 to 1.00).
Conclusions Treatment effect estimates differed within meta-analyses solely based on trial sample size, with stronger effect estimates seen in small to moderately sized trials than in the largest trials.
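The ratio of odds ratios used in the abstract above is simple arithmetic on pairs of trial-level odds ratios. The following minimal sketch uses invented counts (not data from the study) purely to illustrate the metric:

```python
def odds_ratio(events_treat, n_treat, events_ctrl, n_ctrl):
    """Odds ratio from a 2x2 table: (a/b) / (c/d)."""
    a, b = events_treat, n_treat - events_treat
    c, d = events_ctrl, n_ctrl - events_ctrl
    return (a / b) / (c / d)

# Hypothetical small and large trials of the same intervention.
or_small = odds_ratio(10, 40, 20, 40)    # 40 patients per arm
or_large = odds_ratio(30, 500, 40, 500)  # 500 patients per arm

# Ratio of odds ratios: a value below 1 means the smaller trial
# shows a larger (more beneficial) apparent treatment effect.
ror = or_small / or_large
```

The study averaged such ratios within and then across meta-analyses; this sketch only shows the arithmetic for a single pair of trials.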
Objective To investigate the effect of the CONSORT for Abstracts guidelines, and of the different editorial policies used by five leading general medical journals to implement them, on the reporting quality of abstracts of randomised trials.
Design Interrupted time series analysis.
Sample We randomly selected up to 60 primary reports of randomised trials per journal per year from five high impact, general medical journals in 2006-09, if indexed in PubMed with an electronic abstract. We excluded reports without an electronic abstract, and any secondary trial publications or economic analyses. We classified journals in three categories: those not mentioning the guidelines in their instructions to authors (JAMA and New England Journal of Medicine), those referring to the guidelines in their instructions to authors but with no specific policy to implement them (BMJ), and those referring to the guidelines in their instructions to authors with an active policy to implement them (Annals of Internal Medicine and Lancet). Two authors extracted data independently using the CONSORT for Abstracts checklist.
Main outcome Mean number of CONSORT items reported in selected abstracts, among nine items reported in fewer than 50% of the abstracts published across the five journals in 2006.
Results We assessed 955 abstracts of reports of randomised trials. Journals with an active policy to enforce the guidelines showed an immediate increase in the mean number of items reported (increase of 1.50 items; P=0.0037). At 23 months after publication of the guidelines, the mean number of items reported per abstract for the primary outcome was 5.41 of nine items, a 53% increase over the level expected on the basis of pre-intervention trends. Neither the level nor the trend changed in journals with no active policy to enforce the guidelines (BMJ, JAMA, and New England Journal of Medicine).
Conclusion Active implementation of the CONSORT for Abstracts guidelines by journals can lead to improvements in the reporting of abstracts of randomised trials.
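An interrupted time series design of the kind described above is commonly analysed with segmented regression, which estimates an immediate level change and a trend change at the intervention point. The sketch below fits such a model to synthetic monthly data (the numbers, coefficients, and time points are invented for illustration, not the study's data or its exact model):

```python
import numpy as np

# Segmented regression: y = b0 + b1*t + b2*post + b3*(t - t0)*post
# b2 estimates the immediate level change at the intervention (t0);
# b3 estimates the change in slope (trend) after the intervention.
t0 = 24                      # hypothetical intervention month
t = np.arange(48, dtype=float)
post = (t >= t0).astype(float)

rng = np.random.default_rng(0)
y = (3.0 + 0.02 * t                      # pre-intervention level and trend
     + 1.5 * post                        # true level change
     + 0.05 * (t - t0) * post            # true trend change
     + rng.normal(0.0, 0.1, t.size))     # noise

# Ordinary least squares fit of the segmented model.
X = np.column_stack([np.ones_like(t), t, post, (t - t0) * post])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
```

The fitted `b2` and `b3` recover the simulated level and trend changes; in the study, the analogous quantities were the jump and slope change in mean CONSORT items reported after guideline publication.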
Without a complete published description of interventions, clinicians and patients cannot reliably implement interventions that are shown to be useful, and other researchers cannot replicate or build on research findings. The quality of description of interventions in publications, however, is remarkably poor. To improve the completeness of reporting, and ultimately the replicability, of interventions, an international group of experts and stakeholders developed the Template for Intervention Description and Replication (TIDieR) checklist and guide. The process involved a literature review for relevant checklists and research, a Delphi survey of an international panel of experts to guide item selection, and a face to face panel meeting. The resultant 12 item TIDieR checklist (brief name, why, what (materials), what (procedure), who provided, how, where, when and how much, tailoring, modifications, how well (planned), how well (actual)) is an extension of the CONSORT 2010 statement (item 5) and the SPIRIT 2013 statement (item 11). While the emphasis of the checklist is on trials, the guidance is intended to apply across all evaluative study designs. This paper presents the TIDieR checklist and guide, with an explanation and elaboration for each item, and examples of good reporting. The TIDieR checklist and guide should improve the reporting of interventions and make it easier for authors to structure accounts of their interventions, reviewers and editors to assess the descriptions, and readers to use the information.
Abstract
Our aim was to describe the research practices of doctoral students facing a research integrity dilemma and to assess the impact of inappropriate research environments, i.e. exposure to (a) a post-doctoral researcher who committed a detrimental research practice (DRP) in a similar situation and (b) a supervisor who did not oppose the DRP. We conducted two 2-arm, parallel-group randomized controlled trials. We created 10 vignettes describing a realistic dilemma with two alternative courses of action (good practice versus DRP). 630 PhD students were randomized through an online system to a vignette (a) with (n = 151) or without (n = 164) exposure to the post-doctoral researcher; (b) with (n = 155) or without (n = 160) exposure to the supervisor. The primary outcome was a score from −5 to +5, where positive scores indicated the choice of the DRP and negative scores indicated good practice. Overall, 37% of unexposed participants chose to commit the DRP, with important variation across vignettes (minimum 10%; maximum 66%). The mean difference (95% CI) was 0.17 (−0.65 to 0.99; p = 0.65) when exposed to the post-doctoral researcher, and 0.79 (−0.38 to 1.94; p = 0.16) when exposed to the supervisor. In conclusion, we did not find evidence of an impact of post-doctoral researchers and supervisors on student research practices.
Trial registration: NCT04263805, NCT04263506 (registration date 11 February 2020).
Blinding, or “masking,” is a crucial method for reducing bias in randomized clinical trials. In this paper, we review important methodological aspects of blinding, emphasizing terminology, reporting, bias mechanisms, empirical evidence, and the risk of unblinding. Theoretical considerations and empirical analyses support the blinding of patients, health‐care providers, and outcome assessors as to the trial intervention to which patients have been allocated. We encourage extensive pretrial testing of blinding procedures and explicit reporting of who was in the blinded condition and the methods used to ensure blinding.
Clinical Pharmacology & Therapeutics (2011) 90(5), 732–736. doi:10.1038/clpt.2011.207
To develop a checklist of items measuring the quality of reports of randomized clinical trials (RCTs) assessing nonpharmacological treatments (NPTs).
The Delphi consensus method was used to select and reduce the number of items in the checklist. A total of 154 individuals were invited to participate: epidemiologists and statisticians involved in the field of methodology of RCTs (n = 55), members of the Cochrane Collaboration (n = 41), and clinicians involved in planning NPT clinical trials (n = 58). Participants ranked on a 10-point Likert scale whether an item should be included in the checklist.
Fifty-five experts (36%) participated in the survey. They were experienced in systematic reviews (68% were involved in the Cochrane Collaboration) and in planning RCTs (76%). Three rounds of the Delphi method were conducted to achieve consensus. The final checklist contains 10 items and 5 subitems, with items related to the standardization of the intervention, care provider influence, and additional measures to minimize the potential bias from lack of blinding of participants, care providers, and outcome assessors.
This tool can be used to critically appraise the medical literature, design NPT studies, and assess the quality of trial reports included in systematic reviews.