Blinding of patients in clinical trials is a key methodological safeguard, but the degree of bias that nonblinded patients introduce into estimated treatment effects is unknown.
Systematic review of randomized clinical trials with one sub-study (i.e. experimental vs control) involving blinded patients and another, otherwise identical, sub-study involving nonblinded patients. Within each trial, we compared the difference in effect sizes (i.e. standardized mean differences) between the sub-studies. A difference < 0 indicates that nonblinded patients generated a more optimistic effect estimate. We pooled the differences with random-effects inverse variance meta-analysis, and explored reasons for heterogeneity.
Our main analysis included 12 trials (3869 patients). The average difference in effect size for patient-reported outcomes was -0.56 (95% confidence interval -0.71 to -0.41; I² = 60%, P = 0.004), i.e. nonblinded patients exaggerated the effect size by an average of 0.56 standard deviation, but with considerable variation. Two of the 12 trials also used observer-reported outcomes, showing no indication of exaggerated effects due to lack of patient blinding. The difference in effect size was larger in the 10 acupuncture trials, -0.63 (-0.77 to -0.49), than in the two non-acupuncture trials, -0.17 (-0.41 to 0.07). Lack of patient blinding also increased attrition and use of co-interventions: ratio of control group attrition risk 1.79 (1.18 to 2.70), and ratio of control group co-intervention risk 1.55 (0.99 to 2.43).
This study provides empirical evidence of pronounced bias due to lack of patient blinding in complementary/alternative randomized clinical trials with patient-reported outcomes.
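The pooling step described above — a random-effects inverse-variance meta-analysis of per-trial differences in standardized mean differences — can be sketched with the classical DerSimonian-Laird estimator. The numbers below are made-up illustrations, not the review's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects inverse-variance pooling (DerSimonian-Laird).

    effects:   per-trial differences in standardized mean differences
    variances: their within-trial variances
    Returns (pooled estimate, 95% CI, tau^2).
    """
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-trial variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# Illustrative per-trial differences: negative values mean nonblinded
# patients reported larger treatment effects than blinded patients.
diffs = [-0.7, -0.4, -0.6, -0.3]
vars_ = [0.04, 0.05, 0.03, 0.06]
est, ci, tau2 = dersimonian_laird(diffs, vars_)
```

When the between-trial variance estimate τ² is zero, the result coincides with the fixed-effect inverse-variance average; larger τ² widens the confidence interval to reflect heterogeneity.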
In 2020 we published MetaBLIND, a large meta-epidemiological study on the impact of masking on effect estimates in randomized clinical trials (RCTs). While masking is an established methodological practice in RCTs, it is not clear to what extent the results of non-masked RCTs are biased. Surprisingly, we found no evidence of an impact of masking on effect estimates, on average. Michiel Tack commented on the MetaBLIND study, and here we respond. The issues he raised are examples of standard themes in the interpretation of meta-epidemiological studies, which we have discussed at some length elsewhere, and did not warrant a change of our conclusion. We maintain that the MetaBLIND results do not provide a sufficient basis for recommending abandoning masking as a methodological safeguard.
The minimum clinically important difference (MCID) is used to interpret the relevance of treatment effects, e.g., when developing clinical guidelines, evaluating trial results, or planning sample sizes. There is currently no agreement on an appropriate MCID in chronic pain, and little is known about which contextual factors cause variation.
This is a systematic review. We searched PubMed, EMBASE, and Cochrane Library. Eligible studies determined MCID for chronic pain based on a one-dimensional pain scale, a patient-reported transition scale of perceived improvement, and either a mean change analysis (mean difference in pain among minimally improved patients) or a threshold analysis (pain reduction associated with best sensitivity and specificity for identifying minimally improved patients). Main results were descriptively summarized owing to considerable heterogeneity, which was quantified using meta-analyses and explored using subgroup analyses and meta-regression.
We included 66 studies (31,254 patients). Median absolute MCID was 23 mm on a 0–100 mm scale (interquartile range [IQR] 12–39) and median relative MCID was 34% (IQR 22–45) among studies using the mean change approach. In both cases, heterogeneity was very high: absolute MCID I² = 99% and relative MCID I² = 96%. High variation was also seen among studies using the threshold approach: median absolute MCID was 20 mm (IQR 15–30) and relative MCID was 32% (IQR 15–41). A total of 15 clinical and methodological factors were assessed as possible causes of variation in MCID. Absolute MCID was strongly associated with baseline pain, explaining approximately two-thirds of the variation, and to a lesser degree with the operational definition of minimum pain relief and clinical condition.
MCIDs for chronic pain relief vary considerably. Baseline pain is strongly associated with absolute, but not relative, measures. To a much lesser degree, MCID is also influenced by the operational definition of relevant pain relief and possibly by clinical condition. Explicit and conscientious reflection on the choice of an MCID is required when classifying effect sizes as clinically important or trivial.
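The I² statistics quoted above express the share of total variation across studies attributable to heterogeneity rather than chance. Given per-study estimates and variances, I² follows from Cochran's Q; a minimal sketch with illustrative MCID values (not the review's data):

```python
def i_squared(estimates, variances):
    """I^2 heterogeneity statistic derived from Cochran's Q.

    Returns the percentage of total variation across studies that is
    due to between-study heterogeneity rather than sampling error.
    """
    w = [1 / v for v in variances]                       # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, estimates))
    df = len(estimates) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

# Illustrative absolute MCIDs (mm on a 0-100 mm scale) and their variances.
mcids = [12.0, 23.0, 35.0, 20.0, 39.0]
vars_ = [1.0, 1.5, 2.0, 1.2, 2.5]
print(round(i_squared(mcids, vars_)))  # → 99 (very high heterogeneity)
```

Values near 0% suggest the studies are estimating a common quantity; values above roughly 75% — as in the review — indicate that a single pooled number would obscure real differences between settings.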
To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected) and what they found (such as characteristics of contributing studies and results of meta-analyses). ...technological advances have enabled the use of natural language processing and machine learning to identify relevant evidence 22,23,24, methods have been proposed to synthesise and present findings when meta-analysis is not possible or appropriate 25,26,27, and new methods have been developed to assess the risk of bias in results of included studies 28,29. ...the publishing landscape has transformed, with multiple avenues now available for registering and disseminating systematic review protocols 33,34, disseminating reports of systematic reviews, and sharing data and materials, such as preprint servers and publicly accessible repositories. ...extensions to the PRISMA 2009 statement have been developed to guide reporting of network meta-analyses 49, meta-analyses of individual participant data 50, systematic reviews of harms 51, systematic reviews of diagnostic test accuracy studies 52, and scoping reviews 53; for these types of reviews we recommend authors report their review in accordance with the recommendations in PRISMA 2020 along with the guidance specific to the extension.
The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, ...what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.
The methods and results of systematic reviews should be reported in sufficient detail to allow users to assess the trustworthiness and applicability of the review findings. The Preferred Reporting ...Items for Systematic reviews and Meta-Analyses (PRISMA) statement was developed to facilitate transparent and complete reporting of systematic reviews and has been updated (to PRISMA 2020) to reflect recent advances in systematic review methodology and terminology. Here, we present the explanation and elaboration paper for PRISMA 2020, where we explain why reporting of each item is recommended, present bullet points that detail the reporting recommendations, and present examples from published reviews. We hope that changes to the content and structure of PRISMA 2020 will facilitate uptake of the guideline and lead to more transparent, complete, and accurate reporting of systematic reviews.
To synthesise evidence on the average bias and heterogeneity associated with reported methodological features of randomized trials.
Systematic review of meta-epidemiological studies.
We retrieved eligible studies included in a recent AHRQ-EPC review on this topic (latest search September 2012), and searched Ovid MEDLINE and Ovid EMBASE for studies indexed from January 2012 to May 2015. Data were extracted by one author and verified by another. We combined estimates of average bias (e.g. ratio of odds ratios (ROR) or difference in standardised mean differences (dSMD)) in meta-analyses using the random-effects model. Analyses were stratified by type of outcome ("mortality" versus "other objective" versus "subjective"). Direction of effect was standardised so that ROR < 1 and dSMD < 0 denote a larger intervention effect estimate in trials with an inadequate or unclear (versus adequate) characteristic.
We included 24 studies. The available evidence suggests that intervention effect estimates may be exaggerated in trials with inadequate/unclear (versus adequate) sequence generation (ROR 0.93, 95% CI 0.86 to 0.99; 7 studies) and allocation concealment (ROR 0.90, 95% CI 0.84 to 0.97; 7 studies). For these characteristics, the average bias appeared to be larger in trials of subjective outcomes compared with other objective outcomes. Also, intervention effects for subjective outcomes appear to be exaggerated in trials with lack of/unclear blinding of participants (versus blinding) (dSMD -0.37, 95% CI -0.77 to 0.04; 2 studies), lack of/unclear blinding of outcome assessors (ROR 0.64, 95% CI 0.43 to 0.96; 1 study) and lack of/unclear double blinding (ROR 0.77, 95% CI 0.61 to 0.93; 1 study). The influence of other characteristics (e.g. unblinded trial personnel, attrition) is unclear.
Certain characteristics of randomized trials may exaggerate intervention effect estimates. The average bias appears to be greatest in trials of subjective outcomes. More research on several characteristics, particularly attrition and selective reporting, is needed.
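With the direction of effect standardised as above, an ROR below 1 means trials with the inadequate or unclear characteristic yielded larger apparent benefits; a common informal reading converts it to a percentage exaggeration of the odds ratio. A small sketch using the point estimates reported above:

```python
def exaggeration_percent(ror):
    """Approximate percent by which the intervention odds ratio is
    exaggerated in trials with the inadequate/unclear characteristic,
    under the convention ROR < 1 = larger apparent benefit."""
    return (1 - ror) * 100

# Point estimates of average bias from the main analyses above.
rors = [("inadequate/unclear sequence generation", 0.93),
        ("inadequate/unclear allocation concealment", 0.90),
        ("lack of/unclear blinding of outcome assessors", 0.64)]
for label, ror in rors:
    print(f"{label}: ~{exaggeration_percent(ror):.0f}% exaggeration")
```

So, for example, an ROR of 0.90 for allocation concealment corresponds to intervention odds ratios exaggerated by roughly 10% on average; the wide confidence intervals around these estimates should be kept in mind.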
Recognizing the value of promoting public access to clinical trial protocols, Trials pioneered the way for their publication over a decade ago. However, despite major advances in the public accessibility of information about trial methods and results, protocol sharing remains relatively rare.
Protocol sharing facilitates the critical appraisal of clinical trials and helps to identify and deter the selective reporting of outcomes and analyses. Challenges to the routine availability of high-quality trial protocols include gaps in incentives and adherence mechanisms, limited venues for sharing the original and final protocol versions, and the need for mechanisms to ensure transparent and complete protocol content.
We propose recommendations for addressing key challenges to protocol sharing in order to promote routine public access to protocols for the benefit of patients and other users of evidence from clinical trials.