We describe a framework for defining pilot and feasibility studies focusing on studies conducted in preparation for a randomised controlled trial. To develop the framework, we undertook a Delphi survey; ran an open meeting at a trial methodology conference; conducted a review of definitions outside the health research context; consulted experts at an international consensus meeting; and reviewed 27 empirical pilot or feasibility studies. We initially adopted mutually exclusive definitions of pilot and feasibility studies. However, some Delphi survey respondents and the majority of open meeting attendees disagreed with the idea of mutually exclusive definitions. Their viewpoint was supported by definitions outside the health research context, the use of the terms 'pilot' and 'feasibility' in the literature, and participants at the international consensus meeting. In our framework, pilot studies are a subset of feasibility studies, rather than the two being mutually exclusive. A feasibility study asks whether something can be done, should we proceed with it, and if so, how. A pilot study asks the same questions but also has a specific design feature: in a pilot study a future study, or part of a future study, is conducted on a smaller scale. We suggest that to facilitate their identification, these studies should be clearly identified using the terms 'feasibility' or 'pilot' as appropriate. This should include feasibility studies that are largely qualitative; we found these difficult to identify in electronic searches because researchers rarely used the term 'feasibility' in the title or abstract of such studies. Investigators should also report appropriate objectives and methods related to feasibility, and give clear confirmation that their study is in preparation for a future randomised controlled trial designed to assess the effect of an intervention.
The Consolidated Standards of Reporting Trials (CONSORT) statement is a guideline designed to improve the transparency and quality of the reporting of randomised controlled trials (RCTs). In this article we present an extension to that statement for randomised pilot and feasibility trials conducted in advance of a future definitive RCT. The checklist applies to any randomised study in which a future definitive RCT, or part of it, is conducted on a smaller scale, regardless of its design (eg, cluster, factorial, crossover) or the terms used by authors to describe the study (eg, pilot, feasibility, trial, study). The extension does not directly apply to internal pilot studies built into the design of a main trial, non-randomised pilot and feasibility studies, or phase II studies, but these studies all have some similarities to randomised pilot and feasibility studies and so many of the principles might also apply. The development of the extension was motivated by the growing number of studies described as feasibility or pilot studies and by research that has identified weaknesses in their reporting and conduct. We followed recommended good practice to develop the extension, including carrying out a Delphi survey, holding a consensus meeting and research team meetings, and piloting the checklist. The aims and objectives of pilot and feasibility randomised studies differ from those of other randomised trials. Consequently, although much of the information to be reported in these trials is similar to that in RCTs assessing effectiveness and efficacy, there are some key differences in the type of information and in the appropriate interpretation of standard CONSORT reporting items. We have retained some of the original CONSORT statement items, but most have been adapted, some removed, and new items added.
The new items cover how participants were identified and consent obtained; if applicable, the prespecified criteria used to judge whether or how to proceed with a future definitive RCT; if relevant, other important unintended consequences; implications for progression from pilot to future definitive RCT, including any proposed amendments; and ethical approval or approval by a research review committee confirmed with a reference number. This article includes the 26 item checklist, a separate checklist for the abstract, a template for a CONSORT flowchart for these studies, and an explanation of the changes made and supporting examples. We believe that routine use of this proposed extension to the CONSORT statement will result in improvements in the reporting of pilot trials. Editor's note: In order to encourage its wide dissemination this article is freely accessible on the BMJ and Pilot and Feasibility Studies journal websites.
Background Cluster randomized trials are increasingly popular. In many of these trials, cluster sizes are unequal. This can affect trial power, but standard sample size formulae for these trials ignore this. Previous studies addressing this issue have mostly focused on continuous outcomes or methods that are sometimes difficult to use in practice. Methods We show how a simple formula can be used to judge the possible effect of unequal cluster sizes for various types of analyses and both continuous and binary outcomes. We explore the practical estimation of the coefficient of variation of cluster size required in this formula and demonstrate the formula's performance for a hypothetical but typical trial randomizing UK general practices. Results The simple formula provides a good estimate of sample size requirements for trials analysed using cluster-level analyses weighting by cluster size and a conservative estimate for other types of analyses. For trials randomizing UK general practices the coefficient of variation of cluster size depends on variation in practice list size, variation in incidence or prevalence of the medical condition under examination, and practice and patient recruitment strategies, and for many trials is expected to be ∼0.65. Individual-level analyses can be noticeably more efficient than some cluster-level analyses in this context. Conclusions When the coefficient of variation is <0.23, the effect of adjustment for variable cluster size on sample size is negligible. Most trials randomizing UK general practices and many other cluster randomized trials should account for variable cluster size in their sample size calculations.
Background Pragmatic trials aim to generate evidence to directly inform patient, caregiver and health-system manager policies and decisions. Heterogeneity in patient characteristics contributes to heterogeneity in their response to the intervention. However, there are many other sources of heterogeneity in outcomes. Based on the expertise and judgements of the authors, we identify different sources of clinical and methodological heterogeneity, which translate into heterogeneity in patient responses, some of which we consider desirable and some undesirable. For each of them, we discuss and, using real-world trial examples, illustrate how heterogeneity should be managed over the whole course of the trial. Main text Heterogeneity in centres and patients should be welcomed rather than limited. Interventions can be flexible or tailored, and control interventions are expected to reflect usual care, avoiding use of a placebo. Co-interventions should be allowed; adherence should not be enforced. All these elements introduce heterogeneity in interventions (experimental or control), which has to be welcomed because it mimics reality. Outcomes should be objective and, where possible, routinely collected; standardised assessment, blinding and adjudication should be avoided as much as possible because this is not how assessment would be done outside a trial setting. The statistical analysis strategy must be guided by the objective to inform decision-making, thus favouring the intention-to-treat principle. Pragmatic trials should consider including process analyses to inform an understanding of the trial results. The data needed to conduct these analyses should be collected unobtrusively. Finally, ethical principles must be respected, even though this may seem to conflict with goals of pragmatism; consent procedures could be incorporated into the flow of care. Keywords: Pragmatic randomised trials, Heterogeneity, Cluster randomised trials
Assessment of risk of bias is regarded as an essential component of a systematic review on the effects of an intervention. The most commonly used tool for randomised trials is the Cochrane risk-of-bias tool. We updated the tool to respond to developments in understanding how bias arises in randomised trials, and to address user feedback on and limitations of the original tool.
Implementation studies are often poorly reported and indexed, reducing their potential to inform initiatives to improve healthcare services. The Standards for Reporting Implementation Studies (StaRI) initiative aimed to develop guidelines for transparent and accurate reporting of implementation studies. Informed by the findings of a systematic review and a consensus-building e-Delphi exercise, an international working group of implementation science experts discussed and agreed the StaRI Checklist comprising 27 items. It prompts researchers to describe both the implementation strategy (techniques used to promote implementation of an underused evidence-based intervention) and the effectiveness of the intervention that was being implemented. An accompanying Explanation and Elaboration document (published in BMJ Open, doi:10.1136/bmjopen-2016-013318) details each of the items, explains the rationale, and provides examples of good reporting practice. Adoption of StaRI will improve the reporting of implementation studies, potentially facilitating translation of research into practice and improving the health of individuals and populations.
This study is to identify, summarise and synthesise literature on the causes of the evidence to practice gap for complex interventions in primary care.
This study is a systematic review of reviews.
MEDLINE, EMBASE, CINAHL, Cochrane Library and PsycINFO were searched, from inception to December 2013. Eligible reviews addressed causes of the evidence to practice gap in primary care in developed countries. Data from included reviews were extracted and synthesised using guidelines for meta-synthesis.
Seventy reviews fulfilled the inclusion criteria and encompassed a wide range of topics, e.g. guideline implementation, integration of new roles, technology implementation, public health and preventative medicine. None of the included papers used the term "cause" or stated an intention to investigate causes at all. A descriptive approach was often used, and the included papers expressed "causes" in terms of "barriers and facilitators" to implementation. We developed a four-level framework covering external context, organisation, professionals and intervention. External contextual factors included policies, incentivisation structures, dominant paradigms, stakeholders' buy-in, infrastructure and advances in technology. Organisation-related factors included culture, available resources, integration with existing processes, relationships, skill mix and staff involvement. At the level of individual professionals, professional role, underlying philosophy of care and competencies were important. Characteristics of the intervention that impacted on implementation included evidence of benefit, ease of use and adaptability to local circumstances. We postulate that the "fit" between the intervention and the context is critical in determining the success of implementation.
This comprehensive review of reviews summarises current knowledge on the barriers and facilitators to implementation of diverse complex interventions in primary care. To maximise the uptake of complex interventions in primary care, health care professionals and commissioning organisations should consider the range of contextual factors, remaining aware of the dynamic nature of context. Future studies should place an emphasis on describing context and articulating the relationships between the factors identified here.
PROSPERO CRD42014009410.