Summary

Background
Claims about what improves or harms our health are ubiquitous. People need to be able to assess the reliability of these claims. We aimed to evaluate an intervention designed to teach primary school children to assess claims about the effects of treatments (ie, any action intended to maintain or improve health).

Methods
In this cluster-randomised controlled trial, we included primary schools in the central region of Uganda that taught year-5 children (aged 10–12 years). We excluded international schools, special needs schools for children with auditory and visual impairments, schools that had participated in user-testing and piloting of the resources, infant and nursery schools, adult education schools, and schools that were difficult for us to access in terms of travel time. We randomly allocated a representative sample of eligible schools to either an intervention or control group. Intervention schools received the Informed Health Choices primary school resources (textbooks, exercise books, and a teachers' guide). Teachers attended a 2 day introductory workshop and gave nine 80 min lessons during one school term. The lessons addressed 12 concepts essential to assessing claims about treatment effects and making informed health choices. We did not intervene in the control schools. The primary outcomes, measured at the end of the school term, were the mean score on a test with two multiple-choice questions for each of the 12 concepts and the proportion of children with passing scores on the same test. This trial is registered with the Pan African Clinical Trial Registry, number PACTR201606001679337.

Findings
Between April 11, 2016, and June 8, 2016, 2960 schools were assessed for eligibility; 2029 were eligible, and a random sample of 170 were invited to recruitment meetings. After recruitment meetings, 120 eligible schools consented and were randomly assigned to either the intervention group (n=60, 76 teachers and 6383 children) or the control group (n=60, 67 teachers and 4430 children). The mean score in the multiple-choice test for the intervention schools was 62·4% (SD 18·8) compared with 43·1% (15·2) for the control schools (adjusted mean difference 20·0%, 95% CI 17·3–22·7; p<0·00001). In the intervention schools, 3967 (69%) of 5753 children achieved a predetermined passing score (≥13 of 24 correct answers) compared with 1186 (27%) of 4430 children in the control schools (adjusted difference 50%, 95% CI 44–55). The intervention was effective for children with different levels of reading skills, but was more effective for children with better reading skills.

Interpretation
The use of the Informed Health Choices primary school learning resources, after an introductory workshop for the teachers, led to a large improvement in the ability of children to assess claims about the effects of treatments. The results show that it is possible to teach primary school children to think critically in schools with large student to teacher ratios and few resources. Future studies should address how to scale up use of the resources, long-term effects, including effects on actual health choices, transferability to other countries, and how to build on this programme with additional primary and secondary school learning resources.

Funding
Research Council of Norway.
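As a concrete illustration of the outcome definitions above (two multiple-choice questions for each of the 12 concepts, and a predetermined pass mark of at least 13 of 24 correct answers), the scoring rule can be sketched as follows; the function names are hypothetical, not from the trial:

```python
# Scoring rule described in the abstract: 12 concepts x 2
# multiple-choice questions = 24 items; pass mark is >= 13 correct.
TOTAL_QUESTIONS = 12 * 2   # two questions per concept
PASS_MARK = 13

def score_percent(correct: int) -> float:
    """Test score as a percentage of the 24 questions."""
    return 100.0 * correct / TOTAL_QUESTIONS

def passed(correct: int) -> bool:
    """True if the child met the predetermined passing score."""
    return correct >= PASS_MARK
```

For example, 13 correct answers is a pass at roughly 54%, which is how a child can pass while scoring near the control-group mean.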
Audit and feedback is widely used as a strategy to improve professional practice, either on its own or as a component of multifaceted quality improvement interventions. This is based on the belief that healthcare professionals are prompted to modify their practice when given performance feedback showing that their clinical practice is inconsistent with a desirable target. Despite its prevalence as a quality improvement strategy, there remains uncertainty regarding both the effectiveness of audit and feedback in improving healthcare practice and the characteristics of audit and feedback that lead to greater impact.
To assess the effects of audit and feedback on the practice of healthcare professionals and patient outcomes and to examine factors that may explain variation in the effectiveness of audit and feedback.
We searched the Cochrane Central Register of Controlled Trials (CENTRAL) 2010, Issue 4, part of The Cochrane Library (www.thecochranelibrary.com), including the Cochrane Effective Practice and Organisation of Care (EPOC) Group Specialised Register (searched 10 December 2010); MEDLINE, Ovid (1950 to November Week 3 2010) (searched 09 December 2010); EMBASE, Ovid (1980 to 2010 Week 48) (searched 09 December 2010); CINAHL, Ebsco (1981 to present) (searched 10 December 2010); and Science Citation Index and Social Sciences Citation Index, ISI Web of Science (1975 to present) (searched 12–15 September 2011).
Randomised trials of audit and feedback (defined as a summary of clinical performance over a specified period of time) that reported objectively measured health professional practice or patient outcomes. In the case of multifaceted interventions, only trials in which audit and feedback was considered the core, essential aspect of at least one intervention arm were included.
All data were abstracted by two independent review authors. For the primary outcome(s) in each study, we calculated the median absolute risk difference (RD) (adjusted for baseline performance) in compliance with desired practice for dichotomous outcomes, and the median percent change relative to the control group for continuous outcomes. Across studies, the median effect size was weighted by the number of health professionals involved in each study. We investigated the following factors as possible explanations for the variation in the effectiveness of interventions across comparisons: format of feedback, source of feedback, frequency of feedback, instructions for improvement, direction of change required, baseline performance, profession of recipient, and risk of bias within the trial itself. We also conducted exploratory analyses to assess the role of context and the targeted clinical behaviour. Quantitative (meta-regression), visual, and qualitative analyses were undertaken to examine variation in effect size related to these factors.
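The pooling described above uses a weighted median, with each study's effect weighted by its number of health professionals. A minimal sketch of that statistic (an illustration only, not the review authors' code):

```python
def weighted_median(values, weights):
    """Smallest value whose cumulative weight reaches half of the
    total weight, after sorting value-weight pairs by value."""
    pairs = sorted(zip(values, weights))
    half = sum(weights) / 2.0
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half:
            return value
    return pairs[-1][0]

# Hypothetical adjusted risk differences (percentage points) from
# three comparisons, weighted by health professionals per study.
effects = [0.5, 4.3, 16.0]
professionals = [20, 150, 30]
pooled = weighted_median(effects, professionals)  # 4.3
```

Weighting by study size this way keeps one small trial with an extreme effect from dominating the summary.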
We included and analysed 140 studies for this review. In the main analyses, a total of 108 comparisons from 70 studies compared any intervention in which audit and feedback was a core, essential component to usual care and evaluated effects on professional practice. After excluding studies at high risk of bias, there were 82 comparisons from 49 studies featuring dichotomous outcomes, and the weighted median adjusted RD was a 4.3% (interquartile range (IQR) 0.5% to 16%) absolute increase in healthcare professionals' compliance with desired practice. Across 26 comparisons from 21 studies with continuous outcomes, the weighted median adjusted percent change relative to control was 1.3% (IQR 1.3% to 28.9%). For patient outcomes, the weighted median RD was -0.4% (IQR -1.3% to 1.6%) for 12 comparisons from six studies reporting dichotomous outcomes and the weighted median percentage change was 17% (IQR 1.5% to 17%) for eight comparisons from five studies reporting continuous outcomes. Multivariable meta-regression indicated that feedback may be more effective when baseline performance is low, the source is a supervisor or colleague, it is provided more than once, it is delivered in both verbal and written formats, and when it includes both explicit targets and an action plan. In addition, the effect size varied based on the clinical behaviour targeted by the intervention.
Audit and feedback generally leads to small but potentially important improvements in professional practice. The effectiveness of audit and feedback seems to depend on baseline performance and how the feedback is provided. Future studies of audit and feedback should directly compare different ways of providing feedback.
The world is awash with claims about the effects of health interventions. Many of these claims are untrustworthy because they are not based on reliable evidence. Acting on unreliable claims can lead to wasted resources and poor health outcomes. Yet most people lack the necessary skills to appraise the reliability of health claims. The Informed Health Choices (IHC) project aims to equip young people in Ugandan lower secondary schools with skills to think critically about health claims and to make good health choices, by developing and evaluating digital learning resources. To ensure that we create resources that are suitable for use in Uganda's secondary schools and can be scaled up if found effective, we conducted a context analysis. We aimed to better understand opportunities and barriers related to demand for the resources, how the learning content overlaps with the existing curriculum, and the conditions in secondary schools for accessing and using digital resources, in order to inform resource development.
We used a mixed methods approach, collecting both qualitative and quantitative data. We conducted document analyses, key informant interviews, focus group discussions, school visits, and a telephone survey regarding information and communication technology (ICT). We used a nominal group technique to obtain consensus on the appropriate number and length of IHC lessons that should be planned in a school term. We developed a coding framework from our objectives, used it to code the transcripts, and generated summaries of query reports in Atlas.ti version 7.
Critical thinking is a key competency in the lower secondary school curriculum. However, the curriculum does not explicitly make provision to teach critical thinking about health, despite a need acknowledged by curriculum developers, teachers, and students. Exam-oriented teaching and a lack of learning resources are additional important barriers to teaching critical thinking about health. School closures and the subsequent introduction of online learning during the COVID-19 pandemic have accelerated teachers' use of digital equipment and learning resources for teaching. Although the government is committed to improving access to ICT in schools and teachers are open to using ICT, limited access to digital equipment and unreliable power and internet connections remain important hindrances to the use of digital learning resources.
There is a recognized need for learning resources to teach critical thinking about health in Ugandan lower secondary schools. Digital learning resources should be designed to be usable even in schools with limited access and equipment. Teacher training on use of ICT for teaching is needed.
Adolescents encounter misleading claims about health interventions that can affect their health. Young people need to develop critical thinking skills to enable them to verify health claims and make informed choices. Schools could teach these important life skills, but educators need access to suitable learning resources that are aligned with their curriculum. The overall objective of this context analysis was to explore conditions for teaching critical thinking about health interventions using digital technology to lower secondary school students in Rwanda.
We undertook a qualitative descriptive study using four methods: document review, key informant interviews, focus group discussions, and observations. We reviewed 29 documents related to the national curriculum and ICT conditions in secondary schools. We conducted 8 interviews and 5 focus group discussions with students, teachers, and policy makers. We observed ICT conditions and use in five schools. We analysed the data using a framework analysis approach.
We found two major themes. The first was demand for teaching critical thinking about health. The current curriculum explicitly aims to develop critical thinking competences in students, and critical thinking and health topics are taught across subjects. However, understanding and teaching of critical thinking vary among teachers, and critical thinking about health is not being taught. The second theme was the current and expected ICT conditions. Most public schools have computers, projectors, and internet connectivity. However, use of ICT in teaching is limited, due in part to low computer-to-student ratios.
There is a need for learning resources to develop critical thinking skills generally and critical thinking about health specifically. Such skills could be taught within the existing curriculum using available ICT. Digital resources for teaching critical thinking about health should be designed so that they can be used flexibly across subjects and easily by teachers and students.
Correspondence to: A D Oxman oxman@online.no

Summary points
- Clinicians, guideline developers, and policymakers sometimes neglect important criteria, give undue weight to some criteria, and do not use the best available evidence to inform their judgments
- Explicit and transparent systems for decision making can help to ensure that all important criteria are considered and that decisions are informed by the best available research evidence
- The purpose of Evidence to Decision (EtD) frameworks is to help people use evidence in a structured and transparent way to inform decisions in the context of clinical recommendations, coverage decisions, and health system or public health recommendations and decisions
- EtD frameworks have a common structure that includes formulation of the question, an assessment of the evidence, and drawing conclusions, though there are some differences between frameworks for each type of decision
- EtD frameworks inform users about the judgments that were made and the evidence supporting those judgments by making the basis for decisions transparent to target audiences
- EtD frameworks also facilitate dissemination of recommendations and enable decision makers in other jurisdictions to adopt recommendations or decisions, or adapt them to their context

Introduction
Healthcare decision making is complex. Decision-making processes and the factors (criteria) that decision makers should consider vary for different types of decisions, including clinical recommendations, coverage decisions, and health system or public health recommendations or decisions.1 2 3 4 However, some criteria are relevant for all of these decisions, including the anticipated effects of the options being considered, the certainty of the evidence for those effects (also referred to as quality of evidence or confidence in effect estimates), and the costs and feasibility of the options.
Rigorously developed guidelines synthesise the available relevant research, facilitating the translation of evidence into recommendations for clinical practice.9 However, the quality of guidelines is often suboptimal.10 11 If guidelines are not developed systematically and transparently, clinicians are not able to decide whether to rely on them or to explore disagreements when faced with conflicting recommendations.12 The GRADE (Grading of Recommendations Assessment, Development and Evaluation) Working Group has previously developed and refined a system to assess the certainty of evidence of effects and strength of recommendations.13 14 15 More than 100 organisations globally, including the World Health Organization, the Cochrane Collaboration, and the National Institute for Health and Care Excellence (NICE), now use or have adopted the principles of the GRADE system.

Example EtD question (multidrug-resistant tuberculosis):
Outcomes: Cure by 120 weeks, adverse drug reactions (clinical and biological serious adverse events), mortality, time to culture conversion, culture conversion at 24 weeks, acquired resistance to fluoroquinolone and injectable drugs
Setting: Global, MDR-TB clinics
Perspective: Population perspective (health system)
Subgroups: Patients with extensively drug-resistant (XDR) or pre-XDR tuberculosis or those with resistance or contraindication to fluoroquinolones or injectables
Background: The emergence of drug resistance is a major threat to global tuberculosis care and control.
Correspondence to: P Alonso-Coello palonso@santpau.cat

Summary points
- Clinicians do not have the time or resources to consider the underlying evidence for the myriad decisions they must make each day and, as a consequence, rely on recommendations from clinical practice guidelines
- Guideline panels should consider all the relevant factors (criteria) that influence a decision or recommendation in a structured, explicit, and transparent way and provide clinicians with clear and actionable recommendations
- The GRADE working group has developed Evidence to Decision (EtD) frameworks for different types of decisions and recommendations. In this article we describe EtD frameworks for clinical practice recommendations
- The general structure of the EtD framework for clinical recommendations is similar to EtD frameworks for other types of recommendations and decisions, and includes formulation of the question, an assessment of the different criteria, and conclusions
- Clinical recommendations require considering criteria differently, depending on whether an individual patient or a population perspective is taken

To ensure trustworthiness, clinical practice guidelines are made by groups of people (guideline panels) with relevant skills, perspectives, and knowledge; they are informed by the best available evidence; and they are systematically developed.1 2 3 4 In the first article in this series, we described GRADE Evidence to Decision (EtD) frameworks and their rationale for different types of decisions.5 In this second article, we describe the use of EtD frameworks for clinical recommendations and how they can help clinicians and patients who use those recommendations.
Example EtD question (anticoagulation in atrial fibrillation):
Outcomes: Death, stroke, major bleeding, myocardial infarction, treatment burden
Setting: High resource setting
Perspective: Health system
Subgroups: Patients who are well controlled with warfarin
Background: Warfarin reduces the risk for ischaemic stroke in patients with atrial fibrillation but increases the risk for haemorrhage and requires frequent blood tests and clinic visits to monitor the international normalised ratio (INR) and adjust the dose.
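The EtD example details above share a common set of header fields (outcomes, setting, perspective, subgroups, background). A minimal sketch of that structure as a record; the class and field names are hypothetical and not part of any official GRADE tooling:

```python
from dataclasses import dataclass

@dataclass
class EtDQuestion:
    """Header fields of an EtD framework example question."""
    outcomes: list
    setting: str
    perspective: str
    subgroups: str
    background: str

warfarin_example = EtDQuestion(
    outcomes=["death", "stroke", "major bleeding",
              "myocardial infarction", "treatment burden"],
    setting="High resource setting",
    perspective="Health system",
    subgroups="Patients who are well controlled with warfarin",
    background="Warfarin reduces the risk for ischaemic stroke "
               "in patients with atrial fibrillation.",
)
```

Making the question header explicit like this is what lets a recommendation be adapted: a panel in another jurisdiction can keep the outcomes but swap in its own setting and perspective.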
Background
The CONSORT statement is intended to improve reporting of randomised controlled trials and focuses on minimising the risk of bias (internal validity). The applicability of a trial’s results (generalisability or external validity) is also important, particularly for pragmatic trials. A pragmatic trial (a term first used in 1967 by Schwartz and Lellouch) can be broadly defined as a randomised controlled trial whose purpose is to inform decisions about practice. This extension of the CONSORT statement is intended to improve the reporting of such trials and focuses on applicability.

Methods
At two two-day meetings held in Toronto in 2005 and 2008, we reviewed the CONSORT statement and its extensions, the literature on pragmatic trials and applicability, and our experiences in conducting pragmatic trials.

Recommendations
We recommend extending eight CONSORT checklist items for reporting of pragmatic trials: the background, participants, interventions, outcomes, sample size, blinding, participant flow, and generalisability of the findings. These extensions are presented, along with illustrative examples of reporting and an explanation of each extension. Adherence to these reporting criteria will make it easier for decision makers to judge how applicable the results of randomised controlled trials are to their own conditions. Empirical studies are needed to ascertain the usefulness and comprehensiveness of these CONSORT checklist item extensions. In the meantime, we recommend that those who support, conduct, and report pragmatic trials use this extension of the CONSORT statement to facilitate the use of trial results in decisions about health care.
Abstract
In the GRADE approach, the strength of a recommendation reflects the extent to which we can be confident that the composite desirable effects of a management strategy outweigh the composite undesirable effects. This article addresses GRADE's approach to determining the direction and strength of a recommendation. The GRADE approach describes the balance of desirable and undesirable outcomes of interest among alternative management strategies as depending on four domains, namely estimates of effect for desirable and undesirable outcomes of interest, confidence in the estimates of effect, estimates of values and preferences, and resource use. Ultimately, guideline panels must use judgment in integrating these factors to make a strong or weak recommendation for or against an intervention.
The tendency for authors to submit, and of journals to accept, manuscripts for publication based on the direction or strength of the study findings has been termed publication bias.
To assess the extent to which publication of a cohort of clinical trials is influenced by the statistical significance, perceived importance, or direction of their results.
We searched the Cochrane Methodology Register (The Cochrane Library Online Issue 2, 2007), MEDLINE (1950 to March Week 2 2007), EMBASE (1980 to Week 11 2007) and Ovid MEDLINE In-Process & Other Non-Indexed Citations (March 21 2007). We also searched the Science Citation Index (April 2007), checked reference lists of relevant articles and contacted researchers to identify additional studies.
Studies containing analyses of the association between publication and the statistical significance or direction of the results (trial findings), for a cohort of registered clinical trials.
Two authors independently extracted data. We classified findings as either positive (defined as results classified by the investigators as statistically significant (P < 0.05), or perceived as striking or important, or showing a positive direction of effect) or negative (findings that were not statistically significant (P ≥ 0.05), or perceived as unimportant, or showing a negative or null direction of effect). We extracted information on other potential risk factors for failure to publish, when these data were available.
Five studies were included. Trials with positive findings were more likely to be published than trials with negative or null findings (odds ratio 3.90; 95% confidence interval 2.68 to 5.68). This corresponds to a risk ratio of 1.78 (95% CI 1.58 to 1.95), assuming that 41% of negative trials are published (the median among the included studies, range = 11% to 85%). In absolute terms, this means that if 41% of negative trials are published, we would expect that 73% of positive trials would be published. Two studies assessed time to publication and showed that trials with positive findings tended to be published after four to five years compared to those with negative findings, which were published after six to eight years. Three studies found no statistically significant association between sample size and publication. One study found no significant association between either funding mechanism, investigator rank, or sex and publication.
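The risk ratio above can be derived from the odds ratio with the standard conversion RR = OR / (1 - p0 + p0 * OR), where p0 is the assumed event proportion in the reference group (here, the 41% publication rate of negative trials). A quick check of the figures reported:

```python
def odds_ratio_to_risk_ratio(odds_ratio: float, p0: float) -> float:
    """Convert an odds ratio to a risk ratio, given the baseline
    event proportion p0 in the reference group."""
    return odds_ratio / (1 - p0 + p0 * odds_ratio)

rr = odds_ratio_to_risk_ratio(3.90, 0.41)   # about 1.78
expected_positive = rr * 0.41               # about 0.73
```

This reproduces both numbers in the text: a risk ratio of 1.78, and an expected publication rate of roughly 73% for positive trials when 41% of negative trials are published.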
Trials with positive findings are published more often, and more quickly, than trials with negative findings.
Abstract
The “Grades of Recommendation, Assessment, Development, and Evaluation” (GRADE) approach provides guidance for rating quality of evidence and grading strength of recommendations in health care. It has important implications for those summarizing evidence for systematic reviews, health technology assessment, and clinical practice guidelines. GRADE provides a systematic and transparent framework for clarifying questions, determining the outcomes of interest, summarizing the evidence that addresses a question, and moving from the evidence to a recommendation or decision. Wide dissemination and use of the GRADE approach, with endorsement from more than 50 organizations worldwide, many highly influential (http://www.gradeworkinggroup.org/), attests to the importance of this work. This article introduces a 20-part series providing guidance for the use of GRADE methodology that will appear in the Journal of Clinical Epidemiology.