The CONSORT 2010 statement provides minimum guidelines for reporting randomised trials. Its widespread use has been instrumental in ensuring transparency in the evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate impact on health outcomes. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trials evaluating interventions with an AI component. It was developed in parallel with its companion statement for clinical trial protocols: SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 29 candidate items, which were assessed by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a two-day consensus meeting (31 stakeholders), and refined through a checklist pilot (34 participants). The CONSORT-AI extension includes 14 new items that were considered sufficiently important for AI interventions that they should be routinely reported in addition to the core CONSORT 2010 items. CONSORT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human–AI interaction, and provision of an analysis of error cases. CONSORT-AI will help promote transparency and completeness in reporting clinical trials for AI interventions.
It will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes.
The SPIRIT 2013 statement aims to improve the completeness of clinical trial protocol reporting by providing evidence-based recommendations for the minimum set of items to be addressed. This guidance has been instrumental in promoting transparent evaluation of new interventions. More recently, there has been a growing recognition that interventions involving artificial intelligence (AI) need to undergo rigorous, prospective evaluation to demonstrate their impact on health outcomes. The SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) extension is a new reporting guideline for clinical trial protocols evaluating interventions with an AI component. It was developed in parallel with its companion statement for trial reports: CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence). Both guidelines were developed through a staged consensus process involving literature review and expert consultation to generate 26 candidate items, which were consulted upon by an international multi-stakeholder group in a two-stage Delphi survey (103 stakeholders), agreed upon in a consensus meeting (31 stakeholders), and refined through a checklist pilot (34 participants). The SPIRIT-AI extension includes 15 new items that were considered sufficiently important for clinical trial protocols of AI interventions. These new items should be routinely reported in addition to the core SPIRIT 2013 items. SPIRIT-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention will be integrated, considerations for the handling of input and output data, the human–AI interaction, and analysis of error cases. SPIRIT-AI will help promote transparency and completeness for clinical trial protocols for AI interventions.
Its use will assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the design and risk of bias for a planned clinical trial.
Increasingly, researchers need to demonstrate the impact of their research to their sponsors, funders, and fellow academics. However, the most appropriate way of measuring the impact of healthcare research is subject to debate. We aimed to identify the existing methodological frameworks used to measure healthcare research impact and to summarise the common themes and metrics in an impact matrix.
Two independent investigators systematically searched the Medical Literature Analysis and Retrieval System Online (MEDLINE), the Excerpta Medica Database (EMBASE), the Cumulative Index to Nursing and Allied Health Literature (CINAHL+), the Health Management Information Consortium, and the Journal of Research Evaluation from inception until May 2017 for publications that presented a methodological framework for research impact. We then summarised the common concepts and themes across methodological frameworks and identified the metrics used to evaluate differing forms of impact. Twenty-four unique methodological frameworks were identified, addressing 5 broad categories of impact: (1) 'primary research-related impact', (2) 'influence on policy making', (3) 'health and health systems impact', (4) 'health-related and societal impact', and (5) 'broader economic impact'. These categories were subdivided into 16 common impact subgroups. Authors of the included publications proposed 80 different metrics aimed at measuring impact in these areas. The main limitation of the study was the potential exclusion of relevant articles, as a consequence of the poor indexing of the databases searched.
The measurement of research impact is an essential exercise to help direct the allocation of limited research resources, to maximise research benefit, and to help minimise research waste. This review provides a collective summary of existing methodological frameworks for research impact, which funders may use to inform the measurement of research impact and researchers may use to inform study design decisions aimed at maximising the short-, medium-, and long-term impact of their research.
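The review's impact matrix is essentially a nested categorisation: 5 broad impact categories, subdivided into 16 subgroups, with 80 proposed metrics mapped onto them. As a minimal sketch of how a funder might organise such a matrix in code (the five category names are taken from the review; the subgroup and metric entries below are hypothetical placeholders, not items from the included frameworks):

```python
# Hypothetical impact-matrix structure: the five broad categories come from
# the review; each maps to a list of (subgroup, metric) pairs. The example
# entries are illustrative placeholders only.
impact_matrix = {
    "primary research-related impact": [],
    "influence on policy making": [],
    "health and health systems impact": [],
    "health-related and societal impact": [],
    "broader economic impact": [],
}

def record_metric(matrix, category, subgroup, metric):
    """File a metric under one of the five broad impact categories."""
    if category not in matrix:
        raise KeyError(f"Unknown impact category: {category}")
    matrix[category].append((subgroup, metric))

# Illustrative use: tally a (hypothetical) policy-related metric.
record_metric(impact_matrix, "influence on policy making",
              "clinical guidelines", "citations in guideline documents")
total_metrics = sum(len(entries) for entries in impact_matrix.values())
print(total_metrics)  # → 1
```

A flat dictionary keyed by category keeps the matrix easy to aggregate across categories, which mirrors how the review summarises metrics per impact subgroup.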
Globally, there are now over 160 million confirmed cases of COVID-19 and more than 3 million deaths. While the majority of infected individuals recover, a significant proportion continue to experience symptoms and complications after their acute illness. Patients with ‘long COVID’ experience a wide range of physical and mental/psychological symptoms. Pooled prevalence data showed the 10 most prevalent reported symptoms were fatigue, shortness of breath, muscle pain, joint pain, headache, cough, chest pain, altered smell, altered taste and diarrhoea. Other common symptoms were cognitive impairment, memory loss, anxiety and sleep disorders. Beyond symptoms and complications, people with long COVID often reported impaired quality of life, mental health and employment issues. These individuals may require multidisciplinary care involving the long-term monitoring of symptoms, to identify potential complications, physical rehabilitation, mental health and social services support. Resilient healthcare systems are needed to ensure efficient and effective responses to future health challenges.
Background
Patient-reported outcomes (PROs) are increasingly collected in clinical trials as they provide unique information on the physical, functional and psychological impact of a treatment from the patient’s perspective. Recent research suggests that PRO trial data have the potential to inform shared decision-making, support pharmaceutical labelling claims and influence healthcare policy and practice. However, there remains limited evidence regarding the actual impact associated with PRO trial data and how to maximise PRO impact to benefit patients and society. Thus, our objective was to qualitatively explore international stakeholders’ perspectives surrounding:
a) the impact of PRO trial data,
b) impact measurement metrics, and
c) barriers and facilitators to effectively maximise the impact of PRO trial data upon patients and society.
Methods
Semi-structured interviews with 24 international stakeholders were conducted between May and October 2018. Data were coded and analysed using reflexive thematic analysis.
Results
International stakeholders emphasised the impact of PRO trial data to benefit patients and society. Policy-related impact, including changes to clinical healthcare practice and guidelines, drug approval, and promotional labelling claims, was among the types of PRO impact most commonly reported by interviewees. Interviewees suggested impact measurement metrics, including the number of pharmaceutical labelling claims and interviews with healthcare practitioners to determine whether PRO data were incorporated in clinical decision-making. Key facilitators to PRO impact highlighted by stakeholders included: standardisation of PRO tools; consideration of health utilities when selecting PRO measures; adequate funding to support PRO research; improved reporting and dissemination of PRO trial data by key opinion leaders and patients; and development of legal enforcement of the collection of PRO data.
Conclusions
Determining the impact of PRO trial data is essential to better allocate funds, minimise research waste and help maximise the impact of these data for patients and society. However, measuring the impact of PRO trial data through metrics is a challenging task, as current measures do not capture the total impact of PRO research. Broader international multi-stakeholder engagement and collaboration are needed to standardise PRO assessment and maximise the impact of PRO trial data to benefit patients and society.
Randomized clinical trials are critical for evaluating the safety and efficacy of interventions in oncology and informing regulatory decisions, practice guidelines, and health policy. Patient-reported outcomes (PROs) are increasingly used in randomized trials to reflect the impact of receiving cancer therapies from the patient perspective and can inform evaluations of interventions by providing evidence that cannot be obtained or deduced from clinicians' reports or from other biomedical measures. This commentary focuses on how PROs add value to clinical trials by representing the patient voice. We employed 2 previously published descriptive frameworks (addressing how PROs are used in clinical trials and how PROs have an impact, respectively) and selected 9 clinical trial publications that illustrate the value of PROs according to the framework categories. These include 3 trials where PROs were a primary trial endpoint, 3 trials where PROs as secondary endpoints supported the primary endpoint, and 3 trials where PROs as secondary endpoints contrast the primary endpoint findings in clinically important ways. The 9 examples illustrate that PROs add valuable data to the care and treatment context by informing future patients about how they may feel and function on different treatments and by providing clinicians with evidence to support changes to clinical practice and shared decision making. Beyond the patient and clinician, PROs can enable administrators to consider the cost-effectiveness of implementing new interventions and contribute vital information to policy makers, health technology assessors, and regulators. These examples provide a strong case for the wider implementation of PROs in cancer trials.
Abstract
Patient-reported outcomes (PROs) are used in clinical trials to provide evidence of the benefits and risks of interventions from a patient perspective and to inform regulatory decisions and health policy. The collection of PROs in routine practice can facilitate the monitoring of patient symptoms, the identification of unmet needs, and the prioritisation and/or tailoring of treatment to the needs of individual patients, and can inform value-based healthcare initiatives. However, respondent burden needs to be carefully considered and addressed to avoid high rates of missing data and poor reporting of PRO results, which may lead to poor-quality data for regulatory decision making and/or clinical care.