Purpose
There is a need to develop hybrid trial methodology combining the best parts of traditional randomized controlled trials (RCTs) and observational study designs to produce real‐world evidence (RWE) that provides adequate scientific evidence for regulatory decision‐making.
Methods
This review explores how hybrid study designs that include features of RCTs and studies with real‐world data (RWD) can combine the advantages of both to generate RWE that is fit for regulatory purposes.
Results
Some hybrid designs include randomization and use pragmatic outcomes; others supplement single‐arm trial data with external comparators derived from real‐world data (RWD) or leverage novel data collection approaches to capture long‐term outcomes in a real‐world setting. Some of these approaches have already been used successfully in regulatory decisions, raising the possibility that studies using RWD could increasingly augment or replace traditional RCTs for demonstrating drug effectiveness in certain contexts. These changes come against a background of long‐standing reliance on RCTs for regulatory decision‐making; RCTs are labor‐intensive and costly, and can produce data with limited applicability in real‐world clinical practice.
Conclusions
While RWE from observational studies is well accepted for satisfying postapproval safety monitoring requirements, it has not commonly been used to demonstrate drug effectiveness for regulatory purposes. However, this position is changing as regulatory opinions, guidance frameworks, and RWD methodologies are evolving, with growing recognition of the value of using RWE that is acceptable for regulatory decision‐making.
Background:
Recognizing the growing need for robust evidence about treatment effectiveness in real-world populations, the Good Research for Comparative Effectiveness (GRACE) guidelines were developed for noninterventional studies of comparative effectiveness to determine which studies are rigorous enough to be relied on in health technology assessments.
Objective:
To evaluate which items of the GRACE Checklist contribute most strongly to recognition of quality.
Methods:
We assembled 28 observational comparative effectiveness articles published from 2001 to 2010 that compared treatment effectiveness and/or safety of drugs, medical devices, and medical procedures. Twenty-two volunteers from academia, pharmaceutical companies, and government agencies applied the GRACE Checklist to those articles, providing 56 assessments. Ten senior academic and industry experts provided assessments of overall article quality for the purpose of decision support. We also rated each article based on the number of annual citations and impact factor of the journal in which the article was published. To identify checklist items that were most predictive of quality, classification and regression tree (CART) analysis, a binary, recursive, partitioning methodology, was used to create 3 decision trees, which compared the 56 article assessments with 3 external quality outcomes: (1) expert assessment of overall quality, (2) citation frequency, and (3) impact factor. A fourth tree looked at the composite outcome of all 3 quality indicators.
Results:
The best predictors of quality included the following: use of concurrent comparators, limiting the study to new initiators of the study drug, equivalent measurement of outcomes in study groups, collecting data on most if not all known confounders or effect modifiers, accounting for immortal time bias in the analysis, and use of sensitivity analyses to test how much effect estimates depended on various assumptions. Only sensitivity analyses appeared consistently as a predictor of quality in all 4 trees. When a composite outcome of the 3 quality measures was used, the GRACE Checklist showed high sensitivity and specificity (71.43% and 80.95%, respectively).
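The sensitivity and specificity reported for the composite outcome are standard confusion-matrix quantities. A minimal sketch in Python; the cell counts below are illustrative assumptions chosen only to reproduce the reported percentages, since the study's actual counts are not given here:

```python
# Minimal sketch: sensitivity and specificity as confusion-matrix
# quantities. Counts are illustrative, not the study's actual cells.

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of truly high-quality articles the checklist flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of lower-quality articles the checklist screens out."""
    return tn / (tn + fp)

# Illustrative counts: 10 of 14 high-quality articles flagged,
# 17 of 21 lower-quality articles correctly screened out.
sens = sensitivity(tp=10, fn=4)   # 10/14 = 0.7143...
spec = specificity(tn=17, fp=4)   # 17/21 = 0.8095...
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")
```

Any pair of counts in those ratios reproduces the published figures; the point is only that the checklist's screening performance is summarized by these two rates.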
Conclusions:
The GRACE Checklist stands out from other consensus-driven and expert guidance documents because of its extensive validation. This most recent work shows that the checklist has strong sensitivity and specificity, increasing its utility as a screening tool to identify high-quality observational comparative effectiveness research that is worthy of in-depth review and applicable for decision support.
Disclosures:
No outside funding supported this research. All authors are full-time employees of Quintiles, which provides research and consulting services to the biopharmaceutical industry. The authors have no other disclosures to report. Two of the 3 CART trees ("Article Citations per Year" and "Journal Impact Factor") were presented at the International Society for Pharmacoepidemiology in 2015. The original validation study was published in the March 2014 issue of the Journal of Managed Care & Specialty Pharmacy. The checklist questions and scoring are reproduced from a table originally published by this journal in 2014. Study concept and design were contributed primarily by Dreyer and Velentgas, along with Bryant. Bryant took the lead in data collection and analysis, along with Dreyer and Velentgas, and data interpretation was performed by Dreyer, Velentgas, and Bryant. The manuscript was written and revised primarily by Dreyer, along with Bryant and Velentgas.
There is growing interest in regulatory use of randomized pragmatic trials and noninterventional real-world (RW) studies of effectiveness and safety, but there is no agreed-on framework for assessing when this type of evidence is sufficiently reliable. Rather than impose a clinical trial–like paradigm on RW evidence, like blinded treatments or complete, source-verified data, the framework for assessing the utility of RW evidence should be grounded in the context of specific study objectives, clinical events that are likely to be detected in routine care, and the extent to which systematic error (bias) is likely to impact effect estimation. Whether treatment is blinded should depend on how well the outcome can be measured objectively. Qualification of a data source should be based on (1) numbers of patients of interest available for study; (2) if “must-have” data are likely to be recorded, and if so, how and where; (3) the accessibility of systematic follow-up data for the time period of interest; and (4) the potential for systematic errors (bias) in data collection and the likely magnitude of any such bias. Accessible data may not be representative of an entire population, but still may provide reliable evidence about the experience of typical patients treated under conditions of conventional care. Similarly, RW data that falls short of optimal length of follow-up or study size may still be useful in terms of its ability to provide evidence for regulators for subgroups of special interest. Developing a framework to qualify RW evidence in the context of a particular study purpose and data asset will enable broader regulatory use of RW data for approval of new molecular entities and label changes. Reliable information about diverse populations and settings should also help us move closer to more affordable, effective health care.
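The four qualification criteria above can be read as a simple fitness-for-purpose screen. A hypothetical sketch in Python; the thresholds, field names, and pass/fail logic are illustrative assumptions, not from any guidance document:

```python
# Hypothetical fitness-for-purpose screen over the four data source
# qualification criteria. Thresholds below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class DataSource:
    eligible_patients: int            # (1) patients of interest available
    must_have_fields_recorded: bool   # (2) "must-have" data captured
    follow_up_years: float            # (3) systematic follow-up available
    expected_bias: str                # (4) "low", "moderate", or "high"

def qualifies(src: DataSource, min_n: int = 1000,
              min_follow_up: float = 2.0) -> bool:
    """All four criteria must be met; thresholds are assumptions."""
    return (src.eligible_patients >= min_n
            and src.must_have_fields_recorded
            and src.follow_up_years >= min_follow_up
            and src.expected_bias in ("low", "moderate"))

# Hypothetical sources: a large claims database vs. a small registry
claims_db = DataSource(25_000, True, 5.0, "moderate")
registry = DataSource(400, True, 1.0, "low")
print(qualifies(claims_db), qualifies(registry))  # True False
```

In practice each criterion would be judged against the specific study question, as the text stresses, rather than fixed numeric cutoffs.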
Registries for robust evidence
Dreyer, Nancy A; Garner, Sarah
JAMA: The Journal of the American Medical Association, 2009-Aug-19, Volume 302, Issue 7
Journal Article
Randomized clinical trials (RCTs) are the gold standard in producing clinical evidence of efficacy and safety of medical interventions. More recently, a new paradigm is emerging—specifically within the context of preauthorization regulatory decision‐making—for some novel uses of real‐world evidence (RWE) from a variety of real‐world data (RWD) sources to answer certain clinical questions. Traditionally reserved for rare diseases and other special circumstances, external controls (eg, historical controls) are recognized as a possible type of control arm for single‐arm trials. However, creating and analyzing an external control arm using RWD can be challenging since design and analytics may not fully control for all systematic differences (biases). Nonetheless, certain biases can be attenuated using appropriate design and analytical approaches. The main objective of this paper is to improve the scientific rigor in the generation of external control arms using RWD. Here we (a) discuss the rationale and regulatory circumstances appropriate for external control arms, (b) define different types of external control arms, and (c) describe study design elements and approaches to mitigate certain biases in external control arms. This manuscript received endorsement from the International Society for Pharmacoepidemiology (ISPE).
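One widely used diagnostic when assembling an external control arm from RWD (not necessarily the specific approach described in this paper) is the standardized mean difference, used to check whether a baseline covariate is balanced between the single-arm trial and the external comparator. A minimal sketch with hypothetical cohorts:

```python
# Minimal sketch: standardized mean difference (SMD) as a covariate
# balance check between a single-arm trial and an RWD-derived external
# control. All cohorts and values below are hypothetical.

from statistics import mean, variance

def standardized_mean_difference(trial: list, control: list) -> float:
    """SMD = (mean_t - mean_c) / pooled SD; |SMD| < 0.1 is a
    conventional rule of thumb for acceptable balance."""
    pooled_sd = ((variance(trial) + variance(control)) / 2) ** 0.5
    return (mean(trial) - mean(control)) / pooled_sd

# Hypothetical baseline ages: trial enrollees vs. RWD comparators
trial_age = [54, 61, 58, 63, 49, 57, 60, 55]
control_age = [66, 70, 62, 71, 68, 64, 73, 69]

smd = standardized_mean_difference(trial_age, control_age)
print(f"SMD for age: {smd:.2f}")  # a large |SMD| flags imbalance
```

A large imbalance like this one would prompt the design and analytical adjustments the paper refers to (for example, restriction, matching, or weighting) before treating the external cohort as a valid comparator.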
Person-generated health data (PGHD) are valuable to study outcomes relevant to everyday living, to obtain information not otherwise available, for long-term follow-up and in situations where decisions cannot wait for traditional clinical research to be completed. While there is no dispute that these data are subject to bias, insights gained may be better than an information void, provided the biases are understood and acknowledged. People will share information known uniquely to them about exposures that may affect drug tolerance, safety and effectiveness, e.g., using non-prescription and complementary medications, alcohol, tobacco, illicit drugs, exercise, etc. Patients may be the best source of safety information when long-term follow-up is needed, e.g., the 5-15-year follow-up required for some gene therapies. Validation studies must be performed to evaluate what people can accurately report and when supplementary confirmation information is needed. But PGHD has already proven valuable in quantifying and contrasting COVID-19 vaccine benefits and risks, and for evaluating disease transmission and the accuracy of COVID-19 testing. Going forward, PGHD will be used for patient-measured and patient-relevant outcomes, including regulatory purposes, and will be linked to broader health data networks using tokenization, becoming a mainstay for signals about risks and benefits for diverse populations.
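Tokenization, the linkage mechanism mentioned above, generally means replacing identifying fields with irreversible keyed hashes so records from different datasets can be joined without exchanging names or birth dates. A minimal sketch; the field choices, normalization, and key handling here are illustrative assumptions, not any vendor's actual scheme:

```python
# Minimal sketch of privacy-preserving record linkage via tokens:
# a keyed hash over normalized identifiers. Illustrative only.

import hashlib
import hmac

SECRET_KEY = b"shared-project-key"  # hypothetical; shared out of band

def make_token(first_name: str, last_name: str, dob: str) -> str:
    """Keyed SHA-256 over normalized identifiers; the same person
    yields the same token in every dataset using the same key."""
    normalized = f"{first_name.strip().lower()}|{last_name.strip().lower()}|{dob}"
    return hmac.new(SECRET_KEY, normalized.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Records from two sources link on the token, never on raw identifiers
t1 = make_token("Ada", "Lovelace", "1815-12-10")
t2 = make_token(" ADA ", "lovelace", "1815-12-10")  # messy entry, same person
print(t1 == t2)  # normalization makes the tokens match
```

The keyed hash (rather than a plain hash) matters: without the shared key, the token cannot be reproduced from guessed identifiers by an outside party.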
Background:
Ankle sprains are one of the most common injuries in basketball. Despite this, the incidence and setting of ankle sprains among elite basketball players are not well described.
Purpose:
To describe the epidemiology of ankle sprains among National Basketball Association (NBA) players.
Study Design:
Cohort study; Level of evidence, 3.
Methods:
All players on an NBA roster for ≥1 NBA game (preseason, regular season, or playoffs) during the 2013-14 through 2016-17 seasons were included. Data were collected with the NBA electronic medical record system. All NBA teams used the electronic medical record continuously throughout the study period to record comprehensive injury data, including onset, mechanism, setting, type, and time lost. Game incidence rates were calculated per 1000 player-games and per 10,000 player-minutes of participation, stratified by demographic and playing characteristics.
Results:
There were 796 ankle sprains among 389 players and 2341 unique NBA player-seasons reported in the league from 2013-14 through 2016-17. The overall single-season risk of ankle sprain was 25.8% (95% CI, 23.9%-28.0%). The majority of ankle sprains occurred in games (n = 565, 71.0%) and involved a contact mechanism of injury (n = 567, 71.2%). Most ankle sprains were lateral (n = 638, 80.2%). The incidence of ankle sprain among players with a history of prior ankle sprain in the past year was 1.41 times (95% CI, 1.13-1.74) the incidence of those without a history of ankle sprain in the past year (P = .002). Fifty-six percent of ankle sprains did not result in any NBA games missed (n = 443); among those that did, players missed a median of 2 games (interquartile range, 1-4) resulting in a cumulative total of 1467 missed player-games over the 4-season study period.
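The comparison of sprain incidence by injury history is a standard incidence rate ratio with a log-scale Wald confidence interval. A sketch with hypothetical counts and denominators (the study's actual player-game totals per subgroup are not reproduced here):

```python
# Minimal sketch: incidence rate ratio (IRR) with a log-scale Wald CI.
# Event counts and exposure times below are hypothetical.

import math

def rate_ratio_ci(events_a: int, time_a: float,
                  events_b: int, time_b: float, z: float = 1.96):
    """IRR of group A vs. group B with an approximate 95% CI,
    computed on the log scale and exponentiated back."""
    irr = (events_a / time_a) / (events_b / time_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(irr) - z * se_log)
    hi = math.exp(math.log(irr) + z * se_log)
    return irr, lo, hi

# Hypothetical: 120 sprains per 10,000 player-games among players with a
# prior sprain vs. 85 per 10,000 player-games among those without
irr, lo, hi = rate_ratio_ci(120, 10_000, 85, 10_000)
print(f"IRR = {irr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

With the study's real denominators the same computation yields the reported 1.41 (95% CI, 1.13-1.74); the width of the interval depends on the event counts, which is why the hypothetical CI here is wider.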
Conclusion:
Ankle sprains affect approximately 26% of NBA players on average each season and account for a large number of missed NBA games in aggregate. Younger players and players with a history of ankle sprain have elevated rates of incident ankle sprains in games, highlighting the potential benefit of integrating injury prevention programs into the management of initial sprains. Research on basketball- and ankle-specific injury prevention strategies could provide further benefits.
Concerns regarding both the limited generalizability and the slow pace of traditional randomized trials have led to calls for greater use of real‐world evidence (RWE) in the evaluation of new treatments or products. RWE studies often rely on real‐world data (RWD), including data extracted from healthcare records or data captured by mobile phones or other consumer devices. Global assessments of RWD sources are not helpful in assessing whether any specific RWD element is fit for any specific purpose. Instead, evidence generators and evidence consumers should clearly identify the specific health state or clinical phenomenon of interest and then consider each step between that clinical phenomenon and its representation in a research database. We propose specific questions regarding potential error or bias affecting each of those steps: Would a person experiencing this clinical phenomenon present for care in this setting or interact with this recording device? Would this clinical phenomenon be accurately recognized or assessed? How might the recording environment or tools affect accurate and consistent recording of this clinical phenomenon? Can data elements from different sources be harmonized, both technically (same format) and semantically (same meaning)? Can the original data elements be consistently reduced to a useful clinical phenotype? Addressing these questions requires a range of clinical, organizational, and technical expertise. Transparency regarding each step in the creation of RWD is essential if evidence consumers are to rely on RWE studies.
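The harmonization question above has two parts: technical (same format) and semantic (same meaning). A toy sketch of both, where two sites record the same lab result under different local codes and units; the site codes, common code name, and mapping are hypothetical:

```python
# Toy sketch of harmonizing lab records from two sources into one
# common representation. All codes and mappings are hypothetical.

# Semantic harmonization: map site-local codes to one common concept
CODE_MAP = {
    ("site_a", "GLU"): "glucose_mg_dl",
    ("site_b", "2345-7"): "glucose_mg_dl",  # e.g. a LOINC-style code
}

def harmonize(site: str, code: str, value: float, unit: str) -> dict:
    """Map a site-local record to the common code and unit (mg/dL)."""
    common = CODE_MAP[(site, code)]
    if unit == "mmol/L":          # technical harmonization: unit conversion
        value = value * 18.0      # approximate glucose conversion factor
    return {"code": common, "value": round(value, 1), "unit": "mg/dL"}

r1 = harmonize("site_a", "GLU", 99.0, "mg/dL")
r2 = harmonize("site_b", "2345-7", 5.5, "mmol/L")
print(r1, r2)  # both records now share one code and unit
```

Only after both steps can the two records be pooled or compared; a merge on the raw site codes would silently treat them as different measurements.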