This study evaluated the association of time in range (TIR) of 70-180 mg/dL (3.9-10 mmol/L) with the development or progression of retinopathy and development of microalbuminuria using the Diabetes Control and Complications Trial (DCCT) data set in order to validate the use of TIR as an outcome measure for clinical trials.
In the DCCT, blood glucose concentrations were measured at a central laboratory from seven fingerstick samples (seven-point testing: pre- and 90-min postmeals and at bedtime) collected during 1 day every 3 months. Retinopathy progression was assessed every 6 months and urinary microalbuminuria development every 12 months. Proportional hazards models were used to assess the association of TIR and other glycemic metrics, computed from the seven-point fingerstick data, with the rate of development of microvascular complications.
Mean TIR of seven-point profiles for the 1,440 participants was 41 ± 16%. The hazard rate of development of retinopathy progression was increased by 64% (95% CI 51-78), and development of the microalbuminuria outcome was increased by 40% (95% CI 25-56), for each 10 percentage points lower TIR (P < 0.001 for each). Results were similar for mean glucose and hyperglycemia metrics.
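Under a proportional hazards model, the reported per-decrement hazard increases compound multiplicatively. A minimal sketch in plain Python (the two per-10-point hazard ratios are taken from the abstract; everything else is illustrative):

```python
# Hazard ratios per 10-percentage-point decrease in TIR, as reported above.
HR_RETINOPATHY_PER_10 = 1.64       # 64% increase (95% CI 51-78)
HR_MICROALBUMINURIA_PER_10 = 1.40  # 40% increase (95% CI 25-56)

def hazard_ratio(hr_per_10: float, tir_drop_points: float) -> float:
    """Hazard ratio for a given drop in TIR, assuming the proportional
    hazards effect compounds multiplicatively per 10-point decrement."""
    return hr_per_10 ** (tir_drop_points / 10)

# A participant with 20 percentage points lower TIR than a comparator:
print(round(hazard_ratio(HR_RETINOPATHY_PER_10, 20), 2))       # -> 2.69
print(round(hazard_ratio(HR_MICROALBUMINURIA_PER_10, 20), 2))  # -> 1.96
```

So a 20-point TIR deficit corresponds to roughly a 2.7-fold retinopathy hazard, which is how per-decrement hazard ratios are conventionally extrapolated.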
Based on these results, a compelling case can be made that TIR is strongly associated with the risk of microvascular complications and should be an acceptable end point for clinical trials. Although hemoglobin A1c remains a valuable outcome metric in clinical trials, TIR and other glycemic metrics, especially when measured with continuous glucose monitoring, add value as outcome measures in many studies.
Objectives: To study the impact of blinding on estimated treatment effects, and their variation between trials; differentiating between blinding of patients, healthcare providers, and observers; detection bias and performance bias; and types of outcome (the MetaBLIND study).

Design: Meta-epidemiological study.

Data source: Cochrane Database of Systematic Reviews (2013-14).

Eligibility criteria for selecting studies: Meta-analyses with both blinded and non-blinded trials on any topic.

Review methods: Blinding status was retrieved from trial publications and authors, and results retrieved automatically from the Cochrane Database of Systematic Reviews. Bayesian hierarchical models estimated the average ratio of odds ratios (ROR), and estimated the increases in heterogeneity between trials, for non-blinded trials (or of unclear status) versus blinded trials. Secondary analyses adjusted for adequacy of concealment of allocation, attrition, and trial size, and explored the association between outcome subjectivity (high, moderate, low) and average bias. An ROR lower than 1 indicated exaggerated effect estimates in trials without blinding.

Results: The study included 142 meta-analyses (1153 trials). The ROR for lack of blinding of patients was 0.91 (95% credible interval 0.61 to 1.34) in 18 meta-analyses with patient reported outcomes, and 0.98 (0.69 to 1.39) in 14 meta-analyses with outcomes reported by blinded observers. The ROR for lack of blinding of healthcare providers was 1.01 (0.84 to 1.19) in 29 meta-analyses with healthcare provider decision outcomes (eg, readmissions), and 0.97 (0.64 to 1.45) in 13 meta-analyses with outcomes reported by blinded patients or observers. The ROR for lack of blinding of observers was 1.01 (0.86 to 1.18) in 46 meta-analyses with subjective observer reported outcomes, with no clear impact of degree of subjectivity. Information was insufficient to determine whether lack of blinding was associated with increased heterogeneity between trials. The ROR for trials not reported as double blind versus those that were double blind was 1.02 (0.90 to 1.13) in 74 meta-analyses.

Conclusion: No evidence was found for an average difference in estimated treatment effect between trials with and without blinded patients, healthcare providers, or outcome assessors. These results could reflect that blinding is less important than often believed, or could reflect meta-epidemiological study limitations, such as residual confounding or imprecision. At this stage, replication of this study is suggested, and blinding should remain a methodological safeguard in trials.
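The ratio of odds ratios can be illustrated with a toy computation. A minimal sketch in plain Python (the 2×2 counts are invented for illustration; the actual study estimated the average ROR with Bayesian hierarchical models, not this direct ratio):

```python
def odds_ratio(events_trt, no_events_trt, events_ctl, no_events_ctl):
    """Odds ratio from a 2x2 table of events by treatment arm."""
    return (events_trt * no_events_ctl) / (no_events_trt * events_ctl)

# Hypothetical pooled counts within one meta-analysis (made-up numbers):
or_blinded = odds_ratio(30, 70, 40, 60)    # trials with blinded patients
or_unblinded = odds_ratio(24, 76, 42, 58)  # trials without blinding

# Ratio of odds ratios: values below 1 mean the non-blinded trials gave
# more exaggerated (smaller) effect estimates for a beneficial treatment.
ror = or_unblinded / or_blinded
print(round(ror, 2))  # -> 0.68
```

An ROR of 0.68 in this toy example would indicate substantial exaggeration in the non-blinded trials; the MetaBLIND estimates above cluster near 1.0.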
The US National Cancer Institute (NCI), in collaboration with scientists representing multiple areas of expertise relevant to 'omics'-based test development, has developed a checklist of criteria that can be used to determine the readiness of omics-based tests for guiding patient care in clinical trials. The checklist criteria cover issues relating to specimens, assays, mathematical modelling, clinical trial design, and ethical, legal and regulatory aspects. Funding bodies and journals are encouraged to consider the checklist, which they may find useful for assessing study quality and evidence strength. The checklist will be used to evaluate proposals for NCI-sponsored clinical trials in which omics tests will be used to guide therapy.
Christian Lienhardt and co-authors discuss the conclusions of the PLOS Medicine Collection on advances in clinical trial design for development of new tuberculosis treatments.
High quality protocols facilitate proper conduct, reporting, and external review of clinical trials. However, the completeness of trial protocols is often inadequate. To help improve the content and quality of protocols, an international group of stakeholders developed the SPIRIT 2013 Statement (Standard Protocol Items: Recommendations for Interventional Trials). The SPIRIT Statement provides guidance in the form of a checklist of recommended items to include in a clinical trial protocol. This SPIRIT 2013 Explanation and Elaboration paper provides important information to promote full understanding of the checklist recommendations. For each checklist item, we provide a rationale and detailed description; a model example from an actual protocol; and relevant references supporting its importance. We strongly recommend that this explanatory paper be used in conjunction with the SPIRIT Statement. A website of resources is also available (www.spirit-statement.org). The SPIRIT 2013 Explanation and Elaboration paper, together with the Statement, should help with the drafting of trial protocols. Complete documentation of key trial elements can facilitate transparency and protocol review for the benefit of all stakeholders.
The clinical research enterprise is not producing the evidence decision makers arguably need in a timely and cost effective manner; research currently involves the use of labor-intensive parallel systems that are separate from clinical care. The emergence of pragmatic clinical trials (PCTs) poses a possible solution: these large-scale trials are embedded within routine clinical care and often involve cluster randomization of hospitals, clinics, primary care providers, etc. Interventions can be implemented by health system personnel through usual communication channels and quality improvement infrastructure, and data collected as part of routine clinical care. However, experience with these trials is nascent, and best practices regarding design, operational, analytic, and reporting methodologies are undeveloped.
To strengthen the national capacity to implement cost-effective, large-scale PCTs, the Common Fund of the National Institutes of Health created the Health Care Systems Research Collaboratory (Collaboratory) to support the design, execution, and dissemination of a series of demonstration projects using a pragmatic research design.
In this article, we will describe the Collaboratory, highlight some of the challenges encountered and solutions developed thus far, and discuss remaining barriers and opportunities for large-scale evidence generation using PCTs.
A planning phase is critical, and even with careful planning, new challenges arise during execution; comparisons between arms can be complicated by unanticipated changes. Early and ongoing engagement with both health care system leaders and front-line clinicians is critical for success. There is also marked uncertainty when applying existing ethical and regulatory frameworks to PCTs, and using existing electronic health records for data capture adds complexity.
Failure to report the results of a clinical trial can distort the evidence base for clinical practice, breaches researchers' ethical obligations to participants, and represents an important source of research waste. The Food and Drug Administration Amendments Act (FDAAA) of 2007 now requires sponsors of applicable trials to report their results directly onto ClinicalTrials.gov within 1 year of completion. The first trials covered by the Final Rule of this act became due to report results in January, 2018. In this cohort study, we set out to assess compliance.
We downloaded data for all registered trials on ClinicalTrials.gov each month from March, 2018, to September, 2019. All cross-sectional analyses in this manuscript were performed on data extracted from ClinicalTrials.gov on Sept 16, 2019; monthly trends analysis used archived data closest to the 15th day of each month from March, 2018, to September, 2019. Our study cohort included all applicable trials due to report results under FDAAA. We excluded all non-applicable trials, those not yet due to report, and those given a certificate allowing for delayed reporting. A trial was considered reported if results had been submitted and were either publicly available, or undergoing quality control review at ClinicalTrials.gov. A trial was considered compliant if these results were submitted within 1 year of the primary completion date, as required by the legislation. We described compliance with the FDAAA 2007 Final Rule, assessed trial characteristics associated with results reporting using logistic regression models, described sponsor-level reporting, examined trends in reporting, and described time-to-report using the Kaplan-Meier method.
4209 trials were due to report results; 1722 (40·9%; 95% CI 39·4–42·2) did so within the 1-year deadline. 2686 (63·8%; 62·4–65·3) trials had results submitted at any time. Compliance has not improved since July, 2018. Industry sponsors were significantly more likely to be compliant than non-industry, non-US Government sponsors (odds ratio [OR] 3·08, 95% CI 2·52–3·77), and sponsors running large numbers of trials were significantly more likely to be compliant than smaller sponsors (OR 11·84, 95% CI 9·36–14·99). The median delay from primary completion date to submission date was 424 days (95% CI 412–435), 59 days longer than the legal reporting requirement of 1 year.
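The two compliance measures used here, results submitted at any time versus submitted within the 1-year statutory deadline, can be sketched directly from trial dates. A minimal illustration in plain Python (the trial records and dates are invented; the study itself worked from monthly ClinicalTrials.gov snapshots):

```python
from datetime import date, timedelta

# Hypothetical trial records: (primary completion date, results submission
# date, or None if results were never submitted). All dates are made up.
trials = [
    (date(2018, 1, 10), date(2018, 9, 1)),   # within 1 year -> compliant
    (date(2018, 2, 5),  date(2019, 8, 20)),  # late -> reported, non-compliant
    (date(2018, 3, 15), None),               # never reported
    (date(2018, 4, 1),  date(2019, 3, 20)),  # within 1 year -> compliant
]

# FDAAA Final Rule: results due within 1 year of primary completion.
DEADLINE = timedelta(days=365)

reported = sum(1 for _, sub in trials if sub is not None)
compliant = sum(1 for done, sub in trials
                if sub is not None and sub - done <= DEADLINE)

print(f"reported at any time: {reported}/{len(trials)}")     # 3/4
print(f"compliant (within 1 year): {compliant}/{len(trials)}")  # 2/4
```

The same distinction explains why the any-time figure (63·8%) exceeds the compliance figure (40·9%) in the results above.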
Compliance with the FDAAA 2007 is poor, and not improving. To our knowledge, this is the first study to fully assess compliance with the Final Rule of the FDAAA 2007. Poor compliance is likely to reflect lack of enforcement by regulators. Effective enforcement and action from sponsors is needed; until then, open public audit of compliance for each individual sponsor may help. We will maintain updated compliance data for each individual sponsor and trial at fdaaa.trialstracker.net.
Funding: Laura and John Arnold Foundation.
The number and diversity of cancer therapeutics in the pipeline has increased over the past decade due to an enhanced understanding of cancer biology and the identification of novel therapeutic targets. At the same time, the cost of bringing new drugs to market and the regulatory burdens associated with clinical drug development have progressively increased. The finite number of eligible patients and limited financial resources available to evaluate promising new therapeutics represent rate-limiting factors in the effort to translate preclinical discoveries into the next generation of standard therapeutic approaches. Optimal use of resources requires understanding and ultimately addressing inefficiencies in the cancer clinical trials system. Prior analyses have demonstrated that a large proportion of trials initiated by the National Cancer Institute (NCI) Cooperative Group system are never completed. While NCI Cooperative Group trials are important, they represent only a small proportion of all cancer clinical trials performed. Herein, we explore the problem of cancer clinical trials that fail to complete within the broader cancer clinical trials enterprise. Among 7776 phase II-III adult cancer clinical trials initiated between 2005 and 2011, we found a seven-year cumulative incidence of failure to complete of approximately 20% (95% confidence interval = 18% to 22%). Nearly 48000 patients were enrolled in trials that failed to complete. These trials likely contribute little to the scientific knowledge base, divert resources and patients from answering other critical questions, and represent a barrier to progress.
Conventionally, evaluation of a new drug, A, is done in three phases. Phase I is based on toxicity to determine a "maximum tolerable dose" (MTD) of A, phase II is conducted to decide whether A at the MTD is promising in terms of response probability, and if so a large randomized phase III trial is conducted to compare A to a control treatment, C, usually based on survival time or progression free survival time. It is widely recognized that this paradigm has many flaws. A recent approach combines the first two phases by conducting a phase I-II trial, which chooses an optimal dose based on both efficacy and toxicity, and evaluation of A at the selected optimal phase I-II dose then is done in a phase III trial. This paper proposes a new design paradigm, motivated by the possibility that the optimal phase I-II dose may not maximize mean survival time with A. We propose a hybridized design, which we call phase I-II/III, that combines phase I-II and phase III by allowing the chosen optimal phase I-II dose of A to be re-optimized based on survival time data from phase I-II patients and the first portion of phase III. The phase I-II/III design uses adaptive randomization in phase I-II, and relies on a mixture model for the survival time distribution as a function of efficacy, toxicity, and dose. A simulation study is presented to evaluate the phase I-II/III design and compare it to the usual approach that does not re-optimize the dose of A in phase III.
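The adaptive randomization step in such a design can be sketched in the abstract: assign the next patient to dose d with probability proportional to the current estimate of that dose's desirability. A minimal sketch in plain Python (the utility numbers and the simple proportional weighting are assumptions for illustration; the actual design derives these quantities from a posterior efficacy-toxicity-survival model):

```python
import random

def randomization_probs(utilities):
    """Randomization probabilities proportional to estimated dose utilities
    (a simple proportional scheme; real phase I-II designs use posterior
    quantities from an efficacy-toxicity model)."""
    total = sum(utilities)
    return [u / total for u in utilities]

# Hypothetical posterior mean utilities for four candidate doses:
utilities = [0.10, 0.35, 0.40, 0.15]
probs = randomization_probs(utilities)
print([round(p, 2) for p in probs])

# Assign the next patient by weighted random draw:
random.seed(0)
dose_index = random.choices(range(len(probs)), weights=probs)[0]
print("assign dose index:", dose_index)
```

Adaptive randomization of this kind concentrates accrual on the doses currently estimated to be best while still exploring the others, which is what allows the phase I-II/III design to re-optimize the dose later.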