In this article, I extend the use of probability of success calculations, previously developed for fixed sample size studies, to group sequential designs (GSDs), both for studies planned to be analyzed by standard frequentist techniques and for those using Bayesian approaches. The structure of GSDs lends itself to sequential learning, which in turn allows us to consider how knowledge about the result of an interim analysis can influence our assessment of the study's probability of success. In this article, I build on work by Temple and Robertson, who introduced the idea of conditional probability of success, an idea I also treated in a recent monograph.
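As a minimal illustration of the underlying idea (for a fixed sample size study, not the GSD extension the article develops), probability of success, often called assurance, can be computed by averaging frequentist power over a prior for the treatment effect. All design numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Hypothetical design inputs (illustration only):
n_per_arm = 100                    # patients per arm
sigma = 1.0                        # known SD of the outcome
alpha = 0.025                      # one-sided significance level
prior_mean, prior_sd = 0.3, 0.2    # prior belief about the true effect delta

# Assurance: average the frequentist power of a two-arm z-test over the prior
z_alpha = norm.ppf(1 - alpha)
se = sigma * np.sqrt(2 / n_per_arm)                # SE of the mean difference
delta = rng.normal(prior_mean, prior_sd, 100_000)  # draws from the prior
power_given_delta = norm.cdf(delta / se - z_alpha)
pos = power_given_delta.mean()
print(f"probability of success (assurance): {pos:.3f}")
```

Note that assurance is typically well below the power quoted at the assumed effect, because the prior spreads probability over smaller effects as well.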
When statisticians are uncertain which parametric statistical model to use to analyse experimental data, they will often resort to a non-parametric approach. The purpose of this paper is to provide insight into a simple approach to take when the appropriate parametric model is unclear and a Bayesian analysis is planned. I introduce an approximate, or substitution, likelihood, first proposed by Harold Jeffreys in 1939, and show how to implement the approach, combined with either a non-informative or an informative prior, to provide a random sample from the posterior distribution of the median of the unknown distribution. I first demonstrate the approach with a within-patient bioequivalence design and then show how to extend it to a parallel group design.
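Jeffreys' substitution likelihood for the median treats the number of observations at or below a candidate value θ as binomial with probability 1/2, so L(θ) ∝ C(n, s)(1/2)^n with s = #{x_i ≤ θ}. A minimal sketch with a flat (non-informative) prior on simulated data, using a simple grid approximation that is an assumption of this sketch, not the paper's implementation:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=0.4, size=30)  # skewed sample, model unknown
n = len(data)

# Substitution likelihood on a grid of candidate medians:
# L(theta) proportional to C(n, s), s = number of observations <= theta
theta_grid = np.linspace(data.min(), data.max(), 2000)
s = np.searchsorted(np.sort(data), theta_grid, side="right")
log_lik = np.array([np.log(comb(n, si)) for si in s])

log_post = log_lik                    # flat prior: posterior = likelihood
post = np.exp(log_post - log_post.max())
post /= post.sum()

# Random sample from the (discretised) posterior of the median
draws = rng.choice(theta_grid, size=10_000, p=post)
print("posterior median estimate:", round(float(np.median(draws)), 3))
```

An informative prior would simply be added to `log_post` on the same grid before normalising.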
The use of Bayesian approaches in the regulated world of pharmaceutical drug development has not been without its difficulties or its critics. The recent Food and Drug Administration regulatory guidance on the use of Bayesian approaches in device submissions has mandated an investigation into the operating characteristics of Bayesian approaches and has suggested how to make adjustments so that the proposed approaches are, in a sense, calibrated. In this paper, I present examples of frequentist calibration of Bayesian procedures and argue that we need not necessarily aim for perfect calibration but should be allowed to use procedures which are well calibrated, a position supported by the guidance.
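The kind of calibration exercise described can be sketched by computing, via simulation, the frequentist type I error of a Bayesian decision rule. The setting below (normal outcome, conjugate informative prior, success declared when the posterior probability of a positive effect exceeds a threshold) is an illustrative assumption, not an example taken from the paper:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Hypothetical single-arm setting (illustration only):
n, sigma = 50, 1.0
prior_mean, prior_sd = 0.2, 0.3    # optimistic informative prior on the effect
threshold = 0.975                  # success if P(delta > 0 | data) > threshold

def posterior_prob_positive(xbar):
    # Conjugate normal update for the effect delta given the sample mean
    prec = 1 / prior_sd**2 + n / sigma**2
    post_mean = (prior_mean / prior_sd**2 + n * xbar / sigma**2) / prec
    post_sd = np.sqrt(1 / prec)
    return 1 - norm.cdf(0, loc=post_mean, scale=post_sd)

# Operating characteristic: type I error when the true effect is exactly zero
xbars = rng.normal(0.0, sigma / np.sqrt(n), size=100_000)
type1 = np.mean(posterior_prob_positive(xbars) > threshold)
print(f"frequentist type I error of the Bayesian rule: {type1:.4f}")
```

With the optimistic prior the rule's type I error exceeds the nominal 0.025; calibration would raise the threshold (or temper the prior) until the simulated error is acceptable.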
The continual reassessment method (CRM) is a model-based design for phase I trials which aims to find the maximum tolerated dose (MTD) of a new therapy. The CRM has been shown to be more accurate in targeting the MTD than traditional rule-based approaches such as the 3 + 3 design, which is used in most phase I trials. Furthermore, the CRM has been shown to assign more trial participants at or close to the MTD than the 3 + 3 design. However, uptake of the CRM in clinical research has been remarkably slow, putting trial participants, drug development and patients at risk. Barriers to wider use of the CRM have been identified, most notably a lack of knowledge amongst clinicians and statisticians on how to apply new designs in practice. No recent tutorial, guidelines, or recommendations for clinicians on conducting dose-finding studies using the CRM are available, and practical resources to support clinicians considering the CRM for their trials are scarce.
To help overcome these barriers, we present a structured framework for designing a dose-finding study using the CRM. We give recommendations for key design parameters and advise on conducting pre-trial simulation work to tailor the design to a specific trial. We provide practical tools to support clinicians and statisticians, including software recommendations, and template text and tables that can be edited and inserted into a trial protocol. We also give guidance on how to conduct and report dose-finding studies using the CRM.
An initial set of design recommendations is provided to kick-start the design process. To complement these and the additional resources, we describe two published dose-finding trials that used the CRM. We discuss their designs, how they were conducted and analysed, and compare them to what would have happened under a 3 + 3 design.
The framework and resources we provide are aimed at clinicians and statisticians new to the CRM design. Provision of key resources in this contemporary guidance paper will hopefully improve the uptake of the CRM in phase I dose-finding trials.
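As a hedged sketch of the machinery the framework is built around: a common one-parameter CRM uses a power model p_i = s_i^exp(a) over a "skeleton" of prior toxicity guesses, updates the parameter a by Bayes' theorem after each cohort, and recommends the dose whose posterior toxicity estimate is closest to the target. The skeleton, prior, and data below are invented for illustration and are not the recommendations of the paper:

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

# Illustrative one-parameter power-model CRM (all inputs are assumptions):
skeleton = np.array([0.05, 0.12, 0.25, 0.40, 0.55])  # prior DLT guesses per dose
target = 0.25                                        # target DLT probability
n_tox = np.array([0, 0, 1, 0, 0])                    # DLTs observed per dose so far
n_pat = np.array([3, 3, 3, 0, 0])                    # patients treated per dose

a = np.linspace(-4, 4, 4001)                         # grid over the model parameter
p = skeleton[None, :] ** np.exp(a[:, None])          # dose-toxicity curves s_i^exp(a)
log_lik = (n_tox * np.log(p) + (n_pat - n_tox) * np.log1p(-p)).sum(axis=1)
prior = norm.pdf(a, 0, np.sqrt(1.34))                # a commonly used vague prior
post = np.exp(log_lik - log_lik.max()) * prior
post /= trapezoid(post, a)                           # normalise on the grid

# Posterior mean toxicity at each dose; recommend the dose closest to target
post_tox = trapezoid(post[:, None] * p, a, axis=0)
rec = int(np.argmin(np.abs(post_tox - target)))
print("posterior DLT estimates:", np.round(post_tox, 3), "-> recommend dose", rec + 1)
```

In practice this update is repeated after every cohort, subject to the safety constraints and design parameters the framework recommends fixing in advance through simulation.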
Estimates of the risk of stroke recurrence are highly variable and focused on the short term. A systematic review and meta-analysis was conducted to estimate the pooled cumulative risk of stroke recurrence.
Studies reporting cumulative risk of recurrence after first-ever stroke were identified using electronic databases and by manually searching relevant journals and conference abstracts. Overall cumulative risks of stroke recurrence at 30 days and 1, 5, and 10 years after first stroke were calculated, and analyses for heterogeneity were conducted. A Weibull model was fitted to the risk of stroke recurrence in the individual studies, and pooled estimates were calculated with 95% CIs.
Sixteen studies were identified, of which 13 reported cumulative risk of stroke recurrence in 9115 survivors. The pooled cumulative risk was 3.1% (95% CI, 1.7-4.4) at 30 days, 11.1% (95% CI, 9.0-13.3) at 1 year, 26.4% (95% CI, 20.1-32.8) at 5 years, and 39.2% (95% CI, 27.2-51.2) at 10 years after initial stroke. Substantial heterogeneity was found at all time points. This study also demonstrates a temporal reduction in the 5-year risk of stroke recurrence, from 32% to 16.2%, across the studies.
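The Weibull modelling step can be illustrated by fitting a Weibull cumulative-risk curve, F(t) = 1 − exp(−(t/scale)^shape), to the four pooled point estimates quoted above. This is a sketch of the idea via simple least squares, not the meta-analytic pooling actually performed:

```python
import numpy as np
from scipy.optimize import curve_fit

# Pooled cumulative risks of stroke recurrence reported in the abstract
t = np.array([30 / 365.25, 1.0, 5.0, 10.0])      # years since first stroke
risk = np.array([0.031, 0.111, 0.264, 0.392])    # cumulative risk

def weibull_cum_risk(t, shape, scale):
    # Weibull cumulative-risk (distribution) function
    return 1 - np.exp(-(t / scale) ** shape)

(shape, scale), _ = curve_fit(weibull_cum_risk, t, risk, p0=(0.7, 20.0))
fitted_5y = float(weibull_cum_risk(5.0, shape, scale))
print(f"shape={shape:.3f}, scale={scale:.1f} years, fitted 5-year risk={fitted_5y:.3f}")
```

A shape parameter below 1 corresponds to a recurrence hazard that is highest early after the first stroke and declines thereafter, consistent with the front-loaded risks reported.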
The cumulative risk of recurrence varies greatly up to 10 years. This may be explained by differences in case mix and changes in secondary prevention over time. However, methodological differences are likely to play an important role, and consensus on definitions would improve the future comparability of estimates and the characterization of groups of stroke survivors at increased risk of recurrence.
Although poor air quality can have a negative impact on human health, studies have shown suboptimal levels of adherence to the health advice associated with air quality alerts. The present study compared the behavioural impact of the UK Daily Air Quality Index (DAQI) with an alternative message format, using a 2 (general population vs. at-risk individuals) × 2 (usual DAQI messages vs. behaviourally enhanced messages) factorial design. Messages were sent via a smartphone application. Eighty-two participants were randomly allocated to the experimental groups. The enhanced messages (targeting message specificity and psychosocial predictors of behaviour change) increased intentions to make permanent behavioural changes to reduce exposure, compared to the control group (V = 0.23). This effect was mediated by a reduced perception of not having enough time to follow the health advice received (b = −0.769, BCa CI −2.588, 0.533). Higher worry about air pollution, perceived severity, perceived efficacy of the recommended behaviour, and self-efficacy were also predictive of self-reported behaviour change at four weeks. In response to a real moderate air quality alert, among those with a pre-existing lung condition, more respondents in the intervention group reported having used their preventer inhaler than in the control group (V = 0.49).
On the other hand, the two message formats performed similarly when intentions were collected in relation to a hypothetical high air pollution scenario, with all groups showing relatively high intentions to change behaviours. This study expands the currently limited understanding of how to improve the behavioural impact of existing air quality alerts.
• Behaviourally enhanced health advice was compared with the UK DAQI advice.
• The enhanced messages targeted specificity and psychosocial predictors of adherence.
• Those who received the enhanced messages considered making permanent changes.
• Specific psychosocial factors were predictive of behaviour change.
• The enhanced messages increased preventer inhaler use.
How to test hypotheses if you must. Grieve, Andrew P. Pharmaceutical Statistics: The Journal of the Pharmaceutical Industry, March/April 2015, Volume 14, Issue 2. Journal Article, Peer reviewed.
Drug development is not the only industrial-scientific enterprise subject to government regulations. In some fields of ecology and environmental science, the application of statistical methods is also regulated by ordinance. Over the past 20 years, ecologists and environmental scientists have argued against the unthinking application of null hypothesis significance tests. More recently, Canadian ecologists have suggested a new approach to significance testing that takes account of the costs of both type I and type II errors. In this paper, we investigate the implications of this approach for testing in drug development and demonstrate that its adoption leads directly to the likelihood principle and Bayesian approaches.
The goal of clinical trial research is to deliver safe and efficacious new treatments to patients in need in a timely and cost-effective manner. There is precedent in using historical control data to reduce the number of concurrent control subjects required in developing medicines for rare diseases and other areas of unmet need. The purpose of this paper is to provide a review, for a regulatory and industry audience, of the current state of relevant statistical methods, of the uptake of these approaches, and of the opportunities for broader use of historical data in confirmatory clinical trials. General principles to consider when incorporating historical control data in a new trial are presented. Bayesian and frequentist approaches are outlined, including how the operating characteristics for such a trial can be obtained. Finally, examples of approved new treatments that incorporated historical controls in their confirmatory trials are presented.
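One widely used Bayesian device for borrowing historical control data is the power prior, which raises the historical likelihood to a fixed weight a0 between 0 and 1. A minimal sketch for a binomial control response rate, with invented counts (this is one of several borrowing methods such reviews cover, not the paper's specific recommendation):

```python
# Hypothetical counts for illustration only:
hist_events, hist_n = 30, 100   # historical control arm: 30/100 responders
curr_events, curr_n = 8, 25     # concurrent control arm in the new trial

# Fixed-weight power prior: a0 = 1 pools the historical data fully,
# a0 = 0 discards it; intermediate values discount it.
a0 = 0.5
a_prior = 1 + a0 * hist_events              # starting from a Beta(1, 1) prior
b_prior = 1 + a0 * (hist_n - hist_events)
a_post = a_prior + curr_events              # conjugate Beta update with new data
b_post = b_prior + (curr_n - curr_events)

post_mean = a_post / (a_post + b_post)
print(f"posterior mean control response rate: {post_mean:.3f}")
print(f"effective historical sample size borrowed: {a0 * hist_n:.0f}")
```

Operating characteristics (type I error, power) for a trial using such a prior are then obtained by simulating trials under a grid of assumed true control rates, including scenarios where the historical and current rates conflict.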
Necrotising fasciitis is a rapidly progressing soft-tissue infection with a low incidence that carries a relevant risk of morbidity and mortality. Although necrotising fasciitis is often fatal in adults, its case fatality rate seems to be lower in children. A highly variable clinical presentation makes the diagnosis challenging, often resulting in misdiagnosis and delayed therapy.
We conducted a protocol-based systematic review to identify specific features of necrotising fasciitis in children aged one month to 17 years. We searched 'PubMed', 'Web of Science' and 'SCOPUS' for relevant literature. Primary outcomes were incidence and case fatality rates in population-based studies, and skin symptoms on presentation. We also assessed signs of systemic illness, causative organisms, predisposing factors, and reconstructive procedures as secondary outcomes.
We included five studies reporting incidence and case fatality rates, two case-control studies, and 298 cases from 195 reports. Incidence rates varied between 0.022 and 0.843 per 100,000 children per year with a case-fatality rate ranging from 0% to 14.3%. The most frequent skin symptoms were erythema (58.7%; 175/298) and swelling (48%; 143/298), whereas all other symptoms occurred in less than 50% of cases. The majority of cases had fever (76.7%; 188/245), but other signs of systemic illness were present in less than half of the cohort. Group-A streptococci accounted for 44.8% (132/298) followed by Gram-negative rods in 29.8% (88/295), while polymicrobial infections occurred in 17.3% (51/295). Extremities were affected in 45.6% (136/298), of which 73.5% (100/136) occurred in the lower extremities. Skin grafts were necessary in 51.6% (84/162) of the pooled cases, while flaps were seldom used (10.5%; 17/162). The vast majority of included reports originate from developed countries.
Clinical suspicion remains the key to diagnosing necrotising fasciitis. A combination of swelling, pain, erythema, and a systemic inflammatory response syndrome might indicate necrotising fasciitis. Incidence and case-fatality rates in children are much lower than in adults, although there seems to be a relevant risk of morbidity, indicated by the high percentage of skin grafts. Systematic multi-institutional research efforts are necessary to improve early diagnosis of necrotising fasciitis.