Hazard ratios can be approximated from data extracted from published Kaplan–Meier
curves. Recently, this curve-based approach has been extended beyond hazard-ratio
approximation to the reconstruction of time-to-event data at the
individual level. In this article, we introduce a command,
ipdfc, that implements the reconstruction method to
convert Kaplan–Meier curves into time-to-event data. We give examples to
illustrate how to use the command.
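The core of curve-based reconstruction can be sketched as follows. This is a simplified illustration of the general idea (not the exact algorithm behind ipdfc): given digitized step-function coordinates from a published Kaplan–Meier plot and the initial number at risk, it inverts the Kaplan–Meier product-limit formula to recover approximate event counts, assuming no censoring occurs within the digitized steps. The function name and inputs are illustrative.

```python
def reconstruct_ipd(times, surv, n_risk0):
    """Toy sketch: recover approximate (time, event) pairs from a
    digitized Kaplan-Meier step function, assuming no censoring
    before the final time point."""
    ipd = []              # reconstructed (time, event-indicator) pairs
    n_risk = n_risk0      # number at risk at the start of follow-up
    prev_s = 1.0
    for t, s in zip(times, surv):
        # KM: S(t) = S(t-) * (1 - d/n)  =>  d = n * (1 - S(t)/S(t-))
        d = round(n_risk * (1 - s / prev_s))
        ipd += [(t, 1)] * d          # d events at the drop time t
        n_risk -= d
        prev_s = s
    ipd += [(times[-1], 0)] * n_risk  # survivors censored at last time
    return ipd
```

With `times = [1.0, 2.0, 3.0]`, `surv = [0.8, 0.6, 0.6]`, and 10 patients at risk, this yields two events at t = 1, two at t = 2, and six observations censored at t = 3. The published Guyot-style algorithm additionally exploits the reported numbers at risk to apportion censoring within intervals.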
In observational studies with censored data, exposure‐outcome associations are commonly measured with adjusted hazard ratios from multivariable Cox proportional hazards models. The difference in restricted mean survival times (RMSTs) up to a pre‐specified time point is an alternative measure that offers a clinically meaningful interpretation. Several regression‐based methods exist to estimate an adjusted difference in RMSTs, but they digress from the model‐free method of taking the area under the survival function. We derive the adjusted RMST by integrating an adjusted Kaplan‐Meier estimator with inverse probability weighting (IPW). The adjusted difference in RMSTs is the area between the two IPW‐adjusted survival functions. In a Monte Carlo‐type simulation study, we demonstrate that the proposed estimator performs as well as two regression‐based approaches: the ANCOVA‐type method of Tian et al. and the pseudo‐observation method of Andersen et al. We illustrate the methods by reexamining the association between total cholesterol and the 10‐year risk of coronary heart disease in the Framingham Heart Study.
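The estimator described above combines two standard ingredients: a weighted Kaplan–Meier curve and the area under it up to a horizon tau. The sketch below shows that combination under stated simplifications (weights supplied by the caller, e.g. inverse propensity scores; no tied handling beyond summing weights at each event time); it is an illustration of the general idea, not the authors' exact implementation.

```python
import numpy as np

def ipw_km_rmst(time, event, w, tau):
    """IPW-adjusted Kaplan-Meier survival at tau and the restricted
    mean survival time up to tau (area under the step function).
    A minimal sketch: w holds user-supplied inverse-probability weights."""
    order = np.argsort(time)
    time, event, w = time[order], event[order], w[order]
    s, rmst, last_t = 1.0, 0.0, 0.0
    for t in np.unique(time[event == 1]):   # distinct event times
        if t > tau:
            break
        at_risk = w[time >= t].sum()                  # weighted risk set
        d = w[(time == t) & (event == 1)].sum()       # weighted events
        rmst += s * (t - last_t)     # rectangle of area before the drop
        s *= 1.0 - d / at_risk       # weighted product-limit step
        last_t = t
    rmst += s * (tau - last_t)       # remaining area up to tau
    return s, rmst
```

With unit weights this reduces to the ordinary Kaplan–Meier RMST; the adjusted difference in RMSTs from the abstract is then the difference of two such areas computed with group-specific IPW weights.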
In many biomedical studies, the event of interest can occur more than once in a participant. These events are termed recurrent events. However, the majority of analyses focus only on time to the first event, ignoring the subsequent events. Several statistical models have been proposed for analysing multiple events. In this paper we explore and illustrate several modelling techniques for analysis of recurrent time-to-event data, including conditional models for multivariate survival data (AG, PWP-TT and PWP-GT), marginal means/rates models, frailty and multi-state models. We also provide a tutorial for analysing this type of data with three widely used statistical software programmes. Different approaches and software are illustrated using data from a bladder cancer project and from a study on lower respiratory tract infection in children in Brazil. Finally, we make recommendations for modelling strategy selection for analysis of recurrent event data.
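The conditional models named above (AG, PWP-TT) share a common data layout: each subject contributes one counting-process row `(start, stop, status]` per at-risk interval, and the PWP models additionally stratify by the event number. A small sketch of that transformation, with hypothetical field names, might look like this:

```python
def to_counting_process(subject_events, follow_up):
    """Convert per-subject recurrent event times into the (start, stop,
    status) rows used by Andersen-Gill-type models. The 'stratum'
    column (event number) is what PWP total-time models stratify on;
    the AG model ignores it. Field names are illustrative."""
    rows = []
    for sid, events in subject_events.items():
        start = 0.0
        for k, t in enumerate(sorted(events), start=1):
            rows.append({"id": sid, "start": start, "stop": t,
                         "status": 1, "stratum": k})
            start = t
        if start < follow_up[sid]:           # censored tail interval
            rows.append({"id": sid, "start": start,
                         "stop": follow_up[sid], "status": 0,
                         "stratum": len(events) + 1})
    return rows
```

A subject with events at times 2 and 5 who is followed to time 8 yields three rows: (0, 2] with an event, (2, 5] with an event, and (5, 8] censored. Gap-time (PWP-GT) variants reset the clock so each row runs from 0 to the gap length instead.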
Currently available risk prediction methods are limited in their ability to deal with complex, heterogeneous, and longitudinal data such as that available in primary care records, or in their ability to deal with multiple competing risks. This paper develops a novel deep learning approach that is able to successfully address current limitations of standard statistical approaches such as landmarking and joint modeling. Our approach, which we call Dynamic-DeepHit, flexibly incorporates the available longitudinal data comprising various repeated measurements (rather than only the last available measurements) in order to issue dynamically updated survival predictions for one or multiple competing risks. Dynamic-DeepHit learns the time-to-event distributions without the need to make any assumptions about the underlying stochastic models for the longitudinal and the time-to-event processes. Thus, unlike existing works in statistics, our method is able to learn data-driven associations between the longitudinal data and the various associated risks without underlying model specifications. We demonstrate the power of our approach by applying it to a real-world longitudinal dataset from the U.K. Cystic Fibrosis Registry, which includes a heterogeneous cohort of 5883 adult patients with annual follow-ups between 2009 and 2015. The results show that Dynamic-DeepHit provides a drastic improvement in discriminating individual risks of different forms of failures due to cystic fibrosis. Furthermore, our analysis utilizes post-processing statistics that provide clinical insight by measuring the influence of each covariate on risk predictions and the temporal importance of longitudinal measurements, thereby enabling us to identify covariates that are influential for different competing risks.
Many longitudinal studies are designed to monitor participants for major events related to the progression of diseases. Data arising from such longitudinal studies are usually subject to interval censoring, since the events are only known to occur between two monitoring visits. In this work, we propose a new method to handle interval‐censored multistate data within a proportional hazards model framework, where the hazard rate of events is modeled by a nonparametric function of time and the covariates affect the hazard rate proportionally. The main idea of this method is to simplify the likelihood functions of a discrete‐time multistate model through an approximation and the application of data augmentation techniques, where the assumed presence of censored information facilitates a simpler parameterization. The expectation‐maximization algorithm is then used to estimate the parameters in the model. The performance of the proposed method is evaluated by numerical studies. Finally, the method is employed to analyze a dataset on tracking the advancement of coronary allograft vasculopathy following heart transplantation.
Joint models for longitudinal and survival data (JMLSs) have been widely used in recent years to investigate the relationship between longitudinal and survival data in clinical trials. However, existing studies mainly focus on independent survival data. In many clinical trials, survival data may be bivariately correlated. To this end, this paper proposes a novel JMLS accommodating multivariate longitudinal and bivariate correlated time‐to‐event data. Nonparametric marginal survival hazard functions are transformed to bivariate normal random variables. Bayesian penalized splines are employed to approximate unknown baseline hazard functions. Incorporating the Metropolis‐Hastings algorithm into the Gibbs sampler, we develop a Bayesian adaptive Lasso method to simultaneously estimate parameters and baseline hazard functions, and to select important predictors in the considered JMLS. Simulation studies and an example taken from the International Breast Cancer Study Group are used to illustrate the proposed methodologies.
The recent 21st Century Cures Act propagates innovations to accelerate the discovery, development, and delivery of 21st century cures. It includes the broader application of Bayesian statistics and the use of evidence from clinical expertise. An example of the latter is the use of trial‐external (or historical) data, which promises more efficient or ethical trial designs. We propose a Bayesian meta‐analytic approach to leverage historical data for time‐to‐event endpoints, which are common in oncology and cardiovascular diseases. The approach is based on a robust hierarchical model for piecewise exponential data. It allows for various degrees of between‐trial heterogeneity and for leveraging individual as well as aggregate data. An ovarian carcinoma trial and a non‐small cell cancer trial illustrate methodological and practical aspects of leveraging historical data for the analysis and design of time‐to‐event trials.
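The piecewise exponential model underlying this approach assumes a constant hazard within each of a set of pre-specified time intervals, so the survival function is the exponential of a piecewise-linear cumulative hazard. A minimal sketch of that building block (not the authors' hierarchical model itself):

```python
import math

def piecewise_surv(t, cuts, hazards):
    """Survival function under a piecewise-constant hazard.
    cuts: interval start points beginning at 0, e.g. [0.0, 1.0]
    hazards: one constant hazard per interval; the last interval
    extends to infinity. S(t) = exp(-cumulative hazard at t)."""
    cum = 0.0
    for j, h in enumerate(hazards):
        lo = cuts[j]
        hi = cuts[j + 1] if j + 1 < len(cuts) else float("inf")
        if t <= lo:
            break
        cum += h * (min(t, hi) - lo)   # hazard h accrues over [lo, hi)
    return math.exp(-cum)
```

For example, with hazard 0.5 on [0, 1) and 1.0 thereafter, S(2) = exp(-(0.5 + 1.0)) = exp(-1.5). In the meta-analytic setting, the interval-specific log-hazards are the parameters given a robust hierarchical prior across historical and current trials.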
Multiple randomized controlled trials, each comparing a subset of competing interventions, can be synthesized by means of a network meta‐analysis to estimate relative treatment effects between all interventions in the evidence base. Here we focus on estimating relative treatment effects for time‐to‐event outcomes. Cancer treatment effectiveness is frequently quantified by analyzing overall survival (OS) and progression‐free survival (PFS). We introduce a method for the joint network meta‐analysis of PFS and OS that is based on a time‐inhomogeneous tri‐state (stable, progression, and death) Markov model where time‐varying transition rates and relative treatment effects are modeled with parametric survival functions or fractional polynomials. The data needed to run these analyses can be extracted directly from published survival curves. We demonstrate its use by applying the methodology to a network of trials for the treatment of non‐small‐cell lung cancer. The proposed approach allows the joint synthesis of OS and PFS, relaxes the proportional hazards assumption, extends to a network of more than two treatments, and simplifies the parameterization of decision and cost‐effectiveness analyses.
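The tri-state structure links the two endpoints directly: PFS is the probability of still being in the stable state, and OS is the probability of not having reached death. Given time-varying transition rates, state-occupancy probabilities can be obtained by numerically solving the forward equations. The sketch below uses a simple forward-Euler step with placeholder hazard functions; it illustrates the model structure, not the authors' estimation procedure.

```python
def tristate_occupancy(h01, h02, h12, t_max, dt=0.01):
    """Forward-Euler solution of a time-inhomogeneous three-state
    Markov model with states stable (0), progression (1), death (2).
    h01, h02, h12 are hazard functions of time for the transitions
    stable->progression, stable->death, progression->death.
    Returns the PFS and OS curves: PFS = P(stable), OS = 1 - P(death)."""
    p = [1.0, 0.0, 0.0]                 # everyone starts in 'stable'
    t, pfs, os_ = 0.0, [1.0], [1.0]
    while t < t_max:
        a, b, c = h01(t), h02(t), h12(t)
        p = [p[0] * (1 - (a + b) * dt),            # leave stable
             p[1] + (p[0] * a - p[1] * c) * dt,    # enter/leave progression
             p[2] + (p[0] * b + p[1] * c) * dt]    # absorb into death
        t += dt
        pfs.append(p[0])
        os_.append(p[0] + p[1])
    return pfs, os_
```

By construction OS is never below PFS, and with constant rates PFS reduces to exp(-(h01 + h02) t). In the paper's framework, the transition rates would instead be parametric or fractional-polynomial functions fitted to data digitized from published OS and PFS curves.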