The progressive censoring plan has achieved significant recognition in recent years. Its generalization, termed the joint progressive censoring scheme, has also received considerable attention from researchers. Mondal and Kundu proposed a balanced two-sample Type-II progressive censoring scheme, in which a life test is conducted to compare the lifetimes of products produced on different lines in an identical experimental environment. This study concentrates on the estimation problem for lifetime Lindley distributions under the classical paradigm within the balanced two-sample Type-II progressive censoring framework. A simulation study is conducted to evaluate the performance of the considered estimation procedures through the mean square error and the average length of the asymptotic confidence interval. The optimal censoring scheme is also formulated based on criteria that rely on Fisher information. Finally, a real-life application is presented for illustrative purposes.
Composite endpoints are very common in clinical research, such as recurrence‐free survival in oncology research, defined as the earliest of either death or disease recurrence. Because of the way data are collected in such studies, component‐wise censoring is common, where, for example, recurrence is an interval‐censored event and death is a right‐censored event. However, a common way to analyze such component‐wise censored composite endpoints is to treat them as right‐censored, with the date at which the non‐fatal event was detected serving as the date the event occurred. This approach is known to introduce upward bias when the Kaplan‐Meier estimator is applied, but has a more complex impact on semi‐parametric regression approaches. In this article we compare the performance of the Cox model estimators for right‐censored data and the Cox model estimators for interval‐censored data in the context of component‐wise censored data where the visit process differs across levels of a covariate of interest, a common scenario in observational data. We additionally examine estimators of the cause‐specific hazard when applied to the individual components of such component‐wise censored composite endpoints. We found that when visit schedules differed according to levels of a covariate of interest, the Cox model estimators for right‐censored data and the estimators for cause‐specific hazards were increasingly biased as the frequency of visits decreased. The Cox model estimator for interval‐censored data with censoring at the last disease‐free date is recommended for use in the presence of differential visit schedules.
In disease settings where study participants are at risk for death and a serious nonfatal event, composite endpoints defined as the time until the earliest of death or the nonfatal event are often used as the primary endpoint in clinical trials. In practice, if the nonfatal event can only be detected at clinic visits and the death time is known exactly, the resulting composite endpoint exhibits “component‐wise censoring.” The standard method used to estimate event‐free survival in this setting fails to account for component‐wise censoring. We apply a kernel smoothing method previously proposed for a marker process in a novel way to produce a nonparametric estimator for event‐free survival that accounts for component‐wise censoring. The key insight that allows us to apply this kernel method is thinking of nonfatal event status as an intermittently observed binary time‐dependent variable rather than thinking of time to the nonfatal event as interval‐censored. We also propose estimators for the probability in state and restricted mean time in state for reversible or irreversible illness‐death models, under component‐wise censoring, and derive their large‐sample properties. We perform a simulation study to compare our method to existing multistate survival methods and apply the methods on data from a large randomized trial studying a multifactor intervention for reducing morbidity and mortality among men at above average risk of coronary heart disease.
Background
Composite time-to-event endpoints are beneficial for assessing related outcomes jointly in clinical trials, but components of the endpoint may have different censoring mechanisms. For example, in the PRagmatic EValuation of evENTs And Benefits of Lipid-lowering in oldEr adults (PREVENTABLE) trial, the composite outcome contains one endpoint that is right censored (all-cause mortality) and two endpoints that are interval censored (dementia and persistent disability). Although Cox regression is an established method for time-to-event outcomes, it is unclear how models perform under differing component-wise censoring schemes for large clinical trial data. The goal of this article is to conduct a simulation study to investigate the performance of Cox models under different scenarios for composite endpoints with component-wise censoring.
Methods
We simulated data by varying the strength and direction of the association between treatment and outcome for the two component types, the proportion of events arising from the components of the outcome (right censored and interval censored), and the method for including the interval-censored component in the Cox model (upper value and midpoint of the interval). Under these scenarios, we compared the treatment effect estimate bias, confidence interval coverage, and power.
Results
Based on the simulation study, Cox models generally have adequate power to achieve statistical significance for comparing treatments for composite outcomes with component-wise censoring. In our simulation study, we did not observe substantive bias for scenarios under the null hypothesis or when the treatment has a similar relative effect on each component outcome. Performance was similar regardless of whether the upper value or the midpoint of the interval-censored part of the composite outcome was used.
Conclusion
Cox regression is a suitable method for analysis of clinical trial data with composite time-to-event endpoints subject to different component-wise censoring mechanisms.
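The two imputation choices compared in the simulations above (upper value versus midpoint of the censoring interval) can be illustrated with a minimal sketch. The function and data below are hypothetical constructions for illustration, not code from the PREVENTABLE analysis:

```python
# Forming a composite time-to-event outcome when one component is right
# censored (death) and another is interval censored (e.g., dementia), using
# either the interval's upper value or its midpoint as the imputed event time.
# All names and data are hypothetical.

def composite_time(death_time, death_event, interval, method="midpoint"):
    """Return (time, event) for the composite endpoint.

    death_time  : follow-up time for the right-censored component
    death_event : 1 if death was observed, 0 if censored
    interval    : (left, right) bracketing the interval-censored event,
                  or None if that event was never detected
    method      : "midpoint" or "upper" imputation for the interval
    """
    if interval is not None:
        left, right = interval
        nonfatal_time = (left + right) / 2 if method == "midpoint" else right
        # The composite event is the earliest of the two components.
        if nonfatal_time <= death_time:
            return nonfatal_time, 1
    return death_time, death_event

# Example: death censored at t=10, dementia detected between visits at t=4 and t=6.
print(composite_time(10.0, 0, (4.0, 6.0), method="midpoint"))  # (5.0, 1)
print(composite_time(10.0, 0, (4.0, 6.0), method="upper"))     # (6.0, 1)
```

The resulting (time, event) pairs would then be passed to a standard Cox model as ordinary right-censored data, which is the approximation whose performance the simulation study evaluates.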
A hybrid censoring scheme is a mixture of Type-I and Type-II censoring schemes. In this review, we first discuss Type-I and Type-II hybrid censoring schemes and associated inferential issues. Next, we present details on developments regarding generalized hybrid censoring and unified hybrid censoring schemes that have been introduced in the literature. Hybrid censoring schemes have been adopted in competing risks set-up and in step-stress modeling and these results are outlined next. Recently, two new censoring schemes, viz., progressive hybrid censoring and adaptive progressive censoring schemes have been introduced in the literature. We discuss these censoring schemes and describe inferential methods based on them, and point out their advantages and disadvantages. Determining an optimal hybrid censoring scheme is an important design problem, and we shed some light on this issue as well. Finally, we present some examples to illustrate some of the results described here. Throughout the article, we mention some open problems and suggest some possible future work for the benefit of readers interested in this area of research.
In studies that assess disease status periodically, time of disease onset is interval censored between visits. Participants who die between two visits may have unknown disease status after their last visit. In this work, we consider an additional scenario where diagnosis requires two consecutive positive tests, such that disease status can also be unknown at the last visit preceding death. We show that this impacts the choice of censoring time for those who die without an observed disease diagnosis. We investigate two classes of models that quantify the effect of risk factors on disease outcome: a Cox proportional hazards model with death as a competing risk and an illness death model that treats disease as a possible intermediate state. We also consider four censoring strategies: participants without observed disease are censored at death (Cox model only), the last visit, the last visit with a negative test, or the second last visit. We evaluate the performance of model and censoring strategy combinations on simulated data with a binary risk factor and illustrate with a real data application. We find that the illness death model with censoring at the second last visit shows the best performance in all simulation settings. Other combinations show bias that varies in magnitude and direction depending on the differential mortality between diseased and disease‐free subjects, the gap between visits, and the choice of the censoring time.
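The visit-based censoring strategies compared above differ only in which time from the visit history is used for a participant who dies without an observed diagnosis. A minimal sketch, with hypothetical names and data:

```python
# Censoring-time strategies for a participant who dies without an observed
# disease diagnosis. `visits` is a chronological list of (time, test_positive)
# pairs; a single positive test is unconfirmed (diagnosis requires two
# consecutive positives). All names and data are hypothetical.

def censoring_time(visits, strategy):
    times = [t for t, _ in visits]
    if strategy == "last_visit":
        return times[-1]
    if strategy == "last_negative":
        # Last visit at which the test was negative.
        return max(t for t, positive in visits if not positive)
    if strategy == "second_last":
        return times[-2]
    raise ValueError(strategy)

visits = [(1.0, False), (2.0, False), (3.0, True)]  # final positive unconfirmed
print(censoring_time(visits, "last_visit"))     # 3.0
print(censoring_time(visits, "last_negative"))  # 2.0
print(censoring_time(visits, "second_last"))    # 2.0
```

The strategies coincide when the last visit is negative; they diverge exactly in the unconfirmed-positive case the abstract highlights, where disease status at the last visit is unknown.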
The win ratio has become a popular method for comparing multiple event data between two groups in clinical cohort studies. The win ratio compares the event data in prioritized order, where the first prioritized event is death and a typical example for the second prioritized event is hospitalization. Literature is sparse on inference for win and loss parameters, including the win ratio, for censored event data. Inference for two prioritized censored event times has been developed for independent right‐censoring. Many clinical studies include recurrent event data such as hospitalizations. In this article, we suggest inference for win‐loss parameters for death and a recurrent event outcome under independent right‐censoring. The small sample properties of the proposed method are studied in a simulation study showing that the variance formula is accurate even for small samples. The method is applied on a data set from a randomized clinical trial.
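The pairwise, prioritized comparison underlying the win ratio can be sketched on fully observed data. This toy version shows only the point estimate with death prioritized over hospitalization count; it does not implement the censoring-adjusted inference for recurrent events discussed above, and all names and data are hypothetical:

```python
# Minimal sketch of the pairwise win-loss comparison behind the win ratio.
# Each patient is (death_time, n_hospitalizations): longer survival wins
# first; if survival is tied, fewer hospitalizations win.

def compare(a, b):
    """Return 1 if patient `a` wins over `b`, -1 if `a` loses, 0 if tied."""
    if a[0] != b[0]:
        return 1 if a[0] > b[0] else -1
    if a[1] != b[1]:
        return 1 if a[1] < b[1] else -1
    return 0

def win_ratio(treated, control):
    """Ratio of treated wins to treated losses over all cross-group pairs."""
    wins = sum(compare(a, b) == 1 for a in treated for b in control)
    losses = sum(compare(a, b) == -1 for a in treated for b in control)
    return wins / losses

treated = [(8.0, 1), (6.0, 0)]
control = [(5.0, 2), (9.0, 0)]
print(win_ratio(treated, control))  # 1.0 (2 wins, 2 losses across 4 pairs)
```

With censoring, not every pair is decidable from the observed data, which is what makes inference for win-loss parameters nontrivial in the setting the abstract addresses.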
Abstract
In this paper, structural properties of (progressive) hybrid censoring schemes are established by studying the possible data scenarios resulting from the hybrid censoring scheme. The results illustrate that the distributions of hybrid censored random variables can be immediately derived from the cases of Type-I and Type-II censored data. Furthermore, it turns out that results in likelihood and Bayesian inference are also obtained directly, which explains the similarities present in the probabilistic and statistical analysis of these censoring schemes. The power of the approach is illustrated by applying it to the quite complex unified Type-II (progressive) hybrid censoring scheme. Finally, it is shown that the approach is not restricted to (progressively Type-II censored) order statistics and that it can be extended to almost any kind of ordered data.
In this paper we introduce a new type-II progressive censoring scheme for two samples. It is observed that the proposed censoring scheme is analytically more tractable than the existing joint progressive type-II censoring scheme proposed by Rasouli and Balakrishnan. The maximum likelihood estimators of the unknown parameters are obtained and their exact distributions are derived. Based on the exact distributions of the maximum likelihood estimators, exact confidence intervals are also constructed. For comparison purposes, bootstrap confidence intervals are also used. One data analysis has been performed for illustrative purposes. Finally, we propose some open problems.
Censored data make survival analysis more complicated because exact event times are not observed. Statistical methodology developed to account for censored observations assumes that patients’ withdrawal from a study is independent of the event of interest. However, in practice, some covariates might be associated with both the lifetime and the censoring mechanism, inducing dependent censoring. In this case, standard survival techniques, such as the Kaplan–Meier estimator, give biased results. The inverse probability censoring weighted estimator was developed to correct for bias due to dependent censoring. In this article, we explore the use of inverse probability censoring weighting methodology and describe why it is effective in removing the bias. Since implementing this method is highly time consuming and requires programming and mathematical skills, we propose a user-friendly algorithm in R. Applications to a toy example and to a medical data set illustrate how the algorithm works. A simulation study was carried out to investigate the performance of the inverse probability censoring weighted estimators in situations where dependent censoring is present in the data. In the simulation process, different sample sizes, strengths of the censoring model, and percentages of censored individuals were chosen. Results show that in each scenario inverse probability censoring weighting reduces the bias induced in the traditional Kaplan–Meier approach where dependent censoring is ignored.
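The weighting mechanism behind the inverse probability of censoring weighted (IPCW) estimator can be sketched in a marginal form: each observed event at time T is up-weighted by 1/G(T-), where G is the Kaplan-Meier estimate of the censoring survival function. The article's dependent-censoring setting requires a covariate-based censoring model for the weights; the toy version below, with hypothetical names and data, only illustrates the mechanics:

```python
# Marginal IPCW sketch: Kaplan-Meier for the censoring distribution, then a
# weighted proportion of events. Illustrative only; dependent censoring needs
# covariate-dependent weights (e.g., from a censoring regression model).

def km(times, events):
    """Kaplan-Meier survival estimate; returns a step function S(t)."""
    data = sorted(zip(times, events))
    steps, s, at_risk, i = [], 1.0, len(data), 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)      # events at time t
        n_t = sum(1 for tt, _ in data if tt == t)    # subjects leaving at t
        s *= 1 - d / at_risk
        at_risk -= n_t
        steps.append((t, s))
        i += n_t
    def S(t):
        out = 1.0
        for tt, ss in steps:
            if tt <= t:
                out = ss
        return out
    return S

def ipcw_survival(times, events, t):
    """IPCW estimate of P(T > t); events[i] is 1 for event, 0 for censoring."""
    # Censoring KM: treat censorings as the "events" of the censoring process.
    G = km(times, [1 - e for e in events])
    # Weight each observed event by 1 / G(T-); 1e-9 approximates the left limit.
    cdf = sum(1.0 / G(ti - 1e-9)
              for ti, e in zip(times, events) if e == 1 and ti <= t) / len(times)
    return 1.0 - cdf

# Sanity check: with no censoring, IPCW reduces to the empirical survival curve.
print(ipcw_survival([1.0, 2.0, 3.0, 4.0], [1, 1, 1, 1], 2.5))  # 0.5
```

In the dependent-censoring case targeted by the article, G would be replaced by subject-specific censoring probabilities estimated from covariates, which is what removes the bias of the unweighted Kaplan-Meier estimator.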