The standard estimator for the cause-specific cumulative incidence function in a competing risks setting with left-truncated and/or right-censored data can be written in two alternative forms. One is a weighted empirical cumulative distribution function and the other a product-limit estimator. This equivalence suggests an alternative view of the analysis of time-to-event data with left truncation and right censoring: individuals who are still at risk or have experienced an earlier competing event receive weights from the censoring and truncation mechanisms. As a consequence, inference on the cumulative scale can be performed using weighted versions of standard procedures. This holds for estimation of the cause-specific cumulative incidence function as well as for estimation of the regression parameters in the Fine and Gray proportional subdistribution hazards model. We show that, with the appropriate filtration, a martingale property holds that allows deriving asymptotic results for the proportional subdistribution hazards model in the same way as for the standard Cox proportional hazards model. Estimation of the cause-specific cumulative incidence function and regression on the subdistribution hazard can be performed using standard software for survival analysis if the software allows for inclusion of time-dependent weights. We show the implementation in the R statistical package. The proportional subdistribution hazards model is used to investigate the effect of calendar period as a deterministic external time-varying covariate, which can be seen as a special case of left truncation, on AIDS-related and non-AIDS-related cumulative mortality.
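The equivalence of the two forms can be checked numerically. The sketch below is a simplified illustration with right censoring only (not the paper's R implementation): it computes the product-limit (Aalen-Johansen) estimator of the cumulative incidence for the event of interest and the weighted empirical CDF in which each observed event is weighted by the inverse of the censoring-survival estimate. With distinct observed times the two curves coincide exactly. The simulated distributions and variable names are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# latent times for the event of interest, the competing event, and censoring
# (independent exponentials; rates are arbitrary illustration choices)
t1 = rng.exponential(1.0, n)
t2 = rng.exponential(2.0, n)
c = rng.exponential(3.0, n)
time = np.minimum.reduce([t1, t2, c])
status = np.where(time == t1, 1, np.where(time == t2, 2, 0))

order = np.argsort(time)
time, status = time[order], status[order]
at_risk = np.arange(n, 0, -1)  # distinct times: n, n-1, ..., 1 at risk

# overall Kaplan-Meier, left-continuous S(t-): any event removes a subject
drop = (status > 0).astype(float)
S_minus = np.concatenate([[1.0], np.cumprod(1 - drop / at_risk)])[:-1]

# censoring-survival Kaplan-Meier G(t-): censorings are the "events"
cens = (status == 0).astype(float)
G_minus = np.concatenate([[1.0], np.cumprod(1 - cens / at_risk)])[:-1]

# product-limit (Aalen-Johansen) cumulative incidence for cause 1
aj = np.cumsum(S_minus * (status == 1) / at_risk)

# weighted ECDF: each cause-1 event carries weight 1 / G(T_i-)
ecdf = np.cumsum((status == 1) / G_minus) / n

print(aj[-1], ecdf[-1])  # the two estimators agree at every event time
```

The agreement rests on the identity S(t-)G(t-) = n(t)/n, which holds exactly for the Kaplan-Meier estimators when all observed times are distinct.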
In studies of all-cause mortality, the fundamental epidemiological concepts of rate and risk are connected through a well-defined one-to-one relation. An important consequence of this relation is that regression models defined through the hazard (the rate), such as the proportional hazards model, immediately dictate how the covariates relate to the survival function (the risk).
This introductory paper reviews the concepts of rate and risk and their one-to-one relation in all-cause mortality studies and introduces the analogous concepts of rate and risk in the context of competing risks: the cause-specific hazard and the cause-specific cumulative incidence function.
The key feature of competing risks is that the one-to-one correspondence between cause-specific hazard and cumulative incidence, between rate and risk, is lost. This fact has two important implications. First, the naïve Kaplan-Meier estimator, which treats the competing events as censored observations, is biased. Second, the way in which covariates are associated with the cause-specific hazards may not coincide with the way these covariates are associated with the cumulative incidence. An example with relapse and non-relapse mortality as competing risks in a stem cell transplantation study is used for illustration.
The two implications of the loss of one-to-one correspondence between cause-specific hazard and cumulative incidence should be kept in mind when deciding on how to make inference in a competing risks situation.
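The bias of the naïve Kaplan-Meier can be demonstrated with a small simulation. This sketch uses simulated data (not the stem cell transplantation study) and compares 1 minus the naïve Kaplan-Meier for cause 1, which censors competing events, with the Aalen-Johansen cumulative incidence; the naïve estimate systematically overshoots.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
t1 = rng.exponential(1.0, n)  # event of interest (e.g. relapse); assumed rate
t2 = rng.exponential(1.0, n)  # competing event (e.g. non-relapse mortality)
c = rng.exponential(2.0, n)   # censoring
time = np.minimum.reduce([t1, t2, c])
status = np.where(time == t1, 1, np.where(time == t2, 2, 0))

status = status[np.argsort(time)]
at_risk = np.arange(n, 0, -1)

# naive 1 - Kaplan-Meier: competing events treated as censored observations
naive = 1 - np.cumprod(1 - (status == 1) / at_risk)

# Aalen-Johansen cumulative incidence for cause 1
S_minus = np.concatenate([[1.0], np.cumprod(1 - (status > 0) / at_risk)])[:-1]
cif = np.cumsum(S_minus * (status == 1) / at_risk)

print(naive[-1], cif[-1])  # naive estimate exceeds the cumulative incidence
```

The overestimation arises because the naïve estimator pretends subjects who experienced the competing event could still experience the event of interest later.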
Quarantine length for individuals who have been at risk for infection with SARS‐CoV‐2 has been based on estimates of the incubation time distribution. The time of infection is often not known exactly, yielding data with an interval-censored time origin. We give a detailed account of the data structure, likelihood formulation and assumptions usually made in the literature: (i) the risk of infection is assumed constant on the exposure window and (ii) the incubation time follows a specific parametric distribution. The impact of these assumptions remains unclear, especially for the right tail of the distribution which informs quarantine policy. We quantified bias in percentiles by means of simulation studies that mimic reality as closely as possible. If assumption (i) is not correct, then median and upper percentiles are affected similarly, whereas misspecification of the parametric approach (ii) mainly affects upper percentiles. The latter may yield considerable bias. We suggest a semiparametric method that provides more robust estimates without the need for a parametric choice. Additionally, we used a simulation study to evaluate a method that has been suggested if all infection times are left censored. It assumes that the width of the interval from infection to latest possible exposure follows a uniform distribution. This assumption gave biased results in the exponential phase of an outbreak. Our application to open source data suggests that focus should be on the level of information in the observations, as expressed by the width of exposure windows, rather than the number of observations.
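The mechanism behind the exponential-phase bias can be made concrete with a short simulation. During exponential growth at rate r, infection risk within an exposure window is proportional to exp(rt) rather than uniform, so infection times are tilted toward the end of the window. The sketch below samples from that tilted density by inverse transform; the growth rate r and window width w are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.2   # epidemic growth rate per day (assumed for illustration)
w = 10.0  # exposure-window width in days (assumed for illustration)

# Infection density on [0, w] proportional to exp(r*t); its CDF is
# F(t) = (exp(r*t) - 1) / (exp(r*w) - 1), inverted below.
u = rng.random(100_000)
t_inf = np.log(1 + u * (np.exp(r * w) - 1)) / r

print(t_inf.mean(), w / 2)  # the mean lands well past the window midpoint
```

Assuming a uniform distribution over the window therefore shifts inferred infection times too early, which in turn biases the estimated incubation times upward during the growth phase.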
Estimation of the SARS-CoV-2 incubation time distribution is hampered by incomplete data about infection. We discuss two biases that may result from incorrect handling of such data. We performed simulations and reviewed the literature to investigate the amount of bias in estimated percentiles of the SARS-CoV-2 incubation time distribution. Depending on the rate of differential recall, restricting the analysis to a subset of narrow exposure windows resulted in underestimation of the median and, even more so, of the 95th percentile. Failing to account for left truncation led to an overestimation of multiple days in both the median and the 95th percentile. We examined two overlooked sources of bias concerning exposure information that the researcher engaged in incubation time estimation needs to be aware of.
To evaluate the long-term risk of validated symptomatic cardiac events (CEs) and associated risk factors in childhood cancer survivors (CCSs).
We determined CEs grade 3 or higher: congestive heart failure (CHF), cardiac ischemia, valvular disease, arrhythmia and/or pericarditis (according to Common Terminology Criteria for Adverse Events CTCAE, version 3.0) in a hospital-based cohort of 1,362 5-year CCSs diagnosed between 1966 and 1996. We calculated both marginal and cause-specific cumulative incidence of CEs and cause-specific cumulative incidence of separate events. We analyzed different risk factors in multivariable Cox regression models.
Overall, 50 CEs, including 27 cases of CHF, were observed in 42 survivors (at a median attained age of 27.1 years). The 30-year cause-specific cumulative incidence of CEs was significantly increased after treatment with both anthracyclines and cardiac irradiation (12.6%; 95% CI, 4.3% to 20.3%), after anthracyclines (7.3%; 95% CI, 3.8% to 10.7%), and after cardiac irradiation (4.0%; 95% CI, 0.5% to 7.4%) compared with other treatments. In the proportional hazards analyses, anthracycline (dose), cardiac irradiation (dose), combination of these treatments, and congenital heart disease were significantly associated with developing a CE. We demonstrated an exponential relationship between the cumulative anthracycline dose, cardiac irradiation dose, and risk of CE.
CCSs have a high risk of developing symptomatic CEs at an early age. The most common CE was CHF. Survivors treated with both anthracyclines and radiotherapy have the highest risk; after 30 years, one in eight will develop severe heart disease. The use of potentially cardiotoxic treatments should be reconsidered for high-risk groups, and frequent follow-up for high-risk survivors is needed.
Summary Background The CD4 cell count at which combination antiretroviral therapy should be started is a central, unresolved issue in the care of HIV-1-infected patients. In the absence of randomised trials, we examined this question in prospective cohort studies. Methods We analysed data from 18 cohort studies of patients with HIV. Antiretroviral-naive patients from 15 of these studies were eligible for inclusion if they had started combination antiretroviral therapy (while AIDS-free, with a CD4 cell count less than 550 cells per μL, and with no history of injecting drug use) on or after Jan 1, 1998. We used data from patients followed up in seven of the cohorts in the era before the introduction of combination therapy (1989–95) to estimate distributions of lead times (from the first CD4 cell count measurement in an upper range to the upper threshold of a lower range) and unseen AIDS and death events (occurring before the upper threshold of a lower CD4 cell count range is reached) in the absence of treatment. These estimations were used to impute completed datasets in which lead times and unseen AIDS and death events were added to data for treated patients in deferred therapy groups. We compared the effect of deferred initiation of combination therapy with immediate initiation on rates of AIDS and death, and on death alone, in adjacent CD4 cell count ranges of width 100 cells per μL. Findings Data were obtained for 21 247 patients who were followed up during the era before the introduction of combination therapy and 24 444 patients who were followed up from the start of treatment. Deferring combination therapy until a CD4 cell count of 251–350 cells per μL was associated with higher rates of AIDS and death than starting therapy in the range 351–450 cells per μL (hazard ratio HR 1·28, 95% CI 1·04–1·57). The adverse effect of deferring treatment increased with decreasing CD4 cell count threshold.
Deferred initiation of combination therapy was also associated with higher mortality rates, although effects on mortality were less marked than effects on AIDS and death (HR 1·13, 0·80–1·60, for deferred initiation of treatment at CD4 cell count 251–350 cells per μL compared with initiation at 351–450 cells per μL). Interpretation Our results suggest that 350 cells per μL should be the minimum threshold for initiation of antiretroviral therapy, and should help to guide physicians and patients in deciding when to start treatment. Funding UK Medical Research Council.
Data concerning intensive care unit (ICU)-acquired bacterial colonization and infections are scarce from low and middle-income countries (LMICs). ICU patients in these settings are at high risk of becoming colonized and infected with antimicrobial-resistant organisms (AROs). We conducted a prospective observational study at the Ho Chi Minh City Hospital for Tropical Diseases, Vietnam from November 2014 to January 2016 to assess ICU-acquired colonization and infections, focusing on the five major pathogens in our setting: Staphylococcus aureus (S. aureus), Escherichia coli (E. coli), Klebsiella spp., Pseudomonas spp. and Acinetobacter spp., among adult patients with more than 48 hours of ICU stay. We found that 61.3% (223/364) of ICU patients became colonized with AROs: 44.2% (161/364) with rectal ESBL-producing E. coli and Klebsiella spp.; 30.8% (40/130) with endotracheal carbapenemase-producing Acinetobacter spp.; and 14.3% (52/364) with nasal methicillin-resistant S. aureus. The incidence rate of ICU patients becoming colonized with AROs was 9.8 (223/2,276) per 100 patient days. A significant risk factor for ARO colonization was the Charlson Comorbidity Index score. The proportion of ICU patients with HAIs was 23.4% (85/364), and the incidence rate of ICU patients contracting HAIs was 2.3 (85/3,701) per 100 patient days. Vascular catheterization (central venous, arterial, and hemofiltration catheters) was significantly associated with hospital-acquired bloodstream infection. Of the 77 patients who developed ICU-acquired infections with one of the five specified bacteria, 44 (57.1%) had prior colonization with the same organism. Vietnamese ICU patients have a high colonization rate with AROs and a high risk of subsequent infections. Future research should focus on monitoring colonization and the development of preventive measures that may halt spread of AROs in ICU settings.
Most prognostic models for primary sclerosing cholangitis (PSC) are based on patients referred to tertiary care and may not be applicable for the majority of patients with PSC. The aim of this study was to construct and externally validate a novel, broadly applicable prognostic model for transplant-free survival in PSC, based on a large, predominantly population-based cohort using readily available variables.
The derivation cohort consisted of 692 patients with PSC from the Netherlands, and the validation cohort of 264 patients with PSC from the UK. Clinical and biochemical variables were collected retrospectively. We derived the prognostic index from a multivariable Cox regression model in which predictors were selected and parameters were estimated using the least absolute shrinkage and selection operator. The composite end point of PSC-related death and liver transplantation was used. To quantify the model's predictive value, we calculated the C-statistic as a discrimination index and established its calibration accuracy by comparing predicted curves with Kaplan-Meier estimates.
The final model included the variables: PSC subtype, age at PSC diagnosis, albumin, platelets, aspartate aminotransferase, alkaline phosphatase and bilirubin. The C-statistic was 0.68 (95% CI 0.51 to 0.85). Calibration was satisfactory. The model was robust in the sense that the C-statistic did not change when prediction was based on biochemical variables collected at follow-up.
The Amsterdam-Oxford model for PSC showed adequate performance in estimating PSC-related death and/or liver transplant in a predominantly population-based setting. The transplant-free survival probability can be recalculated when updated biochemical values are available.
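The C-statistic used above as a discrimination index can be illustrated with a minimal implementation of Harrell's concordance for censored survival data. The toy data below are hypothetical and unrelated to the PSC cohorts; this is a sketch of the general computation, not the study's analysis code.

```python
import numpy as np

def harrell_c(time, event, risk_score):
    """Harrell's concordance: among usable pairs, the fraction in which
    the subject with the earlier event has the higher predicted risk.
    A pair (i, j) is usable when i has an observed event and j is still
    event-free at that time (time[j] > time[i])."""
    conc = ties = usable = 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue
        for j in range(n):
            if time[j] > time[i]:
                usable += 1
                if risk_score[i] > risk_score[j]:
                    conc += 1
                elif risk_score[i] == risk_score[j]:
                    ties += 1
    return (conc + 0.5 * ties) / usable

# hypothetical toy data: risks perfectly ordered against event times
t = np.array([1.0, 2.0, 3.0, 4.0])
e = np.array([1, 1, 1, 0])       # last subject censored
s = np.array([4.0, 3.0, 2.0, 1.0])
print(harrell_c(t, e, s))  # → 1.0
```

A value of 0.5 corresponds to no discrimination and 1.0 to perfect discrimination; the 0.68 reported above sits between these extremes.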
Xpert MTB/RIF Ultra (Xpert Ultra) might have higher sensitivity than its predecessor, Xpert MTB/RIF (Xpert), but its role in tuberculous meningitis diagnosis is uncertain. We aimed to compare Xpert Ultra with Xpert for the diagnosis of tuberculous meningitis in HIV-uninfected and HIV-infected adults.
In this prospective, randomised, diagnostic accuracy study, adults (≥16 years) with suspected tuberculous meningitis from a single centre in Vietnam were randomly assigned to cerebrospinal fluid testing by either Xpert Ultra or Xpert at baseline and, if treated for tuberculous meningitis, after 3–4 weeks of treatment. Test performance (sensitivity, specificity, and positive and negative predictive values) was calculated for Xpert Ultra and Xpert and compared against clinical and mycobacterial culture reference standards. Analyses were done for all patients and by HIV status.
Between Oct 16, 2017, and Feb 10, 2019, 205 patients were randomly assigned to Xpert Ultra (n=103) or Xpert (n=102). The sensitivities of Xpert Ultra and Xpert for tuberculous meningitis diagnosis against a reference standard of definite, probable, and possible tuberculous meningitis were 47·2% (95% CI 34·4–60·3; 25 of 53 patients) for Xpert Ultra and 39·6% (27·6–53·1; 21 of 53) for Xpert (p=0·56); specificities were 100·0% (95% CI 92·0–100·0; 44 of 44) and 100·0% (92·6–100·0; 48 of 48), respectively. In HIV-negative patients, the sensitivity of Xpert Ultra was 38·9% (24·8–55·1; 14 of 36) versus 22·9% (12·1–39·0; eight of 35) by Xpert (p=0·23). In HIV co-infected patients, the sensitivities were 64·3% (38·8–83·7; nine of 14) for Xpert Ultra and 76·9% (49·7–91·8; ten of 13) for Xpert (p=0·77). Negative predictive values were 61·1% (49·6–71·5) for Xpert Ultra and 60·0% (49·0–70·0) for Xpert. Against a reference standard of mycobacterial culture, sensitivities were 90·9% (72·2–97·5; 20 of 22 patients) for Xpert Ultra and 81·8% (61·5–92·7; 18 of 22) for Xpert (p=0·66); specificities were 93·9% (85·4–97·6; 62 of 66) and 96·9% (89·5–99·2; 63 of 65), respectively. Six (22%) of 27 patients had a positive test by Xpert Ultra after 4 weeks of treatment versus two (9%) of 22 patients by Xpert.
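The confidence intervals reported above are consistent with Wilson score intervals for binomial proportions (the method is an inference on our part; the sketch below reproduces the reported interval for the Xpert Ultra sensitivity of 25 of 53 patients).

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(25, 53)  # Xpert Ultra vs the clinical reference standard
print(round(25 / 53 * 100, 1), round(lo * 100, 1), round(hi * 100, 1))
# → 47.2 34.4 60.3
```

The same function applied to 63 of 65 gives 89·5–99·2, matching the specificity interval against the culture reference standard.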
Xpert Ultra was not statistically superior to Xpert for the diagnosis of tuberculous meningitis in HIV-uninfected and HIV-infected adults. A negative Xpert Ultra or Xpert test does not rule out tuberculous meningitis. New diagnostic strategies are urgently required.
Wellcome Trust and the Foundation for Innovative New Diagnostics.