Increasingly, the statistical and epidemiologic literature is focusing beyond issues of internal validity and turning its attention to questions of external validity. Here, we discuss some of the challenges of transporting a causal effect from a randomized trial to a specific target population. We present an inverse odds weighting approach that can easily operationalize transportability. We derive these weights in closed form and illustrate their use with a simple numerical example. We discuss how the conditions required for the identification of internally valid causal effects are translated to apply to the identification of externally valid causal effects. Estimating effects in target populations is an important goal, especially for policy or clinical decisions. Researchers and policy-makers should therefore consider use of statistical techniques such as inverse odds of sampling weights, which under careful assumptions can transport effect estimates from study samples to target populations.
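The inverse odds of sampling weights described above can be sketched in a few lines. The following is a minimal numeric illustration, not the article's example: it assumes a single binary effect modifier X, known sampling counts, and stratum-specific effects; all numbers are hypothetical. Each trial participant is weighted by the odds of being outside the trial given X, i.e. P(S=0 | X) / P(S=1 | X), which reweights the trial to the target population's covariate distribution.

```python
# Minimal sketch of inverse odds of sampling weights (IOSW) for
# transporting a trial effect to a target population.  All numbers
# are hypothetical, not from the article.

# One binary covariate X modifies the treatment effect.
# Stratum-specific risk differences observed in the trial:
effect_by_x = {0: 0.10, 1: 0.30}

# Composition of the trial sample and the (non-trial) target population:
trial_n = {0: 800, 1: 200}    # trial over-represents X = 0
target_n = {0: 300, 1: 700}   # target is mostly X = 1

# P(S=1 | X): probability of being in the trial given X.
p_trial = {x: trial_n[x] / (trial_n[x] + target_n[x]) for x in (0, 1)}

# Inverse odds weight for a trial participant with covariate X:
# w(X) = P(S=0 | X) / P(S=1 | X)
w = {x: (1 - p_trial[x]) / p_trial[x] for x in (0, 1)}

# Weighted average of the trial effects = effect in the target population.
num = sum(w[x] * trial_n[x] * effect_by_x[x] for x in (0, 1))
den = sum(w[x] * trial_n[x] for x in (0, 1))
transported = num / den

# Check against direct standardization to the target composition:
direct = sum(target_n[x] * effect_by_x[x] for x in (0, 1)) / sum(target_n.values())
print(round(transported, 3), round(direct, 3))  # the two should agree
```

The check at the end makes the mechanism visible: multiplying each trial stratum by its inverse sampling odds recovers exactly the target population's stratum sizes, so the weighted trial effect equals the effect standardized directly to the target.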
Competing events can preclude the event of interest from occurring in epidemiologic data and can be analyzed by using extensions of survival analysis methods. In this paper, the authors outline 3 regression approaches for estimating 2 key quantities in competing risks analysis: the cause-specific relative hazard (csRH) and the subdistribution relative hazard (sdRH). They compare and contrast the structure of the risk sets and the interpretation of parameters obtained with these methods. They also demonstrate the use of these methods with data from the Women's Interagency HIV Study established in 1993, treating time to initiation of highly active antiretroviral therapy or to clinical disease progression as competing events. In their example, women with an injection drug use history were less likely than those without a history of injection drug use to initiate therapy prior to progression to acquired immunodeficiency syndrome or death by both measures of association (csRH = 0.67, 95% confidence interval: 0.57, 0.80 and sdRH = 0.60, 95% confidence interval: 0.50, 0.71). Moreover, the relative hazards for disease progression prior to treatment were elevated (csRH = 1.71, 95% confidence interval: 1.37, 2.13 and sdRH = 2.01, 95% confidence interval: 1.62, 2.51). Methods for competing risks should be used by epidemiologists, with the choice of method guided by the scientific question.
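The structural difference between the two risk sets mentioned above can be made concrete in a few lines. This is a toy sketch with hypothetical data and our own function names, not the authors' code: in a cause-specific analysis of event type 1, subjects who experience the competing event leave the risk set, while in the subdistribution (Fine-Gray) approach they remain in it.

```python
# Toy illustration (hypothetical data) of how risk sets differ between
# the cause-specific and the subdistribution (Fine-Gray) approaches.
# Each subject: (event_time, cause); cause 0 = censored, 1 = event of
# interest, 2 = competing event.
subjects = [(2, 1), (3, 2), (4, 1), (5, 0), (6, 1)]

def risk_set_size(t, kind):
    """Number of subjects 'at risk' just before time t."""
    n = 0
    for time, cause in subjects:
        if time >= t:
            n += 1          # not yet failed or censored
        elif kind == "subdistribution" and cause == 2:
            n += 1          # competing failures stay in the
                            # subdistribution risk set forever
    return n

# Just before t = 4, subject (3, 2) has already failed from the
# competing cause: it is out of the cause-specific risk set but
# still counted in the subdistribution risk set.
print(risk_set_size(4, "cause-specific"))    # 3
print(risk_set_size(4, "subdistribution"))   # 4
```

This is why the two relative hazards answer different questions, and why the csRH and sdRH in the abstract need not coincide.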
An introduction to g methods
Naimi, Ashley I; Cole, Stephen R; Kennedy, Edward H
International Journal of Epidemiology, 04/2017, Volume 46, Issue 2
Journal Article
Peer reviewed
Open access
Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications.
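A worked example in the spirit of the abstract can be condensed into a short g-computation (g-formula) calculation. The numbers below are hypothetical, not those from the article: a two-time-point treatment (A0, A1) with a time-varying confounder L that is affected by A0, which is exactly the setting where standard regression fails and g methods do not.

```python
# Minimal g-computation sketch for a two-time-point treatment (A0, A1)
# with a time-varying confounder L affected by A0.  All numbers are
# hypothetical, for illustration only.

# P(L = 1 | A0):
p_l1 = {0: 0.8, 1: 0.4}

# E[Y | A0, L, A1]:
mean_y = {
    (0, 0, 0): 90,  (0, 0, 1): 80,
    (0, 1, 0): 110, (0, 1, 1): 100,
    (1, 0, 0): 80,  (1, 0, 1): 70,
    (1, 1, 0): 100, (1, 1, 1): 90,
}

def g_formula(a0, a1):
    """E[Y^{a0,a1}] = sum_l E[Y | a0, l, a1] * P(L = l | a0)."""
    return sum(mean_y[(a0, l, a1)] * (p_l1[a0] if l == 1 else 1 - p_l1[a0])
               for l in (0, 1))

# Average causal effect of always-treat vs never-treat:
effect = g_formula(1, 1) - g_formula(0, 0)
print(effect)
```

The key step is that L is averaged over its distribution given A0 rather than conditioned on, which is what lets the g-formula handle a confounder that is also an intermediate.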
Selection bias remains a subject of controversy. Existing definitions of selection bias are ambiguous. To improve communication and the conduct of epidemiologic research focused on estimating causal effects, we propose to unify the various existing definitions of selection bias in the literature by considering any bias away from the true causal effect in the referent population (the population before the selection process), due to selecting the sample from the referent population, as selection bias. Given this unified definition, selection bias can be further categorized into two broad types: type 1 selection bias owing to restricting to one or more level(s) of a collider (or a descendant of a collider) and type 2 selection bias owing to restricting to one or more level(s) of an effect measure modifier. To aid in explaining these two types, which can co-occur, we start by reviewing the concepts of the target population, the study sample, and the analytic sample. Then, we illustrate both types of selection bias using causal diagrams. In addition, we explore the differences between these two types of selection bias, and describe methods to minimize selection bias. Finally, we use an example of "M-bias" to demonstrate the advantage of classifying selection bias into these two types.
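Type 1 selection bias (restriction on a collider) can be demonstrated with exact arithmetic rather than simulation. The setup below is our own hypothetical illustration: X and Y are independent Bernoulli(0.5) causes of a collider C (here C = 1 whenever X = 1 or Y = 1), and restricting the analysis to C = 1 induces a negative association between them.

```python
# Exact arithmetic illustrating type 1 selection bias: restricting to a
# level of a collider induces an association between its independent
# causes.  Hypothetical setup: X, Y ~ independent Bernoulli(0.5), and
# the collider C = 1 whenever X = 1 or Y = 1.
from itertools import product

joint = {(x, y): 0.25 for x, y in product((0, 1), repeat=2)}

# Marginal covariance of X and Y (zero: they are independent):
e_x = sum(p * x for (x, y), p in joint.items())
e_y = sum(p * y for (x, y), p in joint.items())
e_xy = sum(p * x * y for (x, y), p in joint.items())
cov_marginal = e_xy - e_x * e_y

# Restrict ("select") on C = 1, i.e. drop the (0, 0) cell, renormalize:
sel = {k: p for k, p in joint.items() if k != (0, 0)}
z = sum(sel.values())
sel = {k: p / z for k, p in sel.items()}
e_x_s = sum(p * x for (x, y), p in sel.items())
e_y_s = sum(p * y for (x, y), p in sel.items())
e_xy_s = sum(p * x * y for (x, y), p in sel.items())
cov_selected = e_xy_s - e_x_s * e_y_s

print(cov_marginal)   # 0.0
print(cov_selected)   # approximately -1/9: negative after selection
```

The covariance of -1/9 among the selected arises purely from the selection step, with no causal effect of X on Y anywhere in the setup, which is the defining feature of collider-restriction bias.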
Selection bias due to loss to follow up represents a threat to the internal validity of estimates derived from cohort studies. Over the past 15 years, stratification-based techniques as well as methods such as inverse probability-of-censoring weighted estimation have been more prominently discussed and offered as a means to correct for selection bias. However, unlike correcting for confounding bias using inverse weighting, uptake of inverse probability-of-censoring weighted estimation as well as competing methods has been limited in the applied epidemiologic literature. To motivate greater use of inverse probability-of-censoring weighted estimation and competing methods, we use causal diagrams to describe the sources of selection bias in cohort studies employing a time-to-event framework when the quantity of interest is an absolute measure (e.g., absolute risk, survival function) or relative effect measure (e.g., risk difference, risk ratio). We highlight that whether a given estimate obtained from standard methods is potentially subject to selection bias depends on the causal diagram and the measure. We first broadly describe inverse probability-of-censoring weighted estimation and then give a simple example to demonstrate in detail how inverse probability-of-censoring weighted estimation mitigates selection bias and describe challenges to estimation. We then modify complex, real-world data from the University of North Carolina Center for AIDS Research HIV clinical cohort study and estimate the absolute and relative change in the occurrence of death with and without inverse probability-of-censoring weighted correction using the modified University of North Carolina data. We provide SAS code to aid with implementation of inverse probability-of-censoring weighted techniques.
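The mechanics of inverse probability-of-censoring weighting can be sketched with expected counts. The example below is ours, not the article's SAS code: a binary covariate Z predicts both dropout and death, so the naive complete-case risk is biased downward, while upweighting the uncensored by 1 / P(uncensored | Z) recovers the full-cohort risk.

```python
# Minimal sketch (hypothetical numbers) of inverse probability-of-
# censoring weighting.  One binary covariate Z predicts both dropout
# and death; the naive complete-case risk is biased, the IPCW risk
# is not.

# Expected counts by stratum: n, deaths, and P(remain uncensored | Z).
strata = {
    "Z=0": {"n": 500, "deaths": 50,  "p_uncens": 0.9},
    "Z=1": {"n": 500, "deaths": 200, "p_uncens": 0.5},
}

naive_num = naive_den = ipcw_num = ipcw_den = 0.0
for s in strata.values():
    # For illustration, censoring is random within Z, so the observed
    # deaths among the uncensored are p_uncens * deaths:
    obs_deaths = s["p_uncens"] * s["deaths"]
    obs_n = s["p_uncens"] * s["n"]
    naive_num += obs_deaths
    naive_den += obs_n
    w = 1.0 / s["p_uncens"]        # IPC weight for this stratum
    ipcw_num += w * obs_deaths
    ipcw_den += w * obs_n

true_risk = (50 + 200) / 1000      # risk with no loss to follow-up
print(naive_num / naive_den)       # biased low: Z=1 under-represented
print(ipcw_num / ipcw_den)         # approximately 0.25 = true_risk
```

Because the high-risk Z = 1 stratum is censored more heavily, the complete cases understate the risk; the weights rebuild the original stratum sizes, and the weighted risk matches the true 0.25.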
Epidemiologic studies are frequently susceptible to missing information. Omitting observations with missing variables remains a common strategy in epidemiologic studies, yet this simple approach can often severely bias parameter estimates of interest if the values are not missing completely at random. Even when missingness is completely random, complete-case analysis can reduce the efficiency of estimated parameters, because large amounts of available data are simply tossed out with the incomplete observations. Alternative methods for mitigating the influence of missing information, such as multiple imputation, are becoming an increasingly popular strategy in order to retain all available information, reduce potential bias, and improve efficiency in parameter estimation. In this paper, we describe the theoretical underpinnings of multiple imputation, and we illustrate application of this method as part of a collaborative challenge to assess the performance of various techniques for dealing with missing data (Am J Epidemiol. 2018;187(3):568–575). We detail the steps necessary to perform multiple imputation on a subset of data from the Collaborative Perinatal Project (1959–1974), where the goal is to estimate the odds of spontaneous abortion associated with smoking during pregnancy.
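Once the m imputed data sets have been analyzed, the per-imputation estimates are combined with Rubin's rules. The arithmetic is short enough to show in full; the log-odds-ratio estimates and variances below are hypothetical, purely to illustrate the pooling step, not results from the article.

```python
# Rubin's rules for combining estimates from m imputed data sets.
# The (estimate, within-imputation variance) pairs are hypothetical.
import math

# Results from m = 5 imputations, e.g. log odds ratios and variances:
results = [(0.42, 0.010), (0.38, 0.011), (0.45, 0.009),
           (0.40, 0.010), (0.44, 0.012)]
m = len(results)

q_bar = sum(q for q, _ in results) / m                   # pooled estimate
w_bar = sum(u for _, u in results) / m                   # within-imputation variance
b = sum((q - q_bar) ** 2 for q, _ in results) / (m - 1)  # between-imputation variance
t = w_bar + (1 + 1 / m) * b                              # total variance

se = math.sqrt(t)
print(q_bar, se)  # pooled log odds ratio and its standard error
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations; it is why the pooled standard error exceeds a naive average of the per-imputation standard errors.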
Positivity, or the experimental treatment assignment assumption, requires that there be both exposed and unexposed participants at every combination of the values of the observed confounders in the population under study. Positivity is essential for inference but is often overlooked in practice by epidemiologists. This issue of the Journal includes 2 articles featuring discussions related to positivity. Here the authors define positivity, distinguish between deterministic and random positivity, and discuss the 2 relevant papers in this issue. In addition, the commentators illustrate positivity in simple 2 x 2 tables, as well as detail some ways in which epidemiologists may examine their data for nonpositivity and deal with violations of positivity in practice.
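A basic data check for deterministic nonpositivity is to tabulate exposure within every confounder stratum and flag strata with no exposed or no unexposed subjects. The sketch below uses hypothetical 2 x 2 table margins and stratum labels of our own invention.

```python
# Simple check for deterministic nonpositivity: within each confounder
# stratum, are there both exposed and unexposed subjects?  Counts and
# stratum labels are hypothetical.
counts = {
    # stratum: (n_exposed, n_unexposed)
    "age<50, male":    (120, 130),
    "age<50, female":  (90, 160),
    "age>=50, male":   (0, 210),    # no exposed subjects: violation
    "age>=50, female": (40, 95),
}

violations = [s for s, (n1, n0) in counts.items() if n1 == 0 or n0 == 0]
print(violations)  # ['age>=50, male']
```

Structural zeros like this one signal deterministic nonpositivity (the stratum can never be exposed), whereas small nonzero cells suggest random nonpositivity; the distinction drawn in the commentary matters because the remedies differ.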
Properly planned and conducted randomized clinical trials remain susceptible to a lack of external validity. The authors illustrate a model-based method to standardize observed trial results to a specified target population using a seminal human immunodeficiency virus (HIV) treatment trial, and they provide Monte Carlo simulation evidence supporting the method. The example trial enrolled 1,156 HIV-infected adult men and women in the United States in 1996, randomly assigned 577 to a highly active antiretroviral therapy and 579 to a largely ineffective combination therapy, and followed participants for 52 weeks. The target population was US people infected with HIV in 2006, as estimated by the Centers for Disease Control and Prevention. Results from the trial apply, albeit muted by 12%, to the target population, under the assumption that the authors have measured and correctly modeled the determinants of selection that reflect heterogeneity in the treatment effect. In simulations with a heterogeneous treatment effect, a conventional intent-to-treat estimate was biased with poor confidence limit coverage, but the proposed estimate was largely unbiased with appropriate confidence limit coverage. The proposed method standardizes observed trial results to a specified target population and thereby provides information regarding the generalizability of trial results.
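The standardization logic can be reduced to a stratified sketch. The stratum labels, risks, and covariate distributions below are hypothetical stand-ins for model predictions, not the trial's actual numbers: arm-specific risks estimated in the trial are averaged over the target population's covariate distribution instead of the trial's, and the effect changes accordingly.

```python
# Sketch of model-based standardization of trial results to a target
# population: average arm-specific risks (stratum-specific here, as a
# stand-in for model predictions) over the target covariate
# distribution.  All numbers and labels are hypothetical.

# Risk of the outcome by (arm, stratum) estimated in the trial:
risk = {("haart", "low_cd4"): 0.20, ("haart", "high_cd4"): 0.05,
        ("combo", "low_cd4"): 0.45, ("combo", "high_cd4"): 0.10}

# Covariate distribution in the trial vs the target population:
p_low_cd4 = {"trial": 0.3, "target": 0.6}

def std_risk_diff(pop):
    """Risk difference standardized to population `pop`."""
    p = p_low_cd4[pop]
    def mean_risk(arm):
        return risk[(arm, "low_cd4")] * p + risk[(arm, "high_cd4")] * (1 - p)
    return mean_risk("haart") - mean_risk("combo")

print(std_risk_diff("trial"))   # effect in the trial sample
print(std_risk_diff("target"))  # effect standardized to the target
```

Because the treatment effect differs across strata and the target population has a different stratum mix than the trial, the standardized effect differs from the in-trial effect, which is the phenomenon the abstract's "muted by 12%" result reflects.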
Contrary to the approach of Reitsma et al., our approach does not require the study-specific counts of subjects with and without disease to be large, such that the variances of the estimated logit-transformed Se and Sp for each study can be approximated by their usual large-sample forms. Further, the generalized linear mixed model approach does not require an ad hoc continuity correction when the number of true positives, true negatives, false positives, or false negatives is zero in a study. The estimated correlation between sensitivity and specificity suggests a moderate negative correlation between Se and Sp, with a 95% confidence interval of -0.78, -0.05 obtained by assuming normality of Fisher's z transformation of the correlation and using the delta method in SAS NLMIXED to compute the variance of Fisher's z. Simulation studies were conducted to evaluate the performance of the generalized vs. general linear random effects models for bivariate analysis of Se and Sp.