Purpose
In studies of effects of time‐varying drug exposures, adequate adjustment for time‐varying covariates is often necessary to properly control for confounding. However, the granularity of the available covariate data may not be sufficiently fine, for example when covariates are measured for participants only when their exposure levels change.
Methods
To illustrate the impact of choices regarding the frequency of measuring time‐varying covariates, we simulated data for a large target trial and for large observational studies, varying in covariate measurement design. Covariates were measured never, on a fixed‐interval basis, or each time the exposure level switched. For the analysis, it was assumed that covariates remain constant in periods of no measurement. Cumulative survival probabilities for continuous exposure and non‐exposure were estimated using inverse probability weighting to adjust for time‐varying confounding, with special emphasis on the difference between 5‐year event risks.
Results
With monthly covariate measurements, estimates based on observational data coincided with trial‐based estimates, with 5‐year risk differences being zero. Without measurement of baseline or post‐baseline covariates, this risk difference was estimated to be 49% based on the available observational data. With measurements on a fixed‐interval basis only, 5‐year risk differences deviated from the null, to 29% for 6‐monthly measurements, and with magnitude increasing up to 35% as the interval length increased. Risk difference estimates diverged from the null to as low as −18% when covariates were measured depending on exposure level switching.
Conclusion
Our simulations highlight the need for careful consideration of time‐varying covariates in designing studies on time‐varying exposures. We caution against implementing designs with long intervals between measurements. The maximum length required will depend on the rates at which treatments and covariates change, with higher rates requiring shorter measurement intervals.
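The weighting approach described in the Methods above can be illustrated in its simplest, single-time-point form. The following is a minimal sketch, not the longitudinal estimator used in the study: it simulates a binary confounder, an exposure that depends on it, and an outcome with a true risk difference of zero, then compares the crude contrast with an inverse-probability-weighted one using stabilized weights.

```python
import numpy as np

def ipw_risk_difference(L, A, Y):
    """Point-treatment IPW risk difference with stabilized weights (illustrative only)."""
    pA = A.mean()
    # P(A=1 | L) estimated by empirical frequencies within strata of the binary covariate
    pA_L = np.where(L == 1, A[L == 1].mean(), A[L == 0].mean())
    # stabilized weight: marginal / conditional probability of the received exposure
    sw = np.where(A == 1, pA / pA_L, (1 - pA) / (1 - pA_L))
    risk1 = np.average(Y[A == 1], weights=sw[A == 1])
    risk0 = np.average(Y[A == 0], weights=sw[A == 0])
    return risk1 - risk0

rng = np.random.default_rng(0)
n = 200_000
L = rng.binomial(1, 0.5, n)             # confounder
A = rng.binomial(1, 0.2 + 0.6 * L)      # exposure depends on L
Y = rng.binomial(1, 0.1 + 0.3 * L)      # outcome depends on L only: true RD = 0

crude_rd = Y[A == 1].mean() - Y[A == 0].mean()   # biased away from 0 by confounding
ipw_rd = ipw_risk_difference(L, A, Y)            # approximately 0
```

In the time-varying setting of the abstract, the same idea is applied per time point and the weights are multiplied over follow-up, which is exactly where sparse covariate measurement bites: the conditional exposure probabilities can no longer be modeled correctly.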
Article full texts are often inaccessible via the standard search engines of biomedical literature, such as PubMed and Embase, which are commonly used for systematic reviews. Excluding the full-text bodies from a literature search may result in a small or selective subset of articles being included in the review because of the limited information that is available in only title, abstract, and keywords. This article describes a comparison of search strategies based on a systematic literature review of all articles published in 5 top-ranked epidemiology journals between 2000 and 2017.
Based on a text-mining approach, we studied how nine different methodological topics were mentioned across text fields (title, abstract, keywords, and text body). The following methodological topics were studied: propensity score methods, inverse probability weighting, marginal structural modeling, multiple imputation, Kaplan-Meier estimation, number needed to treat, measurement error, randomized controlled trial, and latent class analysis.
In total, 31,641 Hypertext Markup Language (HTML) files were downloaded from the journals’ websites. For all methodological topics and journals, at most 50% of articles with a mention of a topic in the text body also mentioned the topic in the title, abstract, or keywords. For several topics, reporting in the title, abstract, or keywords gradually decreased over calendar time.
Literature searches based on title, abstract, and keywords alone may not be sufficiently sensitive for studies of epidemiological research practice. This study also illustrates the potential value of full-text literature searches, provided full-text bodies are accessible for searching.
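The field-wise comparison underlying this study can be sketched with a few lines of text matching. This is a hypothetical miniature, with an invented article record and a single topic pattern, not the authors' pipeline: it checks in which fields a topic is mentioned, flagging articles a title/abstract/keyword search would miss.

```python
import re

FIELDS = ("title", "abstract", "keywords", "body")

def mentions(article, pattern):
    """Return the set of text fields in which the topic pattern occurs (case-insensitive)."""
    rx = re.compile(pattern, re.IGNORECASE)
    return {f for f in FIELDS if rx.search(article.get(f, ""))}

# hypothetical article record: topic appears only in the full-text body
article = {
    "title": "A cohort study of X",
    "abstract": "We estimated effects of X on Y.",
    "keywords": "cohort; epidemiology",
    "body": "Confounding was addressed using propensity score matching.",
}

hit_fields = mentions(article, r"propensity score")
missed_by_tiab_search = hit_fields == {"body"}   # found in body, invisible to title/abstract/keyword search
```

Aggregating this per-article flag over a corpus gives the kind of proportion reported above (articles mentioning a topic in the body that also mention it in title, abstract, or keywords).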
Accurate prediction of response to neoadjuvant chemotherapy (NAC) can help tailor treatment to individual patients' needs. Little is known about the combination of liquid biopsies and computer-extracted features from multiparametric magnetic resonance imaging (MRI) for the prediction of NAC response in breast cancer. Here, we report on a prospective study with the aim to explore the predictive potential of this combination in adjunct to standard clinical and pathological information before, during and after NAC. The study was performed in four Dutch hospitals. Patients without metastases treated with NAC underwent 3 T multiparametric MRI scans before, during and after NAC. Liquid biopsies were obtained before every chemotherapy cycle and before surgery. Prediction models were developed using penalized linear regression to forecast residual cancer burden after NAC and evaluated for pathologic complete response (pCR) using leave-one-out cross-validation (LOOCV). Sixty-one patients were included. Twenty-three patients (38%) achieved pCR. Most prediction models yielded the highest estimated LOOCV area under the curve (AUC) at the post-treatment timepoint. A clinical-only model including tumor grade, nodal status and receptor subtype yielded an estimated LOOCV AUC for pCR of 0.76, which increased to 0.82 by incorporating post-treatment radiological MRI assessment (i.e., the "clinical-radiological" model). The estimated LOOCV AUC was 0.84 after incorporation of computer-extracted MRI features, and 0.85 when liquid biopsy information was added instead of the radiological MRI assessment. Adding liquid biopsy information to the clinical-radiological model resulted in an estimated LOOCV AUC of 0.86. In conclusion, inclusion of liquid biopsy-derived markers in clinical-radiological prediction models may have potential to improve prediction of pCR after NAC in breast cancer.
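The evaluation scheme used here, penalized linear regression scored by leave-one-out cross-validation, can be sketched as follows. This is a generic toy on synthetic features (stand-ins for the clinical, MRI, and liquid-biopsy variables), not the study's model: each subject is predicted by a ridge-penalized model fit on all remaining subjects, and the held-out predictions are summarized as an AUC.

```python
import numpy as np

def loocv_predictions(X, y, lam=1.0):
    """Leave-one-out predictions from ridge (L2-penalized) linear regression."""
    n, p = X.shape
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        Xt, yt = X[keep], y[keep]
        # ridge solution: (X'X + lam*I)^{-1} X'y on the n-1 training subjects
        beta = np.linalg.solve(Xt.T @ Xt + lam * np.eye(p), Xt.T @ yt)
        preds[i] = X[i] @ beta
    return preds

def auc(y, scores):
    """Empirical AUC: probability that a random case outranks a random control."""
    pos, neg = scores[y == 1], scores[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

rng = np.random.default_rng(1)
n, p = 60, 5                                  # sample size comparable to the study
X = rng.normal(size=(n, p))                   # synthetic predictors
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

loocv_auc = auc(y, loocv_predictions(X, y))
```

With only 61 patients, LOOCV makes near-maximal use of the data for training while still scoring every subject out-of-sample, which is why it is the natural choice in this setting.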
Epidemiologic studies often suffer from incomplete data, measurement error (or misclassification), and confounding. Each of these can cause bias and imprecision in estimates of exposure–outcome relations. We describe and compare statistical approaches that aim to control all three sources of bias simultaneously.
We illustrate four statistical approaches that address all three sources of bias, namely, multiple imputation for missing data and measurement error, multiple imputation combined with regression calibration, full information maximum likelihood within a structural equation modeling framework, and a Bayesian model. In a simulation study, we assess the performance of the four approaches compared with more commonly used approaches that do not account for measurement error, missing values, or confounding.
The results demonstrate that the four approaches consistently outperform the alternative approaches on all performance metrics (bias, mean squared error, and confidence interval coverage). Even in simulated data of 100 subjects, these approaches perform well.
There can be a large benefit of addressing measurement error, missing values, and confounding to improve the estimation of exposure–outcome relations, even when the available sample size is relatively small.
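The regression-calibration component mentioned above can be illustrated in its simplest form. This is a stylized sketch, not one of the four combined approaches from the study: with two error-prone replicate measurements of a continuous exposure, the naive slope is attenuated by the reliability ratio, which the replicates let us estimate and divide out.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
X = rng.normal(size=n)                  # true exposure (unobserved)
W1 = X + rng.normal(scale=1.0, size=n)  # replicate measurement 1
W2 = X + rng.normal(scale=1.0, size=n)  # replicate measurement 2
Y = 0.5 * X + rng.normal(size=n)        # outcome; true slope = 0.5

def slope(x, y):
    """Simple linear regression slope of y on x."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

naive = slope(W1, Y)                              # attenuated toward 0 (about 0.25 here)
# reliability ratio var(X)/var(W), estimable because cov(W1, W2) = var(X)
lam = np.cov(W1, W2)[0, 1] / np.var(W1, ddof=1)
calibrated = naive / lam                          # regression-calibration correction, about 0.5
```

The combined approaches in the paper embed this kind of correction inside procedures that also handle missing values and confounders; the toy above isolates the measurement-error step only.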
Case-control designs are an important yet commonly misunderstood tool in the epidemiologist's arsenal for causal inference. We reconsider classical concepts, assumptions and principles and explore when the results of case-control studies can be endowed with a causal interpretation.
We establish how, and under which conditions, various causal estimands relating to intention-to-treat or per-protocol effects can be identified based on the data that are collected under popular sampling schemes (case-base, survivor, and risk-set sampling, with or without matching). We present a concise summary of our identification results that link the estimands to the (distribution of the) available data and articulate under which conditions these links hold.
The modern epidemiologist's arsenal for causal inference is well-suited to make transparent for case-control designs what assumptions are necessary or sufficient to endow the respective study results with a causal interpretation and, in turn, help resolve or prevent misunderstanding. Our approach may inform future research on different estimands, other variations of the case-control design or settings with additional complexities.
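One of the sampling schemes discussed above, case-base sampling, can be demonstrated with a small simulation. This is an illustrative toy with invented parameters, not an identification result from the paper: controls are drawn from the full study base (cases included), and the exposure odds ratio then estimates the risk ratio without any rare-disease assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500_000
E = rng.binomial(1, 0.5, n)                  # binary exposure
D = rng.binomial(1, 0.01 * (1 + E))          # disease risk doubled by exposure: true RR = 2

cases = np.flatnonzero(D == 1)
# case-base sampling: controls drawn from the entire cohort at baseline, cases eligible
base = rng.choice(n, size=len(cases), replace=False)

def odds(e):
    return e.mean() / (1 - e.mean())

or_hat = odds(E[cases]) / odds(E[base])      # exposure odds ratio; targets the risk ratio (2)
```

Under survivor sampling the analogous odds ratio would instead target the odds ratio of disease, and under risk-set sampling the rate ratio, which is precisely why the estimand must be matched to the sampling scheme as the abstract emphasizes.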
Results of simulation studies evaluating the performance of statistical methods can have a major impact on the way empirical research is implemented. However, so far there is limited evidence of the replicability of simulation studies. Eight highly cited statistical simulation studies were selected, and their replicability was assessed by teams of replicators with formal training in quantitative methodology. The teams used information in the original publications to write simulation code with the aim of replicating the results. The primary outcome was to determine the feasibility of replicability based on reported information in the original publications and supplementary materials. Replicability varied greatly: some original studies provided detailed information leading to almost perfect replication of results, whereas other studies did not provide enough information to implement any of the reported simulations. Factors facilitating replication included availability of code, detailed reporting or visualization of data-generating procedures and methods, and replicator expertise. Replicability of statistical simulation studies was mainly impeded by lack of information and sustainability of information sources. We encourage researchers publishing simulation studies to transparently report all relevant implementation details either in the research paper itself or in easily accessible supplementary material and to make their simulation code publicly available using permanent links.
Negative controls: Concepts and caveats
Penning de Vries, Bas BL; Groenwold, Rolf HH
Statistical Methods in Medical Research, 08/2023, Volume 32, Issue 8
Journal article. Peer-reviewed. Open access.
Unmeasured confounding is a well-known obstacle in causal inference. In recent years, negative controls have received increasing attention as an important tool to address concerns about this problem. The literature on the topic has expanded rapidly and several authors have advocated the more routine use of negative controls in epidemiological practice. In this article, we review concepts and methodologies based on negative controls for detection and correction of unmeasured confounding bias. We argue that negative controls may lack both specificity and sensitivity to detect unmeasured confounding and that proving the null hypothesis of a null negative control association is impossible. We focus our discussion on the control outcome calibration approach, the difference-in-difference approach, and the double-negative control approach as methods for confounding correction. For each of these methods, we highlight their assumptions and illustrate the potential impact of violations thereof. Given the potentially large impact of assumption violations, it may sometimes be desirable to replace strong conditions for exact identification with weaker, easily verifiable conditions, even when these imply at most partial identification of unmeasured confounding. Future research in this area may broaden the applicability of negative controls and in turn make them better suited for routine use in epidemiological practice. At present, however, the applicability of negative controls should be carefully judged on a case-by-case basis.
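The difference-in-difference style of correction discussed above can be sketched with a toy simulation. All parameters here are invented for illustration: an unmeasured confounder drives both the exposure and the outcomes, the negative control outcome is unaffected by the exposure by construction, and, under the strong assumption of equal additive confounding of the outcome and the negative control, subtracting the negative-control association recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
U = rng.normal(size=n)                        # unmeasured confounder
A = rng.binomial(1, 1 / (1 + np.exp(-U)))     # exposure depends on U
Y = 1.0 * A + U + rng.normal(size=n)          # outcome; true exposure effect = 1.0
N = U + rng.normal(size=n)                    # negative control outcome: A has no effect

crude = Y[A == 1].mean() - Y[A == 0].mean()   # biased upward by U
nc_assoc = N[A == 1].mean() - N[A == 0].mean()  # pure confounding signal
corrected = crude - nc_assoc                  # difference-in-difference correction, about 1.0
```

The correction hinges on the confounding acting with the same magnitude, on the same scale, on both outcomes; when that fails, as the article illustrates, the "corrected" estimate can be more biased than the crude one.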
The use of structural equation models for causal inference from panel data is critiqued in the causal inference literature for unnecessarily relying on a large number of parametric assumptions, and alternative methods originating from the potential outcomes framework have been recommended, such as inverse probability weighting (IPW) estimation of marginal structural models (MSMs). To better understand this criticism, we describe three phases of causal research. First, we explain (differences in) the assumptions that are made throughout these phases for structural equation modeling (SEM) and IPW-MSM approaches using an empirical example. Second, using simulations, we compare the finite sample performance of SEM and IPW-MSM for the estimation of time-varying exposure effects on an end-of-study outcome under violations of parametric assumptions. Although increased reliance on parametric assumptions does not always translate to increased bias (even under model misspecification), researchers are still well-advised to acquaint themselves with causal methods from the potential outcomes framework.
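The IPW-MSM estimator for a time-varying exposure can be sketched for two time points. This is a minimal toy with invented parameters, not the simulation design of the paper: weights are products over time of marginal-over-conditional exposure probabilities (estimated here nonparametrically within covariate strata), and the weighted contrast of "always exposed" versus "never exposed" removes the time-varying confounding that biases the crude contrast.

```python
import numpy as np

def p_given(a, *strata):
    """Empirical P(a=1 | joint binary strata), evaluated per observation."""
    key = sum(s * (2 ** i) for i, s in enumerate(strata))
    out = np.empty(len(a), dtype=float)
    for k in np.unique(key):
        m = key == k
        out[m] = a[m].mean()
    return out

rng = np.random.default_rng(5)
n = 400_000
L1 = rng.binomial(1, 0.5, n)
A1 = rng.binomial(1, 0.3 + 0.4 * L1)             # exposure at t=1 depends on L1
L2 = rng.binomial(1, 0.2 + 0.5 * L1)             # covariate at t=2
A2 = rng.binomial(1, 0.3 + 0.4 * L2)             # exposure at t=2 depends on L2
Y = rng.binomial(1, 0.1 + 0.2 * L1 + 0.2 * L2)   # outcome: no effect of A1 or A2

# stabilized weights: product over time of marginal / conditional exposure probabilities
num = (np.where(A1 == 1, A1.mean(), 1 - A1.mean())
       * np.where(A2 == 1, p_given(A2, A1), 1 - p_given(A2, A1)))
den = (np.where(A1 == 1, p_given(A1, L1), 1 - p_given(A1, L1))
       * np.where(A2 == 1, p_given(A2, A1, L2), 1 - p_given(A2, A1, L2)))
sw = num / den

always = (A1 == 1) & (A2 == 1)
never = (A1 == 0) & (A2 == 0)
crude_rd = Y[always].mean() - Y[never].mean()    # biased away from 0
ipw_rd = (np.average(Y[always], weights=sw[always])
          - np.average(Y[never], weights=sw[never]))  # approximately 0
```

An SEM approach would instead model the full system of equations among L1, A1, L2, A2 and Y; the simulations in the paper compare how the two strategies degrade when their respective parametric assumptions are violated.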