Objective
To define confounding bias in difference‐in‐differences studies and compare regression‐ and matching‐based estimators designed to correct bias due to observed confounders.
Data Sources
We simulated data from linear models that incorporated different confounding relationships: time‐invariant covariates with a time‐varying effect on the outcome, time‐varying covariates with a constant effect on the outcome, and time‐varying covariates with a time‐varying effect on the outcome. We considered a simple setting that is common in the applied literature: treatment is introduced at a single time point and there is no unobserved treatment effect heterogeneity.
Study Design
We compared the bias and root mean squared error of treatment effect estimates from six model specifications, including simple linear regression models and matching techniques.
Data Collection
Simulation code is provided for replication.
Principal Findings
Confounders in difference‐in‐differences are covariates that change differently over time in the treated and comparison group or have a time‐varying effect on the outcome. When such a confounding variable is measured, appropriately adjusting for this confounder (ie, including the confounder in a regression model that is consistent with the causal model) can provide unbiased estimates with optimal SE. However, when a time‐varying confounder is affected by treatment, recovering an unbiased causal effect using difference‐in‐differences is difficult.
Conclusions
Confounding in difference‐in‐differences is more complicated than in cross‐sectional settings, from which techniques and intuition to address observed confounding cannot be imported wholesale. Instead, analysts should begin by postulating a causal model that relates covariates, both time‐varying and those with time‐varying effects on the outcome, to treatment. This causal model will then guide the specification of an appropriate analytical model (eg, using regression or matching) that can produce unbiased treatment effect estimates. We emphasize the importance of thoughtful incorporation of covariates to address confounding bias in difference‐in‐differences studies.
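The first confounding relationship above (a time‐invariant covariate with a time‐varying effect on the outcome) can be illustrated with a minimal two‐period simulation. The setup and coefficient values below are our own assumptions for illustration, not the paper's: the naive 2×2 difference‐in‐differences contrast absorbs the change in the covariate's effect, while regressing the first‐differenced outcome on treatment and the covariate recovers the true effect.

```python
import random
import statistics

random.seed(0)

n = 5000                  # units per group (assumed)
tau = 0.5                 # true treatment effect (assumed)
beta = {0: 1.0, 1: 2.0}   # time-varying effect of time-invariant covariate X

# Treated units have higher X on average, so X confounds the DiD contrast.
units = []
for d in (0, 1):  # d = 1 for the treated group
    for _ in range(n):
        x = random.gauss(d * 1.0, 1.0)
        y0 = beta[0] * x + random.gauss(0, 1)
        y1 = beta[1] * x + tau * d + random.gauss(0, 1)
        units.append((d, x, y1 - y0))  # store the first-differenced outcome

# Naive 2x2 DiD: difference in mean outcome changes between groups.
chg_t = statistics.mean(dy for d, _, dy in units if d == 1)
chg_c = statistics.mean(dy for d, _, dy in units if d == 0)
naive = chg_t - chg_c  # biased by (beta_1 - beta_0) * (mean X gap)

# Adjusted: regress the change on D and X (normal equations, 2 regressors).
xb = statistics.mean(x for _, x, _ in units)
db = statistics.mean(d for d, _, _ in units)
yb = statistics.mean(dy for _, _, dy in units)
sxx = sum((x - xb) ** 2 for _, x, _ in units)
sdd = sum((d - db) ** 2 for d, _, _ in units)
sxd = sum((x - xb) * (d - db) for d, x, _ in units)
sxy = sum((x - xb) * (dy - yb) for _, x, dy in units)
sdy = sum((d - db) * (dy - yb) for d, _, dy in units)
det = sxx * sdd - sxd ** 2
adjusted = (sxx * sdy - sxd * sxy) / det  # coefficient on D

print(f"naive DiD: {naive:.2f}, adjusted: {adjusted:.2f}, truth: {tau}")
```

With these assumed values the naive estimate is off by roughly the covariate's mean gap times the change in its coefficient, while the adjusted estimate is close to the truth.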
Objective
To demonstrate regression to the mean bias introduced by matching on preperiod variables in difference‐in‐differences studies.
Data Sources
Simulated data.
Study Design
We performed a Monte Carlo simulation to estimate the effect of a placebo intervention on simulated longitudinal data for units in treatment and control groups using unmatched and matched difference‐in‐differences analyses. We varied the preperiod level and trend differences between the treatment and control groups, and the serial correlation of the matching variables. We assessed estimator bias as the mean absolute deviation of estimated program effects from the true value of zero.
Principal Findings
When preperiod outcome level is correlated with treatment assignment, an unmatched analysis is unbiased, but matching units on preperiod outcome levels produces biased estimates. The bias increases with greater preperiod level differences and weaker serial correlation in the outcome. This problem extends to matching on preperiod level of a time‐varying covariate. When treatment assignment is correlated with preperiod trend only, the unmatched analysis is biased, and matching units on preperiod level or trend does not introduce additional bias.
Conclusions
Researchers should be aware of the threat of regression to the mean when constructing matched samples for difference‐in‐differences. We provide guidance on when to incorporate matching in this study design.
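The regression‐to‐the‐mean mechanism described above can be sketched with a small placebo simulation; the group means, serial correlation, and nearest‐neighbor matching rule below are our own assumptions, not the paper's exact design. Matched controls are selected partly on an unusually high preperiod noise draw, which regresses back toward the control mean in the postperiod, biasing the matched estimate away from the true effect of zero.

```python
import bisect
import math
import random
import statistics

random.seed(1)
n, rho = 2000, 0.5  # units per group; serial correlation (assumed)

def draw(mu):
    """Pre/post outcomes around group mean mu with serial correlation rho."""
    e_pre = random.gauss(0, 1)
    e_post = rho * e_pre + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
    return mu + e_pre, mu + e_post  # true treatment effect is zero (placebo)

treated = [draw(1.0) for _ in range(n)]            # higher preperiod level
controls = sorted(draw(0.0) for _ in range(n))     # sorted by pre outcome

# Unmatched DiD: unbiased when groups differ only in preperiod *level*.
unmatched = (statistics.mean(post - pre for pre, post in treated)
             - statistics.mean(post - pre for pre, post in controls))

# Matched DiD: nearest control on preperiod outcome (with replacement).
pres = [pre for pre, _ in controls]

def nearest(y):
    i = bisect.bisect_left(pres, y)
    cands = [j for j in (i - 1, i) if 0 <= j < len(pres)]
    return min(cands, key=lambda j: abs(pres[j] - y))

matched_ctrl = [controls[nearest(pre)] for pre, _ in treated]
matched = (statistics.mean(post - pre for pre, post in treated)
           - statistics.mean(post - pre for pre, post in matched_ctrl))

print(f"unmatched: {unmatched:+.2f}, matched: {matched:+.2f} (truth: 0)")
```

The unmatched estimate stays near zero, while matching on the preperiod level produces a clearly positive placebo "effect"; the bias shrinks as rho approaches 1, consistent with the findings above.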
Insurance transitions (sometimes referred to as "churn") before and after childbirth can adversely affect the continuity and quality of care. Yet little is known about coverage patterns and changes for women giving birth in the United States. Using nationally representative survey data for the period 2005-13, we found high rates of insurance transitions before and after delivery. Half of women who were uninsured nine months before delivery had acquired Medicaid or CHIP coverage by the month of delivery, but 55 percent of women with that coverage at delivery experienced a coverage gap in the ensuing six months. Risk factors associated with insurance loss after delivery include not speaking English at home, being unmarried, having Medicaid or CHIP coverage at delivery, living in the South, and having a family income of 100-185 percent of the poverty level. To minimize the adverse effects of coverage disruptions, states should consider policies that promote the continuity of coverage for childbearing women, particularly those with pregnancy-related Medicaid eligibility.
In the Medicare Shared Savings Program (MSSP), accountable care organizations (ACOs) have financial incentives to lower spending and improve quality. We used quasi-experimental methods to assess the early performance of MSSP ACOs.
Using Medicare claims from 2009 through 2013 and a difference-in-differences design, we compared changes in spending and in performance on quality measures from before the start of ACO contracts to after the start of the contracts between beneficiaries served by the 220 ACOs entering the MSSP in mid-2012 (2012 ACO cohort) or January 2013 (2013 ACO cohort) and those served by non-ACO providers (control group), with adjustment for geographic area and beneficiary characteristics. We analyzed the 2012 and 2013 ACO cohorts separately because entry time could reflect the capacity of an ACO to achieve savings. We compared ACO savings according to organizational structure, baseline spending, and concurrent ACO contracting with commercial insurers.
Adjusted Medicare spending and spending trends were similar in the ACO cohorts and the control group during the precontract period. In 2013, the differential change (i.e., the between-group difference in the change from the precontract period) in total adjusted annual spending was -$144 per beneficiary in the 2012 ACO cohort as compared with the control group (P=0.02), consistent with a 1.4% savings, but only -$3 per beneficiary in the 2013 ACO cohort as compared with the control group (P=0.96). Estimated savings were consistently greater in independent primary care groups than in hospital-integrated groups among 2012 and 2013 MSSP entrants (P=0.005 for interaction). MSSP contracts were associated with improved performance on some quality measures and unchanged performance on others.
The first full year of MSSP contracts was associated with early reductions in Medicare spending among 2012 entrants but not among 2013 entrants. Savings were greater in independent primary care groups than in hospital-integrated groups.
Health care providers who participate as an accountable care organization (ACO) in the voluntary Medicare Shared Savings Program (MSSP) have incentives to lower spending for Medicare patients while achieving high performance on a set of quality measures. Little is known about the extent to which early savings achieved by ACOs in the program have grown and been replicated by ACOs that entered the program in later years. ACOs that are physician groups have stronger incentives to lower spending than hospital-integrated ACOs.
Using fee-for-service Medicare claims from 2009 through 2015, we performed difference-in-differences analyses to compare changes in Medicare spending for patients in ACOs before and after entry into the MSSP with concurrent changes in spending for local patients served by providers not participating in the MSSP (control group). We estimated differential changes (i.e., the between-group difference in the change from the pre-entry period) separately for hospital-integrated ACOs and physician-group ACOs that entered the MSSP in 2012, 2013, or 2014.
MSSP participation was associated with differential spending reductions in physician-group ACOs. These reductions grew with longer participation in the program and were significantly greater than the reductions in hospital-integrated ACOs. By 2015, the mean differential change in per-patient Medicare spending was -$474 (-4.9% of the pre-entry mean, P<0.001) for physician-group ACOs that entered in 2012, -$342 (-3.5% of the pre-entry mean, P<0.001) for those that entered in 2013, and -$156 (-1.6% of the pre-entry mean, P=0.009) for those that entered in 2014. The corresponding differential changes for hospital-integrated ACOs were -$169 (P=0.005), -$18 (P=0.78), and $88 (P=0.14), which were significantly lower than for physician-group ACOs (P<0.001). Spending reductions in physician-group ACOs constituted a net savings to Medicare of $256.4 million in 2015, whereas spending reductions in hospital-integrated ACOs were offset by bonus payments.
After 3 years of the MSSP, participation in shared-savings contracts by physician groups was associated with savings for Medicare that grew over the study period, whereas hospital-integrated ACOs did not produce savings (on average) during the same period. (Funded by the National Institute on Aging.)
Objective
To evaluate whether the expansion of Federally Qualified Health Centers (FQHCs) improved late prenatal care initiation, low birth weight, and preterm birth among Medicaid‐covered or uninsured individuals.
Data Sources and Study Setting
We identified all FQHCs in California using the Health Resources and Services Administration's Uniform Data System from 2000 to 2019. We used data from the U.S. Census American Community Survey to describe area characteristics. We measured outcomes in California birth certificate data from 2007 to 2019.
Study Design
We compared areas that received their first FQHC between 2011 and 2016 to areas that received one later or that never had an FQHC. Specifically, we used a synthetic control approach with staggered adoption to calculate non‐parametric estimates of the average treatment effects on the treated areas. The key outcome variables were the rate of Medicaid or uninsured births with late prenatal care initiation (>3 months' gestation), with low birth weight (<2500 grams), or with preterm birth (<37 weeks' gestation).
Data Collection/Extraction Methods
The analysis was limited to births covered by Medicaid or that were uninsured, as indicated on the birth certificate.
Principal Findings
The 55 areas in California that received their first FQHC in 2011–2016 were more populous; their residents were more likely to be covered by Medicaid, to be low‐income, or to be Hispanic than residents of the 48 areas that did not have an FQHC by the end of the study period. We found no statistically significant impact of the first FQHC on rates of late prenatal care initiation (ATT: −10.4; 95% CI: −38.1, 15.0), low birth weight (ATT: 0.2; 95% CI: −7.1, 5.4), or preterm birth (ATT: −7.0; 95% CI: −15.5, 2.3).
Conclusions
Our results from California suggest that access to primary and prenatal care may not be enough to improve these outcomes. Future work should evaluate the impact of ongoing initiatives to increase access to maternal health care at FQHCs through targeted workforce investments.
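The synthetic control idea underlying the design above can be sketched in a toy single‐treated‐unit example; the paper's staggered‐adoption, non‐parametric ATT estimator is more involved, and every number below is invented for illustration. A weighted average of untreated donor areas is chosen to reproduce the treated area's preperiod outcomes, and the ATT is the mean postperiod gap between the treated and synthetic series.

```python
from itertools import product

# Preperiod (5 periods) and postperiod (2 periods) outcomes for three
# untreated donor areas (invented data).
donors_pre = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 5, 4],
    [5, 4, 3, 2, 1],
]
donors_post = [[6, 7], [3, 5], [0, -1]]

# Treated area: an exact convex combination of the donors, plus a -0.5
# per-period effect after treatment (so the true ATT is -0.5).
true_w = [0.5, 0.3, 0.2]
treated_pre = [sum(w * d[t] for w, d in zip(true_w, donors_pre))
               for t in range(5)]
treated_post = [sum(w * d[t] for w, d in zip(true_w, donors_post)) - 0.5
                for t in range(2)]

def synth(weights, series):
    """Weighted combination of donor series at each time point."""
    return [sum(w * d[t] for w, d in zip(weights, series))
            for t in range(len(series[0]))]

# Grid search over simplex weights minimizing preperiod fit error.
best, best_mse = None, float("inf")
steps = [i * 0.05 for i in range(21)]
for w1, w2 in product(steps, steps):
    if w1 + w2 > 1 + 1e-9:
        continue
    w = [w1, w2, 1 - w1 - w2]
    mse = sum((a - b) ** 2
              for a, b in zip(synth(w, donors_pre), treated_pre))
    if mse < best_mse:
        best, best_mse = w, mse

# ATT: mean postperiod gap between treated and synthetic outcomes.
gaps = [a - b for a, b in zip(treated_post, synth(best, donors_post))]
att = sum(gaps) / len(gaps)
print(f"weights: {[round(x, 2) for x in best]}, ATT: {att:.2f}")
```

Because the treated series is constructed as an exact combination of the donors, the grid search recovers the true weights and the ATT of −0.5; real applications solve a constrained optimization rather than a grid search.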
Objective
To formalize comparative interrupted time series (CITS) using the potential outcomes framework; compare two versions of CITS—a standard linear version and one that adds postperiod group‐by‐time parameters—to two versions of difference‐in‐differences (DID)—a standard version with time fixed effects and one that adds group‐specific pretrends; and reanalyze three previously published papers using these models.
Data Sources
Outcome data for reanalyses come from two counties' jail booking and release data, Medicaid prescription drug rebate data from the Centers for Medicare and Medicaid Services (CMS), and acute hepatitis C incidence from the Centers for Disease Control and Prevention.
Study Design
DID and CITS were compared using potential outcomes, and reanalyses were conducted using the four described pre–post study designs.
Data Collection/Extraction Methods
Data from county jails were provided by sheriffs. Data from CMS are publicly available. Data for the third reanalysis were provided by the authors of the original study.
Principal Findings
Though written differently and preferred by different research communities, the general version of CITS and DID with group‐specific pretrends are the same: they yield the same counterfactuals and identify the same treatment effects. In a reanalysis with evidence of divergent preperiod trends, failing to account for this in standard DID led to an 84% smaller effect estimate than the more flexible models. In a second reanalysis with evidence of nonlinear outcome trends, failing to account for this in linear CITS led to a 28% smaller effect estimate than the more flexible models.
Conclusion
We recommend detailing a causal model for treatment selection and outcome generation and the required counterfactuals before choosing an analytical approach. The more flexible versions of DID and CITS can accommodate features often found in real data, namely, nonlinearities and divergent preperiod outcome trends.
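The equivalence claimed in the findings above can be sketched in our own notation (not the paper's exact parameterization) for group g ∈ {0, 1} at time t, with postperiod indicator P_t:

```latex
% General CITS: group-specific levels and trends, plus postperiod
% level and slope changes that may differ by group.
Y_{gt} = \beta_0 + \beta_1 t + \beta_2 g + \beta_3 (g \times t)
       + P_t \left[ \beta_4 + \beta_5 t + \beta_6 g + \beta_7 (g \times t) \right]

% DID with time fixed effects \lambda_t and a group-specific pretrend;
% \lambda_t absorbs \beta_0 + \beta_1 t + P_t(\beta_4 + \beta_5 t).
Y_{gt} = \lambda_t + \beta_2 g + \beta_3 (g \times t)
       + P_t \left[ \beta_6 g + \beta_7 (g \times t) \right]

% In both, the treated group's untreated counterfactual extrapolates its
% own linear pretrend, so both identify the same effect at postperiod time t:
\tau(t) = \beta_6 + \beta_7 t
```

Under this parameterization, the two specifications construct the same counterfactual and therefore the same treatment effect, which is the sense in which the general CITS and DID with group‐specific pretrends coincide.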
Throughout the COVID-19 pandemic, policymakers have proposed risk metrics, such as the CDC Community Levels, to guide local and state decision-making. However, risk metrics have not reliably ...predicted key outcomes and have often lacked transparency in terms of prioritization of false-positive versus false-negative signals. They have also struggled to maintain relevance over time due to slow and infrequent updates addressing new variants and shifts in vaccine- and infection-induced immunity. We make two contributions to address these weaknesses. We first present a framework to evaluate predictive accuracy based on policy targets related to severe disease and mortality, allowing for explicit preferences toward false-negative versus false-positive signals. This approach allows policymakers to optimize metrics for specific preferences and interventions. Second, we propose a method to update risk thresholds in real time. We show that this adaptive approach to designating areas as "high risk" improves performance over static metrics in predicting 3-wk-ahead mortality and intensive care usage at both state and county levels. We also demonstrate that with our approach, using only new hospital admissions to predict 3-wk-ahead mortality and intensive care usage has performed consistently as well as metrics that also include cases and inpatient bed usage. Our results highlight that a key challenge for COVID-19 risk prediction is the changing relationship between indicators and outcomes of policy interest. Adaptive metrics therefore have a unique advantage in a rapidly evolving pandemic context.
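The framework's core idea, choosing a "high risk" cutoff under explicit false‐negative versus false‐positive preferences, can be sketched as follows. The loss weighting, variable names, and data are our own simplification, not the authors' estimator; adaptivity would come from re‐running this selection on a rolling window of recent data.

```python
def pick_threshold(indicator, bad_outcome, w_fn=2.0, w_fp=1.0):
    """Choose the indicator cutoff minimizing a weighted count of
    false negatives (bad outcome but not flagged) and false positives
    (flagged but no bad outcome). Areas with indicator >= cutoff are
    designated high risk; w_fn > w_fp encodes a preference for
    avoiding missed warnings."""
    best_t, best_loss = None, float("inf")
    for t in sorted(set(indicator)):
        fn = sum(1 for x, y in zip(indicator, bad_outcome) if y and x < t)
        fp = sum(1 for x, y in zip(indicator, bad_outcome) if not y and x >= t)
        loss = w_fn * fn + w_fp * fp
        if loss < best_loss:
            best_t, best_loss = t, loss
    return best_t, best_loss

# Hypothetical data: new hospital admissions per 100,000 now, and whether
# mortality exceeded a policy target three weeks later.
admits = [1, 2, 3, 10, 11, 12]
exceeded = [0, 0, 0, 1, 1, 1]
cutoff, loss = pick_threshold(admits, exceeded)
print(cutoff, loss)
```

In this toy data a cutoff of 10 separates the groups with zero loss; with noisier data the chosen cutoff shifts as the weights w_fn and w_fp change, which is the lever the framework exposes to policymakers.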
There is increasing interest in using price transparency tools to decrease health care spending.
To measure the association between offering a health care price transparency tool and outpatient spending.
Two large employers represented in multiple market areas across the United States offered an online health care price transparency tool to their employees. One introduced it on April 1, 2011, and the other on January 1, 2012. The tool provided users information about what they would pay out of pocket for services from different physicians, hospitals, or other clinical sites. Using a matched difference-in-differences design, outpatient spending among employees offered the tool (n=148,655) was compared with that among employees from other companies not offered the tool (n=295,983) in the year before and after it was introduced.
Availability of a price transparency tool.
Annual outpatient spending, outpatient out-of-pocket spending, and use rates of the tool.
Mean outpatient spending among employees offered the tool was $2021 in the year before the tool was introduced and $2233 in the year after. In comparison, among controls, mean outpatient spending changed from $1985 to $2138. After adjusting for demographic and health characteristics, being offered the tool was associated with a mean $59 (95% CI, $25-$93) increase in outpatient spending. Mean outpatient out-of-pocket spending among those offered the tool was $507 in the year before introduction of the tool and $555 in the year after. Among the comparison group, mean outpatient out-of-pocket spending changed from $490 to $520. Being offered the price transparency tool was associated with a mean $18 (95% CI, $12-$25) increase in out-of-pocket spending after adjusting for relevant factors. In the first 12 months, 10% of employees who were offered the tool used it at least once.
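The reported estimates can be checked against the group means in the text; the raw between‐group contrasts computed from those means coincide with the adjusted point estimates here.

```python
def differential_change(pre_t, post_t, pre_c, post_c):
    """Between-group difference in pre-to-post change
    (the difference-in-differences contrast)."""
    return (post_t - pre_t) - (post_c - pre_c)

# Total outpatient spending: tool group $2021 -> $2233, controls $1985 -> $2138.
total = differential_change(2021, 2233, 1985, 2138)

# Out-of-pocket spending: tool group $507 -> $555, controls $490 -> $520.
oop = differential_change(507, 555, 490, 520)

print(total, oop)  # 59 18
```

Both match the adjusted estimates reported above ($59 and $18), indicating that covariate adjustment moved the point estimates little in this sample.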
Among employees at 2 large companies, offering a price transparency tool was not associated with lower health care spending. The tool was used by only a small percentage of eligible employees.