Primary Submission Category: Difference in Differences, Synthetic Control, Methods for Panel and Longitudinal Data
Sensitivity Analysis for Difference-in-Differences and Related Designs
Authors: Thomas Leavitt
Presenting Author: Thomas Leavitt*
Applied researchers increasingly acknowledge that the parallel trends assumption underlying Difference-in-Differences is unlikely to hold exactly. As a result, researchers now routinely assess the sensitivity of causal conclusions to violations of parallel trends, given observable pre-treatment differences in trends. In this paper, I propose a new sensitivity analysis for the broader class of controlled pre-post designs, which nests Difference-in-Differences as a special case. To do so, I first derive a general identification framework that unifies controlled pre-post designs. In this framework, one uses models to predict untreated outcomes and then corrects the treated group's predictions using the comparison group's observable prediction errors. The point identification assumption analogous to parallel trends is that treated and comparison groups would have equal prediction errors (in expectation) under no treatment. Within this framework, I then formally ground a sensitivity analysis in the logic of multiple robustness, wherein the sensitivity (or, conversely, robustness) of a causal conclusion depends on the number of independent conditions under which it would hold. I argue that this sensitivity analysis improves upon existing alternatives: using several real-world applications, I show that existing analyses indicate low sensitivity when observable data suggest it ought to be high, and high sensitivity when the data suggest it ought to be low.
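To make the identification framework concrete, below is a minimal sketch on simulated data, not the paper's implementation; the simulated data-generating process, the variable names, and the `pre_post_estimate` helper are illustrative assumptions. It shows the correction step (adjusting the treated group's model prediction by the comparison group's observed prediction error) and the special case in which predicting untreated outcomes with the pre-period mean reduces the estimator to Difference-in-Differences, so that the equal-prediction-errors assumption coincides with parallel trends.

```python
import numpy as np

# Hypothetical illustration of a controlled pre-post design; the data and
# names here are assumptions for exposition, not taken from the paper.
rng = np.random.default_rng(0)

# Simulated panel: pre- and post-period outcomes for treated (d = 1) and
# comparison (d = 0) units, with a common trend and a treatment effect of 2.
n = 500
d = rng.integers(0, 2, n)                  # treatment indicator
y_pre = rng.normal(loc=1 + d, size=n)      # groups differ in levels
trend, tau = 0.5, 2.0
y_post = y_pre + trend + tau * d + rng.normal(scale=0.5, size=n)

def pre_post_estimate(y_pre, y_post, d, predict):
    """Controlled pre-post estimator: predict each group's untreated
    post-period mean from pre-period data, then correct the treated group's
    prediction by the comparison group's observed prediction error."""
    pred_treated = predict(y_pre[d == 1])
    pred_comparison = predict(y_pre[d == 0])
    error_comparison = y_post[d == 0].mean() - pred_comparison
    counterfactual = pred_treated + error_comparison
    return y_post[d == 1].mean() - counterfactual

# Special case: predicting untreated outcomes with the pre-period mean makes
# the equal-prediction-errors assumption exactly parallel trends, and the
# estimator reduces to Difference-in-Differences.
did = pre_post_estimate(y_pre, y_post, d, predict=lambda y: y.mean())
print(f"DiD estimate of the treatment effect: {did:.2f}")  # close to 2.0
```

Swapping in a different `predict` function (e.g., a fitted trend model) yields other members of the controlled pre-post class, with the identification assumption changing accordingly.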