Primary Submission Category: Randomized Studies
Covariance Adjustment with Non-Experimental Units in Randomized Studies
Authors: Joshua Wasserman, Ben Hansen
Presenting Author: Joshua Wasserman*
Researchers estimating intervention effects from randomized experiments may perform covariance adjustment to improve the precision of their estimates. Cohen and Fogarty (2022), Guo and Basse (2021), and Lin (2013) are among the proponents of the procedure, noting that when the intervention is randomly assigned, the OLS estimate of the intervention effect is no less precise than the difference-in-means estimate. Covariance adjustment may be particularly appealing when a vast collection of auxiliary control units exists; the “rebar” method of Sales, Hansen, and Rowan (2018), for example, leverages unmatched control units. However, potential differences between the experimental and auxiliary units lead Rubin and Thomas (2000) and Ho, Imai, King, and Stuart (2007) to recommend fitting the covariance adjustment model only to matched observations. Whether the gain in precision outweighs the possible bias remains an open question, one that depends on the data at hand. In light of this debate, we introduce our methodology and accompanying software for covariance-adjusted intervention effect estimates through an application in the educational setting. Our approach, based on stacked estimating equations, propagates error from the covariance adjustment model to the intervention effect model, where the two need not be fit to the same sample. As a result, it portrays the bias-variance tradeoff in covariance adjustment more accurately than prevailing methods and software.
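The two-sample structure described above (a covariance adjustment model fit to auxiliary units, feeding an intervention effect estimate in the experimental sample) can be sketched in a few lines. This is a minimal illustrative toy, assuming simulated data and plain OLS via numpy in the spirit of rebar-style residualization; it is not the authors' stacked-estimating-equations software, and unlike that approach it does not propagate the first-stage error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a large pool of auxiliary control units with
# covariate x and outcome y, plus a small randomized experiment with
# treatment indicator z and true effect tau.
n_aux, n_exp, tau = 500, 100, 1.0
x_aux = rng.normal(size=n_aux)
y_aux = 2.0 + 1.5 * x_aux + rng.normal(size=n_aux)

x_exp = rng.normal(size=n_exp)
z = rng.binomial(1, 0.5, size=n_exp)
y_exp = 2.0 + 1.5 * x_exp + tau * z + rng.normal(size=n_exp)

# Step 1: fit the covariance adjustment model on the auxiliary sample only.
X_aux = np.column_stack([np.ones(n_aux), x_aux])
beta = np.linalg.lstsq(X_aux, y_aux, rcond=None)[0]

# Step 2: residualize the experimental outcomes against the auxiliary fit,
# then take a difference in means of the residuals as the effect estimate.
X_exp = np.column_stack([np.ones(n_exp), x_exp])
resid = y_exp - X_exp @ beta
tau_hat = resid[z == 1].mean() - resid[z == 0].mean()

# Unadjusted difference in means, for comparison.
dim = y_exp[z == 1].mean() - y_exp[z == 0].mean()
print(f"difference in means: {dim:.2f}, covariance adjusted: {tau_hat:.2f}")
```

A naive standard error computed from `resid` alone would treat `beta` as fixed, understating the uncertainty when the auxiliary and experimental samples differ; accounting for that first-stage variability is precisely what the stacked estimating equations supply.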