Primary Submission Category: Machine Learning and Causal Inference

Augmented linear balancing weights as undersmoothed regressions

Authors: Avi Feller, Betsy Ogburn, Oliver Dukes

Presenting Author: David Bruns-Smith*

The augmented balancing weights framework, also known as automatic debiased machine learning (AutoDML), is a powerful recent approach to causal machine learning. In this paper, we show that, for a large class of outcome and weighting models, this approach is equivalent to a form of undersmoothed regression. In particular, when both the outcome and weighting models are linear in some (possibly infinite-dimensional) basis, the resulting estimator collapses to a single regularized linear outcome model whose coefficients interpolate between the original outcome model coefficients and the unpenalized OLS coefficients. When the weighting model is ridge or lasso, the implied regularization path is exactly analogous to the ridge or lasso path, respectively. We then specialize these results to specific choices of outcome and weighting models, showing that an earlier result for OLS is a special case and that new insights emerge for (kernel) ridge regression and the lasso. Specifically, when both the outcome and weighting models are (kernel) ridge regressions, the combined estimator is also a form of ridge regression; when both are lassos, the covariates included in the combined estimator are the union of the covariates included in the two individual models. Finally, we explore the implications for inference and hyperparameter selection in practice.
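
To make the ridge case concrete, the following minimal NumPy sketch checks the collapse numerically under illustrative assumptions: the simulated data, the target functional vector c, and the penalty values lam and delta are all hypothetical choices, not taken from the paper. It forms the augmented estimate (ridge plug-in plus ridge-balancing-weight residual correction) for a linear functional c'beta and verifies that it equals the same functional evaluated at a single set of coefficients that interpolate between the ridge and OLS fits.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
beta_true = rng.normal(size=p)
Y = X @ beta_true + rng.normal(size=n)
c = rng.normal(size=p)            # target linear functional: theta = c @ beta

Sigma = X.T @ X / n
lam, delta = 5.0, 1.0             # outcome and weighting ridge penalties (assumed values)

# Ridge outcome model: beta_ridge = (Sigma + lam I)^{-1} X'Y / n
beta_ridge = np.linalg.solve(Sigma + lam * np.eye(p), X.T @ Y / n)

# Ridge balancing weights (estimated Riesz representer): w_i = c'(Sigma + delta I)^{-1} x_i
w = X @ np.linalg.solve(Sigma + delta * np.eye(p), c)

# Augmented estimate: plug-in term plus weighted average of outcome-model residuals
theta_aug = c @ beta_ridge + np.mean(w * (Y - X @ beta_ridge))

# Equivalent single linear model: coefficients interpolate between ridge and OLS,
# with interpolation matrix A = (Sigma + delta I)^{-1} Sigma
beta_ols = np.linalg.solve(Sigma, X.T @ Y / n)
A = np.linalg.solve(Sigma + delta * np.eye(p), Sigma)
beta_combined = beta_ridge + A @ (beta_ols - beta_ridge)

assert np.isclose(theta_aug, c @ beta_combined)  # same estimate, two representations
```

As delta grows, A shrinks toward zero and beta_combined reverts to the ridge fit; as delta goes to zero, A approaches the identity and beta_combined recovers unpenalized OLS, which is the undersmoothing behavior described in the abstract.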