Primary Submission Category: Machine Learning and Causal Inference

A Unifying Weighting Perspective on Causal Machine Learning: Kernel Methods, Gaussian Processes, and Bayesian Tree Models

Authors: Jared Murray, Avi Feller

Presenting Author: Jared Murray*

Causal machine learning methods are powerful tools for estimating heterogeneous treatment effects but are often opaque and difficult to assess in practice. However, many methods – including outcome models with or without augmentation – produce estimates that are linear in the observed outcomes, making them weighting estimators. These model-implied weights can be interpreted as estimates of the Riesz representer for the target estimand or, equivalently, as balancing weights.
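To illustrate the linearity that makes these methods weighting estimators, here is a minimal sketch (not the paper's derivation) using kernel ridge regression: the prediction at a point is an inner product of a weight vector with the observed outcomes, so the weights can be extracted directly. All variable names and the RBF kernel choice are illustrative assumptions.

```python
import numpy as np

# Sketch: any estimator linear in the outcomes satisfies f_hat(x) = w(x)' y.
# For kernel ridge regression, f_hat(x) = k(x)' (K + lam I)^{-1} y,
# so the implied weights are w(x) = (K + lam I)^{-1} k(x).

rng = np.random.default_rng(0)
n = 50
X = rng.normal(size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def rbf_kernel(A, B, length_scale=1.0):
    # Gaussian (RBF) kernel matrix between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

lam = 1e-2
K = rbf_kernel(X, X)
x_new = np.array([[0.3]])
k_new = rbf_kernel(X, x_new).ravel()

# Implied weights on the training outcomes: f_hat(x_new) = weights @ y
weights = np.linalg.solve(K + lam * np.eye(n), k_new)
f_hat = weights @ y

# Identical to the usual "fit coefficients, then predict" formula
alpha = np.linalg.solve(K + lam * np.eye(n), y)
f_hat_direct = k_new @ alpha
assert np.isclose(f_hat, f_hat_direct)
```

The same extraction applies to averages of predictions (e.g., plug-in estimates of average effects), whose implied weights are averages of the pointwise weight vectors.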

In this paper we derive the implied weights of kernel ridge regression and many Bayesian nonparametric regression models (via their representation as conditional Gaussian process regressions); examples include BART, Bayesian causal forests, and Bayesian neural networks, as well as generic Gaussian process models. We then present a common weighting framework that connects these models to kernel methods like the R-learner and kernel mean matching. We use this framework to present new guidance for specifying semiparametric outcome models (or, more generally, choosing or learning kernels). In particular, we show that under mild conditions the Robinson regression-on-residuals parameterization of an outcome model produces implied weights that approximately balance broad classes of functions between treated and control groups for the average treatment effect in any target population. We therefore connect the desirable properties of the Robinson transform and its corresponding Neyman orthogonal score to the balancing properties of the implied weights.
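The Robinson regression-on-residuals idea can be sketched in a few lines: residualize both the outcome and the treatment on the covariates, then regress residual on residual. The simulation below (a toy partially linear model with a constant effect, and crude kernel smoothers for the nuisances) is an illustrative assumption, not the paper's estimator or conditions.

```python
import numpy as np

# Sketch of Robinson's transform for the partially linear model
#   Y = tau * A + g(X) + eps,   A = e(X) + nu.
# Subtracting conditional means gives Y - m(X) = tau * (A - e(X)) + eps,
# where m(x) = E[Y | X = x] and e(x) = E[A | X = x].

rng = np.random.default_rng(1)
n = 2000
tau = 2.0
X = rng.uniform(-1, 1, size=n)
e = 1.0 / (1.0 + np.exp(-X))            # propensity e(X)
A = rng.binomial(1, e).astype(float)
g = np.sin(2 * X)                        # baseline outcome surface g(X)
Y = tau * A + g + rng.normal(size=n)

def ksmooth(x0, x, t, h=0.2):
    # Nadaraya-Watson kernel smoother, purely for illustration
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * t).sum(axis=1) / w.sum(axis=1)

m_hat = ksmooth(X, X, Y)                 # estimate of E[Y | X]
e_hat = ksmooth(X, X, A)                 # estimate of E[A | X]

# Regression-on-residuals: slope of (Y - m_hat) on (A - e_hat)
Ry, Ra = Y - m_hat, A - e_hat
tau_hat = (Ra @ Ry) / (Ra @ Ra)
```

Because `tau_hat` is linear in the outcomes, its implied weights are proportional to the treatment residuals `Ra`, which is the form whose balancing properties the paper studies.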