Primary Submission Category: Bayesian Causal Inference

Bayesian Causal Models from a Weighting Perspective: Balance, Bias, and Double Robustness

Authors: Jared Murray, Avi Feller

Presenting Author: Jared Murray*

Many popular Bayesian regression models for causal inference produce estimates that are linear in outcomes and can be cast as weighting estimators. Examples include Bayesian Additive Regression Trees (BART), Bayesian Causal Forests (BCF), and Gaussian process regression (including regression with conditionally Gaussian priors). We consider estimating average effects in generic target populations under ignorable selection into treatment.
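As a minimal illustration of the "linear in outcomes" point, consider Bayesian linear regression with a Gaussian prior (the conditionally Gaussian case above). The posterior mean prediction at any target covariate profile is a weighted sum of the observed outcomes, so the implied weights can be read off directly. The data, the ridge-form prior precision `lam`, and the target profile `x0` below are all illustrative assumptions, not from the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

# Bayesian linear regression with a Gaussian prior (ridge form):
# the posterior mean at x0 is x0' (X'X + lam*I)^{-1} X' y, i.e. w' y.
lam = 1.0                              # assumed prior precision / noise ratio
x0 = np.ones(p)                        # hypothetical target covariate profile

# Implied weight on each observed outcome y_i:
#   w_i = x_i' (X'X + lam*I)^{-1} x0
weights = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), x0)

estimate = weights @ y                 # the estimate is linear in y
# Sanity check: same number as the usual posterior-mean formula.
direct = x0 @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

The same weight-extraction idea applies to any posterior mean that is a linear smoother of the outcomes (Gaussian process regression, and BART/BCF conditional on the trees), which is what licenses the weighting-estimator view.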

We show that the model-implied weights have a dual interpretation as regularized minimax balancing weights or, equivalently, estimates of a Riesz representer of the estimand (e.g., inverse propensity weights when estimating the PATE). We derive the bias of the posterior mean when the outcome model is partially correct, describe how to minimize it via model and prior specification, and examine the role of the implied estimate of the Riesz representer under misspecification.
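The Riesz-representer interpretation can be checked numerically in the simplest case named above: for the ATE functional, the representer is the inverse propensity weight, and its defining property is that weighting by it recovers the target contrast in expectation. The simulated design and the test function `m` below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # true propensity score
t = rng.binomial(1, e)

# Riesz representer for the ATE functional: alpha = t/e - (1-t)/(1-e),
# i.e. the inverse propensity weights.
alpha = t / e - (1 - t) / (1 - e)

def m(x, t):                          # an arbitrary test function of (x, t)
    return np.sin(x) + t * x**2

# Representation property: E[alpha * m(X,T)] = E[m(X,1) - m(X,0)]
lhs = np.mean(alpha * m(x, t))
rhs = np.mean(m(x, 1) - m(x, 0))
```

Under misspecification, the model-implied weights only approximate this representer, which is where the bias characterized above enters.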

Finally, we construct doubly robust estimates via Bayesian decision theory using a new loss function that prioritizes bias minimization. Minimizing the posterior expected loss leads to doubly robust augmented weighting estimators without modifying the prior or likelihood. These Bayes estimates belong to the family of “automatically debiased” machine learning estimates of causal effects, with tuning parameters that can be informed by data through Bayesian updating. We conclude by discussing the construction of Frequentist and (quasi-)Bayesian uncertainty intervals.
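The augmented weighting estimator produced by this decision-theoretic construction has the familiar doubly robust (AIPW-style) form: outcome-model predictions plus a weighted residual correction. The sketch below uses simple linear fits as stand-ins for posterior means from BART/BCF and treats the propensity as known; all of these are illustrative assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-x))          # true propensity (assumed known here)
t = rng.binomial(1, e)
y = x + 2 * t + rng.normal(size=n)    # true ATE = 2 in this synthetic design

# Outcome-model fits by arm (stand-ins for Bayesian posterior means):
m1 = np.polyval(np.polyfit(x[t == 1], y[t == 1], 1), x)
m0 = np.polyval(np.polyfit(x[t == 0], y[t == 0], 1), x)

# Augmented (doubly robust) weighting estimator of the ATE:
# model predictions plus inverse-propensity-weighted residuals.
tau_dr = np.mean(
    m1 - m0
    + t * (y - m1) / e
    - (1 - t) * (y - m0) / (1 - e)
)
```

The estimator stays consistent if either the outcome fits or the weights are correct; in the Bayesian construction above, the analogous correction arises from minimizing posterior expected loss rather than from plugging in separately fitted nuisances.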