Primary Submission Category: Machine Learning and Causal Inference

Geometry-Aware Normalizing Wasserstein Flows for Optimal Causal Inference

Authors: Kaiwen Hou

Presenting Author: Kaiwen Hou*

Introduction:
We introduce a novel approach for causal inference that integrates continuous normalizing flows (CNFs) with Wasserstein gradient flows. This method improves upon traditional TMLE by incorporating geometric awareness when navigating the model space, focusing on minimizing the Cramér-Rao bound along the path from the base distribution $p_0$ to the target $p_1$. Our approach combines the versatility of CNFs with the rich geometric structure of the 2-Wasserstein metric, enhancing both the flexibility and accuracy of causal effect estimates.
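To make the CNF component concrete, here is a minimal sketch of the standard continuous-normalizing-flow mechanics the abstract builds on: samples follow an ODE $dz/dt = v(z, t)$, while the log-density evolves by the instantaneous change of variables, $d\log p/dt = -\nabla \cdot v$. The solver, the linear velocity field, and its hand-coded divergence are illustrative assumptions, not the authors' method.

```python
import numpy as np

def cnf_transform(z0, velocity, divergence, t0=0.0, t1=1.0, steps=100):
    """Euler-integrate a continuous normalizing flow.

    dz/dt = velocity(z, t); the change in log-density accumulates as
    d(log p)/dt = -divergence(z, t)  (instantaneous change of variables).
    Returns the transported point and the accumulated log-density change.
    """
    z = np.array(z0, dtype=float)
    delta_logp = 0.0
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        delta_logp -= divergence(z, t) * dt
        z = z + velocity(z, t) * dt
        t += dt
    return z, delta_logp

# toy linear field v(z) = -z (contracts toward the origin);
# its divergence is the constant -dim(z), so the update is exact to check
v = lambda z, t: -z
div = lambda z, t: -float(z.size)
z1, dlp = cnf_transform(np.array([1.0, 2.0]), v, div)
```

With this field the exact solution is $z(1) = z(0)\,e^{-1}$ and the log-density change is $+2$ in two dimensions, which the Euler integration reproduces up to discretization error.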
Theory and Methods:
The heart of our approach is the transformation of simple base distributions into complex targets using CNFs. We propose normalizing Wasserstein flows that optimize CNF parameters for minimal discrepancy between the modeled and target distributions while ensuring invertibility and smoothness. Key to our framework are variance regularization and a generalized loss formulation based on velocity-field alignment, which simplifies computational demands.
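A velocity-field-alignment loss with variance regularization might be sketched as follows. This is an illustrative, flow-matching-style objective under assumptions of our own: `v_model` and `v_target` are hypothetical velocity-field callables, and the particular regularizer (the variance of the squared residuals, weighted by `lam`) is one plausible form, not the paper's exact loss.

```python
import numpy as np

def alignment_loss(v_model, v_target, samples, t, lam=0.1):
    # residual between the model velocity field and the target
    # (e.g. Wasserstein-gradient) velocity field at the same points
    resid = v_model(samples, t) - v_target(samples, t)
    sq = (resid ** 2).sum(axis=1)
    # mean alignment error plus a variance regularizer on the residuals
    return sq.mean() + lam * sq.var()

pts = np.array([[1.0, 0.0], [0.0, 1.0]])
identity = lambda z, t: z
zero = lambda z, t: np.zeros_like(z)
perfect = alignment_loss(identity, identity, pts, t=0.0)  # fields agree
mismatch = alignment_loss(identity, zero, pts, t=0.0)     # unit residuals
```

Matching fields give zero loss; the unit-residual case shows how the mean term dominates when residuals are homogeneous, so the variance term only activates when alignment error is uneven across samples.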
Optimal Causal Inference:
Addressing distribution shifts and biases in estimating population parameters, our method is applied to optimal causal inference, with an emphasis on local semiparametric efficiency. By minimizing the efficiency bound along the trajectory through the model manifold, our approach aims to deliver efficient estimators even in finite-sample scenarios. Preliminary experimental results show reduced mean-squared error and lower variance of the efficient influence function compared to traditional methods such as TMLE and AIPW.
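For context on the AIPW baseline and the efficient-influence-function (EIF) variance used in the comparison, here is a standard AIPW estimator of the average treatment effect. The EIF formula is the textbook one for the ATE; the toy data and nuisance estimates are fabricated for illustration only.

```python
import numpy as np

def aipw_ate(y, a, e_hat, m1_hat, m0_hat):
    """AIPW estimate of the average treatment effect, plus the empirical
    variance of its efficient influence function (EIF)."""
    eif = (m1_hat - m0_hat
           + a / e_hat * (y - m1_hat)
           - (1 - a) / (1 - e_hat) * (y - m0_hat))
    psi = eif.mean()
    return psi, (eif - psi).var()

# toy data: outcome models are exact, so the EIF residual terms vanish
a = np.array([1.0, 0.0, 1.0, 0.0])      # treatment indicator
y = np.where(a == 1, 2.0, 1.0)          # noiseless outcomes
e = np.full(4, 0.5)                     # propensity scores
m1 = np.full(4, 2.0)                    # outcome model under treatment
m0 = np.full(4, 1.0)                    # outcome model under control
psi, eif_var = aipw_ate(y, a, e, m1, m0)
```

With exact nuisance estimates the augmentation terms cancel, the estimate equals the true effect of 1.0, and the EIF variance is zero; in practice that variance is the quantity the abstract reports as being reduced relative to TMLE and AIPW.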