Primary Submission Category: Machine Learning and Causal Inference

Robust and Adaptive Causal Null Hypothesis Tests Under Model Uncertainty

Authors: Junhui Yang, He Bai, Rohit Bhattacharya, Ted Westling

Presenting Author: Junhui Yang*

In observational studies, analysts may posit multiple plausible causal models for a single dataset. Because these models typically rely on untestable assumptions, it is often unknown which (if any) of them are correctly specified. Yang et al. [2023] introduced a method for testing a common causal null hypothesis in such settings by combining semiparametric theory with ideas from evidence factors. Their test is asymptotically valid if at least one of the proposed causal models is correct, and asymptotically exact if exactly one of them is correct. However, the test is conservative when more than one model is valid, resulting in reduced power against the small but nonzero effects that arise in many real-world applications. In this talk, we propose a test that adapts to the number of correctly specified causal models. We do so by constructing a consistent estimator of the number of identified functionals that are zero under the null and adapting the downstream test accordingly. Under regularity conditions, our test remains asymptotically exact even when more than one causal model is correctly specified. Our adaptive test demonstrates substantially improved power over existing methods; in simulations, its performance is comparable to that of an oracle test that knows the number of correctly specified models. Our adaptive procedure thus enhances power while preserving robustness to causal model misspecification, offering a flexible and practical solution for hypothesis testing under model uncertainty.
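To make the adaptive idea concrete, the sketch below is a generic Python illustration of the kind of procedure the abstract describes: each candidate causal model supplies an estimate of an identified functional that equals zero under the null when that model is correct, a thresholding rule estimates how many of those functionals are zero, and the combined test is calibrated to that count. The threshold, the chi-square reference, and all function and variable names are illustrative assumptions, not the authors' construction.

# Hypothetical sketch of an adaptive multi-model null test (not the authors' exact procedure).
# Assumed inputs: psi_hat[k] estimates the functional identified by candidate model k
# (zero under the null if model k is correct), se[k] is its influence-function-based
# standard error, and n is the sample size.
import numpy as np
from scipy import stats

def adaptive_causal_null_test(psi_hat, se, n, alpha=0.05):
    psi_hat, se = np.asarray(psi_hat, dtype=float), np.asarray(se, dtype=float)
    z = psi_hat / se  # standardized functional estimates

    # Step 1: estimate how many functionals are zero under the null.
    # A slowly growing threshold (sqrt(log n) here, an illustrative choice) retains
    # the truly zero functionals while excluding clearly nonzero ones as n grows.
    threshold = np.sqrt(np.log(n))
    k_hat = max(1, int(np.sum(np.abs(z) <= threshold)))

    # Step 2: pool the k_hat smallest squared z-statistics and compare the result
    # to a chi-square reference with k_hat degrees of freedom (illustrative choice).
    test_stat = float(np.sort(z ** 2)[:k_hat].sum())
    p_value = float(stats.chi2.sf(test_stat, df=k_hat))
    return {"k_hat": k_hat, "stat": test_stat, "p_value": p_value,
            "reject": p_value < alpha}

# Toy usage: three candidate models; two functionals are (approximately) zero,
# one model is misspecified and yields a clearly nonzero functional.
rng = np.random.default_rng(0)
n = 2000
psi_hat = np.array([0.001, -0.002, 0.15]) + rng.normal(0.0, 1 / np.sqrt(n), 3)
se = np.full(3, 1 / np.sqrt(n))
print(adaptive_causal_null_test(psi_hat, se, n))

Calibrating the pooled statistic to k-hat rather than to the total number of candidate models is what would recover asymptotic exactness when several models are correct; the specific threshold rate above is only a placeholder for whatever consistency argument the authors' estimator uses.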