
Primary Submission Category: Functional Estimation and Causal Inference

Double Robust Estimation with Split Training Data: Achieving Minimax Optimality with Undersmoothed Local Averaging Linear Smoothers

Authors: Alec McClean, Edward Kennedy, Sivaraman Balakrishnan, Larry Wasserman

Presenting Author: Alec McClean*

Double robust (DR) estimators have gained popularity in causal inference due to their favorable convergence properties. However, minimax optimal estimators often rely on correcting the bias of the DR estimator through a higher-order von Mises expansion. Instead, minimax optimal DR estimators can be constructed by splitting the training data and estimating undersmoothed nuisance functions on independent samples; we refer to the result as the "double robust split training" (DR-ST) estimator. In this work, we estimate the expected conditional covariance and the average outcome under treatment using DR-ST estimators. We derive an asymptotically linear expansion that holds under a nonparametric stability condition on the nuisance function estimators, and characterize when linear smoothers satisfy this condition. We examine three DR-ST estimators based on local averaging linear smoothers, assuming throughout that the nuisance functions belong to Hölder smoothness classes. We demonstrate that local polynomial regression can achieve semiparametric efficiency under minimal smoothness conditions. We then propose a covariate-density-adapted local polynomial regression that is minimax optimal when the covariate density is known or well-estimated, and show that a Normal limit distribution exists even in the low-regularity case by deriving a slower-than-root-n central limit theorem. Finally, we present an easy-to-implement DR-ST estimator using 1-Nearest-Neighbors and illustrate its convergence properties.
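To make the construction concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a DR split-training estimator of the expected conditional covariance E[Cov(A, Y | X)] = E[(A - mu_a(X))(Y - mu_y(X))], using the 1-Nearest-Neighbors variant mentioned above: the two nuisance regressions mu_a and mu_y are fit on independent splits of the training data, and the residual product is averaged over a held-out fold. The data-generating process, split sizes, and function names are hypothetical choices for the example.

```python
import numpy as np

def one_nn_predict(X_train, y_train, X_eval):
    """1-Nearest-Neighbor regression: return the response of the closest training point."""
    # Pairwise squared Euclidean distances between evaluation and training points
    d2 = ((X_eval[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d2.argmin(axis=1)]

def dr_st_expected_cond_cov(X, A, Y, rng):
    """Illustrative DR-ST estimator of E[Cov(A, Y | X)].

    The sample is split three ways: mu_a is fit by 1-NN on split 1,
    mu_y by 1-NN on the independent split 2, and the residual product
    (A - mu_a(X)) * (Y - mu_y(X)) is averaged on the held-out fold.
    Fitting the two nuisances on independent samples is the "split
    training" step that makes the bias a product of the two
    nuisance errors.
    """
    n = len(Y)
    idx = rng.permutation(n)
    s1, s2, te = np.array_split(idx, 3)          # two training splits + evaluation fold
    mu_a = one_nn_predict(X[s1], A[s1], X[te])   # mu_a estimated on split 1 only
    mu_y = one_nn_predict(X[s2], Y[s2], X[te])   # mu_y estimated on split 2 only
    return np.mean((A[te] - mu_a) * (Y[te] - mu_y))

# Simulated example with shared noise, so the true value is
# E[Cov(A, Y | X)] = Cov(eps, 0.5 * eps) = 0.5.
rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(size=(n, 1))
eps = rng.normal(size=n)
A = np.sin(2 * np.pi * X[:, 0]) + eps
Y = np.cos(2 * np.pi * X[:, 0]) + 0.5 * eps + rng.normal(scale=0.5, size=n)
print(dr_st_expected_cond_cov(X, A, Y, rng))  # estimate should be near 0.5
```

In this sketch the product-bias structure is visible: the first-order errors of the 1-NN fits enter the estimator only through their product, which is why undersmoothed nuisance estimates on independent splits can still yield a fast rate for the functional.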