
Primary Submission Category: Generalizability/Transportability

Policy Learning under Biased Sample Selection

Authors: Roshni Sahoo, Lihua Lei, Stefan Wager

Presenting Author: Roshni Sahoo*

The empirical risk minimization approach to data-driven decision making assumes that the target distribution on which we want to deploy our decision rule is the same distribution from which our training set is drawn. In many cases, however, the training sample may be biased: some groups (characterized by observable or unobservable covariates) may be under- or over-represented relative to the target population, so empirical risk minimization over the training set may fail to yield rules that perform well at deployment. We propose a model of sampling bias called $\Gamma$-biased sampling, in which observed covariates can affect the probability of sample selection arbitrarily, but the unexplained variation in the selection probability is bounded by a constant factor $\Gamma$. Applying the distributionally robust optimization framework, we propose a method for learning a decision rule that minimizes the worst-case risk over the family of target distributions that could have generated the training distribution under $\Gamma$-biased sampling. We apply a result of Rockafellar and Uryasev to show that this problem is equivalent to an augmented convex risk minimization problem. We give statistical guarantees for learning a robust model using the method of sieves and propose a deep learning algorithm whose loss function captures our robustness target. We empirically validate the proposed method in simulations and in a case study on ICU length-of-stay prediction.
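The Rockafellar–Uryasev result referenced above expresses conditional value-at-risk (CVaR) as the minimum of a convex objective in an auxiliary scalar, which is what makes worst-case risk minimization tractable by joint optimization. A minimal illustrative sketch: under the simplifying assumption that the unknown likelihood ratio between target and training distributions is bounded above by $\Gamma$, the worst-case expected loss equals the CVaR at level $\alpha = 1 - 1/\Gamma$, and the `cvar_ru` function below evaluates it via the Rockafellar–Uryasev formula. This is a generic bounded-likelihood-ratio sketch, not the paper's exact objective, which handles covariate-dependent selection.

```python
import numpy as np

def cvar_ru(losses, alpha):
    """CVaR via the Rockafellar--Uryasev formula:
    CVaR_alpha(L) = min_eta { eta + E[(L - eta)_+] / (1 - alpha) }.

    The empirical minimizer eta* is attained at a sample point, so a
    search over the observed losses solves the empirical objective.
    In training, eta would instead be minimized jointly with the
    model parameters, keeping the problem convex in (theta, eta).
    """
    losses = np.asarray(losses, dtype=float)
    vals = [eta + np.mean(np.maximum(losses - eta, 0.0)) / (1.0 - alpha)
            for eta in losses]
    return float(min(vals))

def worst_case_risk(losses, gamma):
    """Worst-case expected loss over target distributions whose
    likelihood ratio w.r.t. the training distribution is at most
    gamma (an illustrative simplification of Gamma-biased sampling):
    the adversary up-weights the worst 1/gamma fraction of losses,
    which is exactly CVaR at level alpha = 1 - 1/gamma."""
    alpha = 1.0 - 1.0 / gamma
    if alpha <= 0.0:  # gamma = 1: no sampling bias, ordinary mean risk
        return float(np.mean(losses))
    return cvar_ru(losses, alpha)
```

For example, with losses `[1, 2, 3, 4]` and `gamma = 2`, the adversary concentrates on the worst half of the sample, so the worst-case risk is the mean of the top two losses rather than the overall mean. As `gamma` grows, the objective interpolates from the average loss toward the maximum loss.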