Primary Submission Category: Machine Learning and Causal Inference

Policy learning “without” overlap: Pessimism and generalized empirical Bernstein’s inequality

Authors: Ying Jin, Zhimei Ren, Zhuoran Yang, Zhaoran Wang

Presenting Author: Zhimei Ren*

We study offline policy learning, which uses observations collected a priori to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensity of exploring every action is bounded away from zero for all individual characteristics. Since one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions.
We propose a new algorithm that optimizes lower confidence bounds (LCBs) — instead of point estimates — of the policy values. The LCBs are constructed using the behavior policies that collected the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which depends only on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our analysis, we develop a new self-normalized concentration inequality for inverse-propensity-weighted (IPW) estimators, generalizing the empirical Bernstein inequality to unbounded and non-i.i.d. data.
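The pessimism principle described above — select the policy maximizing a lower confidence bound on its IPW value estimate, rather than the point estimate itself — can be illustrated with a minimal sketch for a finite policy class. This is not the paper's actual LCB construction (whose penalty comes from the generalized empirical Bernstein inequality and handles adaptive, non-i.i.d. data); all function names and the simplified empirical-Bernstein-style penalty below are illustrative assumptions.

```python
# Illustrative sketch of pessimistic offline policy learning over a
# finite policy class. The LCB penalty here is a simplified
# empirical-Bernstein-style term, NOT the paper's exact bound.
import numpy as np

def ipw_values(policy, X, A, R, prop):
    """Per-sample IPW value estimates of `policy` on logged data.

    prop[i] is the behavior policy's propensity of the logged action A[i].
    """
    chosen = np.array([policy(x) for x in X])
    return (chosen == A) / prop * R

def lcb(policy, X, A, R, prop, delta=0.05):
    """Lower confidence bound on the policy value (simplified penalty)."""
    v = ipw_values(policy, X, A, R, prop)
    n = len(v)
    mean, var = v.mean(), v.var(ddof=1)
    b = v.max() - v.min()  # crude empirical range proxy; IPW terms may be unbounded
    log_term = np.log(2.0 / delta)
    penalty = np.sqrt(2 * var * log_term / n) + 7 * b * log_term / (3 * (n - 1))
    return mean - penalty

def pessimistic_select(policies, X, A, R, prop, delta=0.05):
    """Pessimism principle: pick the policy with the largest LCB."""
    return max(policies, key=lambda pi: lcb(pi, X, A, R, prop, delta))
```

Because the penalty scales with the variance of the per-sample IPW terms, a policy whose actions were rarely explored (small propensities, hence large weights) receives a wide confidence interval and is automatically discounted — only the overlap for the policy ultimately selected matters.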