Primary Submission Category: Design of Experiments

Balancing Efficiency and Inference in Adaptive Experiments

Authors: Daniel Molitor, Ian Lundberg

Presenting Author: Daniel Molitor*

Adaptive experimental designs dramatically improve data efficiency by reallocating samples toward better-performing treatments as effect estimates accumulate, but they pose significant challenges for statistical inference. Standard estimators such as sample means and inverse propensity weighted estimators often yield biased treatment effect estimates, undermining a key advantage of randomized experiments. Additionally, adaptive experiments allocate samples unevenly, concentrating power on the optimal treatment(s) while leaving sub-optimal treatment arms underpowered. This is a critical limitation when researchers need reliable inference across all treatment arms.
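
The bias described above is easy to reproduce. As an illustration only (not taken from the abstract), the following Python sketch simulates a two-arm experiment under an assumed greedy allocation rule after a short burn-in; the greedy rule, burn-in length, and all parameters are illustrative choices. The naive sample mean of each arm comes out below the true value, the well-known negative bias of adaptively collected data.

```python
# Toy illustration (not from the abstract): two arms with identical true
# means, sampled by an assumed greedy rule after a short burn-in. The
# naive per-arm sample means come out below the truth, showing the
# negative bias that adaptive allocation induces.
import numpy as np

rng = np.random.default_rng(0)
true_means = [0.5, 0.5]          # both arms identical; any gap is estimation noise
n_steps, n_reps, burn_in = 200, 2000, 20
arm0_means, arm1_means = [], []

for _ in range(n_reps):
    rewards = [[], []]
    for t in range(n_steps):
        if t < burn_in:
            arm = t % 2                                           # alternate during burn-in
        else:
            arm = int(np.mean(rewards[1]) > np.mean(rewards[0]))  # greedy choice
        rewards[arm].append(rng.binomial(1, true_means[arm]))
    arm0_means.append(np.mean(rewards[0]))
    arm1_means.append(np.mean(rewards[1]))

# Both averages fall below the true value of 0.5.
print("avg naive estimates:", np.mean(arm0_means), np.mean(arm1_means))
```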

We propose a framework that builds on existing methods that yield unbiased treatment effect estimates with anytime-valid confidence sequences, such as the Mixture Adaptive Design of Liang and Bojinov. By dynamically eliminating treatment arms once their confidence sequences exclude zero, our framework harnesses the efficiency gains of adaptive experiments while maintaining valid inference and sufficient power across all treatment arms.
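
The abstract does not spell out implementation details, but the arm-elimination logic can be sketched as follows. This is a simplified stand-in, not the authors' MAD-based procedure: it assumes Bernoulli outcomes, round-robin sampling over the control and the still-active treatment arms, and a generic Robbins-style normal-mixture confidence sequence in place of the estimator-specific sequences the abstract relies on; `cs_radius` and `adaptive_elimination` are hypothetical names.

```python
# Minimal sketch of confidence-sequence-based arm elimination. Assumptions
# (not from the abstract): Bernoulli outcomes, round-robin sampling over the
# control and active treatment arms, and a generic normal-mixture confidence
# sequence; `cs_radius` and `adaptive_elimination` are hypothetical names.
import numpy as np

def cs_radius(n, alpha=0.05, sigma=0.5, rho=1.0):
    """Anytime-valid radius around a sample mean of n sigma-sub-Gaussian
    observations (normal-mixture boundary with mixing scale rho)."""
    if n == 0:
        return np.inf
    v = n * rho**2 + 1.0
    return sigma * np.sqrt(2.0 * (v / rho**2) * np.log(np.sqrt(v) / alpha)) / n

def adaptive_elimination(true_means, horizon=20000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    k = len(true_means)                    # arm 0 is the control
    counts, sums = np.zeros(k), np.zeros(k)
    active = set(range(1, k))              # treatment arms still being sampled
    decided = {}

    for t in range(horizon):
        if not active:
            break
        # Round-robin over the control and every still-active treatment arm.
        arm = [0, *sorted(active)][t % (len(active) + 1)]
        counts[arm] += 1
        sums[arm] += rng.binomial(1, true_means[arm])

        means = np.divide(sums, counts, out=np.zeros(k), where=counts > 0)
        for a in list(active):
            # Conservative CS for the effect: sum of the two arms' radii
            # (no multiplicity correction across arms in this sketch).
            radius = cs_radius(counts[a], alpha) + cs_radius(counts[0], alpha)
            effect = means[a] - means[0]
            if abs(effect) > radius:       # sequence excludes zero: retire the arm
                active.remove(a)
                decided[a] = (effect - radius, effect + radius)
    return decided, counts

# Example: control plus three treatments, one clearly better, one clearly worse.
decided, counts = adaptive_elimination([0.30, 0.45, 0.30, 0.55])
print(decided)
print(counts)
```

In this sketch an arm is retired as soon as the confidence sequence for its effect excludes zero, so sampling effort shifts to arms whose effects are still undecided; the anytime validity of the sequences is what makes this continuous monitoring legitimate.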

We validate this framework through simulations and a real-world experiment assessing how a criminal record affects hiring decisions and how that effect is moderated by resume features. We compare the results of our adaptive experiment to those of a standard randomized experiment, demonstrating that our method substantially reduces sample size requirements while maintaining valid inference and ensuring sufficient power across all treatment arms.