Primary Submission Category: Randomized Studies

Do algorithms help humans make better decisions? A framework for experimental evaluation

Authors: Melody Huang, Eli Ben-Michael, D. James Greiner, Kosuke Imai, Zhichao Jiang, Sooahn Shin

Presenting Author: Sooahn Shin*

The use of data-driven algorithms has become ubiquitous in today's society. And yet, in many cases, humans still make the final decisions, especially when the stakes are high. The critical question, therefore, is whether algorithms help humans make better decisions. We introduce a new methodological framework for experimentally evaluating the causal impact of algorithmic recommendations on human decisions. We first formalize a decision maker's ability to make correct decisions using standard classification metrics based on potential outcomes. We then consider an experiment in which cases are randomly assigned to human decision makers who either do or do not receive algorithmic recommendations. We show how to analyze the data from such an experiment to compare the performance of three alternative decision-making systems (human alone, human with algorithm, and algorithm alone) under a minimal set of assumptions. We also develop a sensitivity analysis to assess how robust the empirical estimates are to potential violations of the underlying assumptions. We apply the proposed methodology to the first randomized controlled trial of a pretrial public safety assessment and examine whether the provision of risk assessment scores improves a judge's decisions to impose cash bail.
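
To make the arm-wise comparison concrete, the following Python sketch illustrates the kind of contrast such an experiment enables. It is an illustration only, not the authors' estimator: the column names (`z`, `a`, `d`, `y`) and the synthetic data are hypothetical, and this naive comparison ignores the selective-labels problem (outcomes are observed only when a case is released) that the paper's identification assumptions and sensitivity analysis are designed to address.

```python
import numpy as np
import pandas as pd

# Hypothetical synthetic data: one row per case.
#   z: randomized arm (1 = judge sees the algorithmic recommendation)
#   a: algorithm's binary recommendation (1 = recommend cash bail)
#   d: judge's binary decision (1 = cash bail imposed)
#   y: observed binary outcome (1 = e.g., failure to appear)
rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, n)
a = rng.integers(0, 2, n)
# Assume judges loosely follow the recommendation when they see it.
d = np.where((z == 1) & (rng.random(n) < 0.7), a, rng.integers(0, 2, n))
y = rng.integers(0, 2, n)
df = pd.DataFrame({"z": z, "a": a, "d": d, "y": y})

def classification_metrics(decision, outcome):
    """False positive / false negative proportions of a binary decision,
    treating the observed outcome as the ground truth."""
    decision, outcome = np.asarray(decision), np.asarray(outcome)
    return {
        "false_positive": np.mean((decision == 1) & (outcome == 0)),
        "false_negative": np.mean((decision == 0) & (outcome == 1)),
    }

# Randomization of z identifies the human-with-algorithm vs. human-alone
# contrast; the algorithm-alone row is a naive benchmark.
results = {
    "human + algorithm": classification_metrics(df.loc[df.z == 1, "d"],
                                                df.loc[df.z == 1, "y"]),
    "human alone":       classification_metrics(df.loc[df.z == 0, "d"],
                                                df.loc[df.z == 0, "y"]),
    "algorithm alone":   classification_metrics(df["a"], df["y"]),
}
print(pd.DataFrame(results).T)
```

Because `z` is randomized, the first two rows of the printed table are directly comparable; relating all three systems to one another, and to the correct decisions defined via potential outcomes, is what requires the additional assumptions the abstract describes.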