Primary Submission Category: Causal Fairness and Bias/Discrimination
General Methods for Fair Classification by Constraining Path-Specific Causal Effects
Authors: León Segovia, Herbert Susmann, Iván Diaz
Presenting Author: León Segovia*
Machine learning (ML) for efficient decision making has proliferated across domains, particularly in matters of resource allocation and policy implementation. It is well documented that these algorithms can codify sociostructural biases present in training data, thereby perpetuating unequal treatment of minoritized groups. To overcome this issue, researchers must be able to formalize the underlying relationships between discrimination and existing disparities. We use causal mediation analysis, a promising framework for characterizing fairness in ML, to achieve this goal by developing classification methods that improve the fairness of outcomes. Building on existing work, our methods operationalize fairness as minimization of the expected loss subject to constraints on path-specific effects (PSEs). This project extends existing methods for constrained risk minimization in two ways: by using causal parameters developed in prior work to address problems where the constrained paths involve intermediate confounding, and by accommodating non-binary exposures. We implement recently developed causal parameters as alternative definitions of PSEs that appropriately account for interventions on information shared between variables, enabling a more rigorous analysis of the complex dynamics between discrimination and disparities mediated by socioeconomic factors. We establish the relevance of our methods by applying them to case studies in public health research.
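
As a schematic illustration of the constrained risk minimization described above (the notation is illustrative rather than taken from this abstract: $f_\theta$ denotes a classifier with parameters $\theta$, $L$ a loss function, and $\delta$ a fairness tolerance):

\[
\min_{\theta} \; \mathbb{E}\!\left[ L\big(Y, f_\theta(X)\big) \right]
\quad \text{subject to} \quad
\left| \mathrm{PSE}(f_\theta) \right| \le \delta,
\]

where $\mathrm{PSE}(f_\theta)$ is the path-specific effect of the sensitive exposure on the classifier's predictions along the paths deemed unfair, and $\delta \ge 0$ controls how much of that effect is tolerated.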