
Primary Submission Category: Machine Learning and Causal Inference

A Model Ensemble Approach to Individual Fairness in Machine Learning

Authors: Bernardo Modenesi, Lucia Wang, Ameya Diwan

Presenting Author: Bernardo Modenesi

Individual fairness is based on the principle that similar observations should be treated similarly by a machine learning (ML) model, addressing limitations of group fairness methods. Despite its intuitive appeal, implementing individual fairness algorithms is challenging because of the difficulty of defining a similarity metric between individuals. In this paper, we develop a model-ensemble approach, inspired by individual fairness, for assessing the fairness of ML models. Leveraging results from the double/causal ML literature and ML clustering techniques, our method requires considerably fewer assumptions than previous individual fairness methods; it is also model-agnostic and avoids cherry-picking decisions in the fairness assessment. Our data-driven method involves: (i) removing variation in the dataset related to sensitive attributes using causal ML; (ii) clustering observations using random forests and a Bayesian network algorithm; and (iii) performing within-cluster inference to test whether the model treats similar observations similarly, applying a multiple-hypothesis-testing correction to aggregate the results. We provide a single p-value for the null hypothesis that the model is unbiased in the individual-fairness sense, and we construct a scale that measures the extent of bias against minorities, making the p-value easier for decision-makers to interpret. We apply the methodology to assess bias in several contexts and release it as a Python package.
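
To make the pipeline concrete, the sketches below illustrate one way each step could look in Python. None of the function names or modelling choices are from the authors' package; they are assumptions for illustration. Step (i) might be approximated by partialling the sensitive attribute out of each feature with cross-fitted random-forest regressions, a simplified stand-in for the causal-ML step:

```python
# Hypothetical sketch of step (i); `residualize` is not the authors' API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict


def residualize(X, sensitive):
    """Strip the variation each feature shares with the sensitive attribute.

    Out-of-fold fitted values mimic the sample splitting used in double ML;
    the residuals carry only the variation the sensitive attribute does not
    explain, so downstream similarity comparisons ignore it.
    """
    X = np.asarray(X, dtype=float)
    s = np.asarray(sensitive, dtype=float).reshape(-1, 1)
    resid = np.empty_like(X)
    for j in range(X.shape[1]):
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        # Cross-fitted predictions avoid overfitting the nuisance model.
        fitted = cross_val_predict(model, s, X[:, j], cv=5)
        resid[:, j] = X[:, j] - fitted
    return resid
```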
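
Step (ii) could then group the residualized observations by how often a forest of random trees routes them to the same leaves; k-means on the leaf embedding is a simple stand-in for the random-forest and Bayesian-network clustering named in the abstract:

```python
# Hypothetical sketch of step (ii); not the authors' clustering algorithm.
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomTreesEmbedding


def cluster_observations(X_resid, n_clusters=10):
    """Cluster rows that a forest of random trees considers similar.

    Each row is encoded by the leaves it falls into across the forest, so
    rows sharing many leaves (i.e., similar observations) end up close in
    the embedding and land in the same k-means cluster.
    """
    leaves = RandomTreesEmbedding(n_estimators=100, random_state=0).fit_transform(X_resid)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(leaves.toarray())
```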
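
Finally, step (iii) might test, within each cluster, whether the audited model's outputs differ across sensitive groups, then fold the per-cluster p-values into one audit-level p-value via a multiplicity correction (Holm, in this guess at the mechanics; the paper's exact tests may differ):

```python
# Hypothetical sketch of step (iii); the paper's exact tests may differ.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests


def fairness_pvalue(predictions, sensitive, labels):
    """Within-cluster tests plus a multiple-testing correction.

    In each cluster, compare the audited model's outputs across sensitive
    groups with a Mann-Whitney U test; Holm-adjust the per-cluster p-values
    and report the smallest adjusted one as the single audit-level p-value.
    """
    predictions = np.asarray(predictions, dtype=float)
    sensitive = np.asarray(sensitive)
    labels = np.asarray(labels)
    pvals = []
    for c in np.unique(labels):
        in_c = labels == c
        g0 = predictions[in_c & (sensitive == 0)]
        g1 = predictions[in_c & (sensitive == 1)]
        if len(g0) > 1 and len(g1) > 1:  # need both groups present to test
            pvals.append(stats.mannwhitneyu(g0, g1).pvalue)
    adjusted = multipletests(pvals, method="holm")[1]
    return float(adjusted.min())
```

Under these assumptions, an audit of a model's predictions `yhat` on features `X` with a binary sensitive attribute `s` would chain the three sketches: `labels = cluster_observations(residualize(X, s))`, then `p = fairness_pvalue(yhat, s, labels)`.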