
Primary Submission Category: Machine Learning and Causal Inference

Controlling Label Noise in Deep Classifiers via Causal Regularization

Authors: Juan Castorena

Presenting Author: Juan Castorena*

The explosion in data availability, computational power, and scientific advances has enabled deep learning (DL) methods at the forefront of every major AI advancement of the last decade. In the supervised setting, state-of-the-art DL classifiers rely on training with very large datasets of clean, well-annotated samples. Unfortunately, such annotation effort is impractical for many real-world applications, and practitioners must often accept labeling errors as inevitable. Here, we address the entanglement between the clean and noisy conditional label distributions through a principled analysis under the lens of causality. We encode the data-generating process of a variety of label-noise settings via structural causal models and, using the rules of do-calculus and transportability theory, analyze the conditions for identifiability, ultimately leading to disentanglement. Empirical evidence on synthetically corrupted benchmark datasets shows that deep classifiers equipped with the identifiability constraints, imposed here via loss regularizations, are more robust to out-of-distribution examples while also producing more interpretable features than their causally unconstrained counterparts.
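The abstract does not specify the functional form of the causal regularization. As an illustrative sketch only, the following NumPy snippet shows one common way such identifiability constraints are imposed in practice: a standard cross-entropy loss augmented with a hypothetical invariance penalty (here a KL divergence between predictive distributions on original and perturbed inputs), with `lam` as an assumed trade-off hyperparameter. The specific penalty and its name are assumptions, not the authors' method.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with a stability shift."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    """Mean negative log-likelihood of the true labels."""
    n = len(labels)
    return -np.log(probs[np.arange(n), labels] + 1e-12).mean()

def causal_regularized_loss(logits, logits_perturbed, labels, lam=0.1):
    """Cross-entropy plus a hypothetical invariance-style regularizer:
    the predictive distribution should not change under interventions
    on the (assumed) label-noise mechanism, approximated here by
    comparing predictions on original vs. perturbed inputs."""
    p = softmax(logits)
    q = softmax(logits_perturbed)
    ce = cross_entropy(p, labels)
    # KL(p || q) >= 0 is used as an illustrative invariance penalty.
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=1).mean()
    return ce + lam * kl
```

When the perturbed logits equal the originals, the penalty vanishes and the loss reduces to plain cross-entropy; any deviation is penalized in proportion to `lam`.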