How to be fair? A study of label and selection bias

Seminar

Date: Friday 24 November 2023

Time: 10.00 – 11.00

Location: Room L30, DSV, Nod building, Borgarfjordsgatan 12, Kista

Welcome to a research seminar on biased data and how to create fair models. The speaker is Toon Calders, a researcher from Belgium.

The main entrance to DSV, the Nod building, Kista. Photo: Åse Karlén.

On November 24, Toon Calders will visit the Department of Computer and Systems Sciences, DSV. He is a professor of Computer Science at the University of Antwerp, Belgium.

Toon Calders has been invited by the Data Science Research Group and will lead a research seminar at DSV. During the seminar, he will share his latest research results based on a scientific article that has been published in Machine Learning.

Read the article “How to be fair? A study of label and selection bias”

The seminar is organised at DSV in Kista. No registration needed!

Find your way to DSV

Questions? Get in touch with Franco Rugolon

About the seminar

It is widely accepted that biased data leads to biased and thus potentially unfair models. Therefore, several measures for bias in data and model predictions have been proposed, as well as bias mitigation techniques whose aim is to learn models that are fair by design.

Despite the myriad of mitigation techniques developed in the past decade, however, it is still poorly understood which methods work under which circumstances. Recently, Wick et al. showed, with experiments on synthetic data, that there exist situations in which bias mitigation techniques lead to more accurate models when measured on unbiased data. Nevertheless, in the absence of a thorough mathematical analysis, it remains unclear which techniques are effective under what circumstances.

We propose to address this problem by establishing relationships between the type of bias and the effectiveness of a mitigation technique, where we categorize the mitigation techniques by the bias measure they optimize. In this paper we illustrate this principle for label and selection bias on the one hand, and demographic parity and "We're All Equal" on the other hand. Our theoretical analysis allows us to explain the results of Wick et al., and we also show that there are situations where minimizing fairness measures does not result in the fairest possible distribution.
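For readers unfamiliar with demographic parity, one of the fairness measures mentioned above, the sketch below illustrates how it is typically computed: it compares the rates of positive predictions across groups defined by a protected attribute, and a mitigation technique that optimizes demographic parity drives this difference towards zero. The sketch is not taken from the paper; the function name and the toy data are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): demographic parity difference
# for a binary classifier, i.e. the gap in positive-prediction rates
# between two groups defined by a protected attribute.

def demographic_parity_difference(y_pred, group):
    """Return the absolute difference in positive-prediction rates
    between the two groups. A value of 0 means demographic parity holds."""
    rates = {}
    for g in set(group):
        preds = [p for p, a in zip(y_pred, group) if a == g]
        rates[g] = sum(preds) / len(preds)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Toy example: predictions for eight individuals, four in each group.
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # |0.75 - 0.25| = 0.5
```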