Date of Award

Summer 9-13-2023

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

As algorithmic decision-making systems become increasingly entrenched in human-centric domains such as hiring and lending, it is crucial that these systems do not perpetuate historical bias or unfairly discriminate against sensitive demographic groups. However, in these domains, considering group fairness as the sole factor often leads to two significant consequences: 1) incentivizing individuals to strategically alter their behavior to obtain desired outcomes (e.g., hiding debt to qualify for a loan), and 2) achieving fairness at the expense of individual welfare (e.g., equalizing lending rates between groups by offering fewer total loans). We explore these consequences from both theoretical and empirical perspectives with the goal of characterizing when and why such phenomena occur, as well as developing solutions to mitigate their negative side effects. Our findings suggest that traditional group-fair learning, i.e., optimizing solely for group fairness and predictive performance, can frequently result in both of the aforementioned consequences, implying that an isolated focus on group fairness can lead to increased manipulative behavior and widespread decreases in individual welfare. Notably, the former has the potential to decrease model fairness, suggesting that optimizing for group fairness can be counterproductive (i.e., resulting in less fair models) when the model creates incentives for strategic behavior. In light of these pitfalls of group-fair learning, we propose several approaches to mitigate their adverse effects. From the perspective of strategic behavior, we propose an auditing mechanism that discourages manipulative behavior and promotes true feature changes (i.e., promotes recourse). From the perspective of individual welfare, we develop two learning schemes that preserve individual welfare while achieving high levels of performance and group fairness. In addition to providing theoretical guarantees for both these welfare-aware learning schemes and the auditing mechanism, we also demonstrate their practical efficacy through experiments on datasets from several domains. Our results indicate that by adopting a more nuanced approach to group-fair learning, it is possible to build models that avoid these negative side effects without compromising performance or group fairness.

Language

English (en)

Chair

Yevgeniy Vorobeychik

Committee Members

Sanmay Das
