Abstract

This thesis develops a comprehensive framework for fair sequential resource allocation in multi-agent systems, where a centralized allocator coordinates actions under global feasibility constraints while respecting the preferences of individual agents. Such systems, ranging from ridesharing platforms and homelessness intervention programs to power grid management, play a critical role in shaping access to essential resources. Yet existing approaches to resource allocation often prioritize aggregate utility, leading to systematic inequities across individuals and groups, particularly in sequential settings where decisions unfold over time. To address this challenge, we introduce the Distributed Evaluation, Centralized Allocation (DECA) framework, which unifies a broad class of real-world allocation problems by separating agent-side evaluation from a central allocator that must satisfy feasibility constraints while also respecting agents' preferences. Building on this framework, we develop methods to (i) detect and quantify temporal inequities through empirical studies and visualization tools, and (ii) design interventions that balance fairness and efficiency with controllable trade-offs. These interventions span post-processing corrections for deployed systems, learning-based in-processing methods that incorporate fairness during training, and data-centric pre-processing approaches that reduce downstream bias. Our contributions include fairness analyses of real-world domains such as ridesharing, homelessness services, and power grid operations, as well as algorithmic methods that operationalize fairness under centralized feasibility constraints. Viewing online data collection as a resource allocation problem, we also develop methods to improve fairness in mobility prediction through equitable online data collection.
We further extend fairness audits to contemporary AI systems by detecting biases in reward models used in reinforcement learning from human feedback (RLHF) for large language models, connecting classical fairness concerns to modern AI training pipelines. Together, these frameworks, methodologies, and empirical studies advance the design of AI systems that allocate resources not only efficiently but also equitably. By integrating fairness into sequential resource allocation, this work contributes toward building AI systems that are more accountable, trustworthy, and socially responsible.

Committee Chair

William Yeoh

Committee Members

Alvitta Ottley; Chien-ju Ho; Imanol Ibarra; William Yeoh; Yevgeniy Vorobeychik

Degree

Doctor of Philosophy (PhD)

Author's Department

Computer Science & Engineering

Author's School

McKelvey School of Engineering

Document Type

Dissertation

Date of Award

12-9-2025

Language

English (en)
