Date of Award
8-12-2024
Degree Name
Doctor of Philosophy (PhD)
Degree Type
Dissertation
Abstract
Artificial intelligence (AI) outperforms humans in many applications, such as image processing, speech recognition, and decision-making. AI systems process information more efficiently and accurately than humans, making them invaluable in assisting human decision-making across domains like healthcare, finance, and academia. Despite these strengths, real-world applications often involve complex, context-specific judgments and a deep understanding of human values, which current AI systems may not fully grasp. In domains requiring creativity and critical thinking, AI systems rely heavily on existing human knowledge. Humans, in contrast, possess unique strengths such as intuition, which aids decision-making in uncertain or novel situations, and moral reasoning, which transcends mere efficiency or optimization. Combining the strengths of humans and AI systems is therefore essential for effective decision-making in real-world applications.

While AI systems offer significant advantages in supporting human decision-making, incorporating them poses considerable challenges. Human decision-making is complex and characterized by many distinctive traits. Empirical studies show that humans can be irrational, driven by unconscious processes, and sometimes unpredictable, exhibiting biases such as framing effects and confirmation bias. Developing AI systems that interact effectively with humans requires accounting for these traits. However, the integration of human decision models into AI systems has not been sufficiently explored in the field of human-AI interaction. Understanding and predicting human behavior and beliefs can enhance AI's ability to support and influence human decision-making, leading to more effective and human-aligned outcomes. By incorporating human decision models, AI systems can provide more personalized and context-aware assistance, improving both decision quality and human satisfaction.
In this dissertation, we address the characteristics of human decision-makers and explicitly incorporate human models into algorithm design. We aim to model human behavior and beliefs about AI systems in both one-shot and sequential decision-making scenarios. Drawing on existing research in psychology and economics, as well as data-driven methods trained on data collected from real humans, we develop models that predict human actions more accurately than those that assume human rationality. We also construct belief models that describe how humans adjust their actions in response to other players in the same decision-making environment. Using these human models, we explore strategies to assist or influence human decisions by designing information signals, modifying decision-making environments, and developing AI teammates that operate alongside humans. We conduct human-subject experiments to gain a deeper understanding of human behavior in specific applications and to validate that our designed AI systems can effectively interact with human decision-makers, guiding their decisions toward predefined goals.
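To illustrate the kind of behavioral model the paragraph above refers to, a common choice in psychology and economics is to replace the fully rational argmax with a noisy softmax (quantal-response) choice rule, which predicts that higher-utility actions are chosen more often but suboptimal actions still occur. The sketch below is a hypothetical minimal example, not the dissertation's actual model: the utilities, the temperature parameter, and the function names are illustrative assumptions.

```python
import math
import random

def rational_choice(utilities):
    # A fully rational agent deterministically picks the highest-utility action.
    return max(range(len(utilities)), key=lambda i: utilities[i])

def quantal_response_probs(utilities, temperature=1.0):
    # Softmax (quantal-response) choice rule: higher-utility actions are more
    # likely, but lower-utility actions retain nonzero probability.
    # Small `temperature` approaches the rational argmax; large values
    # approach a uniform random choice.
    exps = [math.exp(u / temperature) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def sample_action(utilities, temperature=1.0, rng=random):
    # Draw one action from the predicted (noisy) human choice distribution.
    probs = quantal_response_probs(utilities, temperature)
    return rng.choices(range(len(utilities)), weights=probs, k=1)[0]

if __name__ == "__main__":
    utilities = [1.0, 0.8, 0.1]  # hypothetical payoffs for three actions
    print(rational_choice(utilities))              # rational model: always action 0
    print(quantal_response_probs(utilities, 0.5))  # behavioral model: a distribution
```

Fitting the temperature (or a richer parameterization) to data collected from real participants is what lets such a model predict actions more accurately than the rational baseline.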
Language
English (en)
Chair
Chien-Ju Ho