Date of Award

8-7-2024

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

In recent years, the rapid advancement of artificial intelligence (AI) technology has sparked what the mass media have dubbed a global AI race. At the heart of this race lies the increasing integration of AI into daily life as well as into analytical workflows and high-risk decision-making processes. In visual analytics, the goal is to unite the strengths of the human and the machine through an interactive visual interface so that they can solve complex and overwhelming tasks together. To foster a fruitful collaboration, researchers in the visual analytics community have proposed various intelligent methods to better understand users and assist them during their analysis in real time. As these AI-powered visual analytics tools become more prevalent, it is critical to understand how users behave when interacting with a machine teammate and to what extent they allow its suggestions to influence their data exploration and analytical decisions. Understanding the underlying factors that shape these interactions is essential to ensure that AI guidance is used effectively and that the cost-benefit ratio of building such tools is maximized. This dissertation investigates how to build a reliable and collaborative human-machine team in visual analytics. It examines the interplay between human reasoning and AI inferences in data exploration tasks and interrogates the mediating role of trust. We work toward creating such a collaboration by exploring: 1) how existing user modeling techniques can be applied in a human-machine teaming context, 2) how task difficulty and different levels of explanation from the machine teammate affect users' trust, their interactions with the visual analytics tool, and their throughput in a visual data foraging scenario, and 3) how to design and develop a visual analytics tool tailored to intelligence analysts that supports the analysis of automatic speech-to-text output, fosters trust in the machine, and ultimately enables effective human-machine teaming while mitigating the risk of misinterpreting critical information.

Language

English (en)

Chair

Chien-Ju Ho

Available for download on Thursday, September 18, 2025
