ORCID

https://orcid.org/0009-0007-1394-3410

Date of Award

Spring 5-13-2024

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Master of Science (MS)

Degree Type

Thesis

Abstract

Trust in Large Language Models (LLMs) has emerged as a pivotal concern. Despite the transformative potential of LLMs to enhance the interpretability and interactivity of complex datasets, the opacity of these models and instances of inaccuracy or bias have led to a significant trust deficit among end users. Moreover, people tend to personify AI tools built on these LLMs, attributing to them abilities and sensibilities they do not truly possess. This thesis exploits this personification and proposes a comprehensive framework of trust repair policies tailored to the challenges inherent in LLM annotations within data journalism contexts. Grounded in principles of transparency, accountability, user control, feedback integration, and ethical consideration, our research aims to mend the trust breach and foster a more reliable, user-centric approach to AI-assisted data interpretation. Employing a novel experimental design with 84 participants across diverse demographics, we simulate the dynamics of trust formation, breach, and repair in the context of data visualizations, maps, and other visual journalism from The New York Times Graphics Desk and The Washington Post. Our findings reveal that journalists, regardless of data visualization expertise, can identify inaccuracies in AI-generated content. Initial AI accuracy did not significantly influence long-term trust, but journalists with relevant expertise exhibited higher cognitive trust when faced with incorrect summaries. Surprisingly, specific apology strategies had limited impact on trust repair; instead, the accuracy and reliability of AI-generated content played a crucial role in maintaining and restoring trust. These findings emphasize the importance of accuracy and transparency in fostering trust between journalists and AI tools, highlighting the need for AI systems that prioritize real-time accuracy. This research contributes to the discourse on the responsible use of AI in data journalism and underscores the significance of collaborative efforts within newsrooms to ensure the integrity of AI-assisted storytelling.

Language

English (en)

Chair

Alvitta Ottley, Computer Science & Engineering

Committee Members

Caitlin Kelleher, Yevgeniy Vorobeychik
