ORCID

https://orcid.org/0000-0001-5310-6499

Date of Award

8-28-2023

Author's School

Graduate School of Arts and Sciences

Author's Department

Psychology

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Successful communication requires that listeners not only identify speech, but do so while maintaining performance on other tasks—like remembering what a conversational partner said or paying attention while driving. Although there is a large body of evidence indicating that cues such as audiovisual speech and semantic context substantially improve speech identification, less is known about how they affect the listener’s ability to perform simultaneous cognitive tasks (i.e., how they affect one aspect of listening effort). This set of six experiments systematically evaluates how cues that robustly benefit speech intelligibility—specifically, audiovisual speech and semantic context—affect dual-task costs. Results were consistent with the claim that seeing the talker reduces dual-task costs in difficult listening conditions (that is, those in which the visual signal substantially benefits speech intelligibility), but has little effect or may even increase dual-task costs when the level of the background noise reduces the influence of visual information on speech identification. This study also shows that semantic context improves listeners’ ability to complete simultaneous tasks, particularly in difficult levels of background noise and in audiovisual conditions. Finally, to facilitate conducting research like this remotely, I developed a novel dual-task paradigm that can be implemented online or in-lab and can accommodate audiovisual as well as audio-only speech. Given the novelty of this task, this study also includes psychometric experiments that establish positive and negative controls, provide evidence for the convergent validity and sensitivity of the measure relative to a commonly used task in the listening effort literature, and generate performance curves for speech identification accuracy as well as response times across a wide range of listening difficulties for both audio-only and audiovisual speech.
Thus, in addition to jointly evaluating the effects of audiovisual speech, levels of analysis (words vs. sentences), and semantic contextual cues on dual-task costs, this study enables other researchers to address theoretical questions related to the cognitive mechanisms supporting audiovisual speech processing beyond the specific issues addressed in this paper and without being limited by the necessity to conduct research in person.

Language

English (en)

Chair and Committee

Kristin Van Engen

Available for download on Thursday, August 28, 2025
