This item is under embargo and not available online per the author's request. For access information, please visit http://libanswers.wustl.edu/faq/5640.

ORCID

http://orcid.org/0000-0001-7322-1267

Date of Award

Spring 5-15-2021

Author's School

Graduate School of Arts and Sciences

Author's Department

Psychology

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Considerable work in the past decade has focused on representational accounts of how semantic information is acquired and organized, leading to the advent of modern Distributional Semantic Models (DSMs) that learn word meanings by extracting statistical information from large text corpora. However, mechanistic accounts of how meaning-related information is accessed and retrieved from semantic representations to produce responses in semantic tasks remain relatively understudied, especially for production-based tasks that require selecting a single response from among several activated competitors, as in free association and sentence completion. This dissertation evaluated the extent to which state-of-the-art DSMs, combined with algorithmic and process models, account for performance in two familiarity-driven tasks (relatedness and similarity judgments) and two production-based tasks (free association and sentence completion). Model comparisons revealed that while a process model based on spreading activation successfully accounted for relatedness and similarity judgments, free association responses and response latencies were best accounted for by an interactive model based on word frequency and semantic similarity, combined with a thresholding function that incorporated competition from neighboring words. In addition, when participants produced multiple responses in the free association task, the second response was highly dependent upon the first response rather than being driven primarily by the cue. In predicting Cloze sentence completion performance, a contextual “attention”-based DSM significantly outperformed other models, suggesting that information is accessed and retrieved in a syntactically constrained manner in language production tasks.
Collectively, these findings shed light on how meaning-related information is activated and how responses are differentially produced depending upon task demands. Notably, there appears to be little evidence for a task-independent model of semantic memory representation, underscoring the need to incorporate both task-specific retrieval mechanisms and different representational formats in theories of semantic memory structure and processing. Abandoning a common semantic representation for models of knowledge-driven tasks is a major departure from previous approaches.
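The free-association account described above (activation driven jointly by word frequency and semantic similarity, with a threshold that incorporates competition from neighboring words) can be illustrated with a minimal sketch. This is not the dissertation's actual model: the softmax form of the competition, the `alpha` weight, the threshold value, and all candidate scores are illustrative assumptions.

```python
import math

def activation(cue_similarity, log_frequency, alpha=0.6):
    """Combine semantic similarity to the cue and (log) word frequency
    into a single activation score; alpha weights similarity vs. frequency.
    (Illustrative linear combination, not the dissertation's equation.)"""
    return alpha * cue_similarity + (1 - alpha) * log_frequency

def produce_response(candidates, threshold=0.35):
    """Softmax-normalize activations so neighbors compete for a fixed
    share of activation, then emit the strongest candidate only if its
    share clears the threshold; otherwise no response is selected."""
    total = sum(math.exp(a) for _, a in candidates)
    shares = sorted(
        ((word, math.exp(a) / total) for word, a in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )
    word, share = shares[0]
    return word if share >= threshold else None

# Hypothetical candidates for the cue "cat": (word, activation score)
candidates = [
    ("dog", activation(0.8, 0.9)),
    ("mouse", activation(0.6, 0.5)),
    ("kitten", activation(0.7, 0.3)),
]
```

With these toy numbers, `produce_response(candidates)` selects "dog", while a stricter threshold (e.g. 0.5) yields no response, mimicking how competition from close neighbors can delay or suppress production.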

Language

English (en)

Chair and Committee

David A. Balota

Committee Members

Ian G. Dobbins, Jan Duchek, Brett Hyde, Jeffrey M. Zacks

Available for download on Friday, April 22, 2022
