ORCID

http://orcid.org/0000-0002-5781-0042

Date of Award

Summer 8-15-2021

Author's School

Graduate School of Arts and Sciences

Author's Department

Psychology

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Recent advances in machine learning have allowed natural language responses to be used to predict outcomes of interest to memory researchers, such as the confidence with which recognition decisions are made. The present experiments were designed to leverage this novel methodological approach by soliciting free-response justifications of judgments of learning (JOLs), whereby people not only assess the probability that they will later recognize individual items but also (for some items) justify the reasoning behind their judgment. Across all experiments and conditions, regression models trained on justification language showed above-chance prediction of subsequent memory success and outperformed models trained on numeric JOLs alone. Conditions that improved the predictive accuracy of scale JOLs also improved the accuracy of language models. Further, the predictors (word choices) retained by the regularized models provide insight into the mechanisms underlying differences in metamemory performance.
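The abstract does not specify the modeling pipeline, but the general technique it names (a regularized regression over justification word features predicting later memory success) can be sketched in a few lines. The data below are invented toy examples, and the feature scheme (simple bag-of-words) and the L2-regularized logistic regression fit by gradient descent are illustrative assumptions, not the dissertation's actual method.

```python
# Toy sketch: regularized logistic regression on bag-of-words features
# from hypothetical JOL justifications, predicting later recognition
# (1 = recognized, 0 = not). All data here are invented for illustration.
import math
from collections import Counter

data = [
    ("I made a vivid mental image of the word", 1),
    ("this word relates to my job so it will stick", 1),
    ("no idea just guessing it seems unfamiliar", 0),
    ("short but boring and forgettable", 0),
    ("strong association with a recent memory", 1),
    ("I did not really process this one", 0),
]

vocab = sorted({w for text, _ in data for w in text.split()})

def featurize(text):
    # Bag-of-words count vector over the training vocabulary.
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

X = [featurize(t) for t, _ in data]
y = [label for _, label in data]

# L2-regularized logistic regression, fit by batch gradient descent.
w = [0.0] * len(vocab)
b = 0.0
lr, lam = 0.5, 0.01
for _ in range(500):
    grad_w = [lam * wj for wj in w]  # gradient of the L2 penalty
    grad_b = 0.0
    for xi, yi in zip(X, y):
        z = sum(wj * xj for wj, xj in zip(w, xi)) + b
        err = 1.0 / (1.0 + math.exp(-z)) - yi
        grad_w = [g + err * xj for g, xj in zip(grad_w, xi)]
        grad_b += err
    w = [wj - lr * g / len(X) for wj, g in zip(w, grad_w)]
    b -= lr * grad_b / len(X)

def predict(text):
    # Predicted probability of later recognition for a justification.
    z = sum(wj * xj for wj, xj in zip(w, featurize(text))) + b
    return 1.0 / (1.0 + math.exp(-z))
```

Inspecting the largest-magnitude entries of `w` after fitting mirrors the abstract's point that the word choices retained by a regularized model can themselves be informative about metamemory.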

Language

English (en)

Chair and Committee

Kathleen McDermott

Committee Members

Ian Dobbins, David Balota, Mark McDaniel, Justin Kantner
