This item is under embargo and not available online per the author's request. For access information, please visit http://libanswers.wustl.edu/faq/5640.
Date of Award

Degree Name
Doctor of Philosophy (PhD)
Recent advances in machine learning have made it possible to use natural language responses to predict outcomes of interest to memory researchers, such as the confidence with which recognition decisions are made. The present experiments were designed to leverage this novel methodological approach by soliciting free-response justifications of judgments of learning (JOLs), in which people not only assess the probability that they will later recognize individual items but also (for some items) explain the reasoning behind their judgment. Across all experiments and conditions, regression models trained on justification language predicted subsequent memory success above chance and outperformed models trained on numeric JOLs alone. Conditions that improved the predictive accuracy of scale JOLs also improved the accuracy of the language models. Further, the predictors (word choices) retained by the regularized models provide insight into the mechanisms underlying differences in metamemory performance.
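The general approach described in the abstract, fitting a regularized regression on word features from free-text justifications to predict a binary memory outcome, can be sketched as follows. This is a minimal illustration only: the justification texts, word lists, and hyperparameters below are invented for the example, and the dissertation's actual corpus, feature pipeline, and model specification are not reproduced here. The sketch uses a bag-of-words representation and logistic regression with an L1 (sparsity-inducing) penalty, the kind of regularization that retains a small set of predictive word choices.

```python
import math
from collections import Counter

# Hypothetical toy justifications (invented for illustration), labeled by
# whether the item was later recognized (1) or forgotten (0).
train = [
    ("i clearly remember studying this word it felt vivid", 1),
    ("this item felt very familiar and distinctive", 1),
    ("strong memory the word stood out during study", 1),
    ("just a guess i am unsure about this one", 0),
    ("no memory at all purely a guess", 0),
    ("unsure the word did not stand out", 0),
]

# Bag-of-words vocabulary and featurization.
vocab = sorted({w for text, _ in train for w in text.split()})
col = {w: j for j, w in enumerate(vocab)}

def featurize(text):
    counts = Counter(text.split())
    return [counts.get(w, 0) for w in vocab]

X = [featurize(t) for t, _ in train]
y = [label for _, label in train]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# L1-regularized logistic regression via (sub)gradient descent.
w = [0.0] * len(vocab)
b = 0.0
lr, lam = 0.5, 0.01  # learning rate and L1 penalty strength (arbitrary here)

for _ in range(300):
    grad_w = [0.0] * len(vocab)
    grad_b = 0.0
    for xi, yi in zip(X, y):
        err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
        for j, xj in enumerate(xi):
            grad_w[j] += err * xj
        grad_b += err
    n = len(X)
    for j in range(len(w)):
        # Subgradient of the L1 term pushes small weights toward zero,
        # so only the most predictive words retain nonzero weights.
        sub = 1.0 if w[j] > 0 else -1.0 if w[j] < 0 else 0.0
        w[j] -= lr * (grad_w[j] / n + lam * sub)
    b -= lr * grad_b / n

def predict(text):
    """Probability of subsequent recognition given a justification."""
    z = sum(w[col[wd]] for wd in text.split() if wd in col) + b
    return sigmoid(z)

print(predict("felt vivid and familiar"))   # should be well above 0.5
print(predict("just a guess no memory"))    # should be well below 0.5
```

Inspecting which entries of `w` remain far from zero after training is the sparse-model analogue of asking which word choices carry the metamemory signal.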
Chair and Committee
Ian Dobbins, David Balota, Mark McDaniel, Justin Kantner
Anderson, Nathan Lloyd, "The Use of Introspective Reports to Predict Subsequent Memory: Implementing Machine Learning for Judgment-of-Learning Paradigms" (2021). Arts & Sciences Electronic Theses and Dissertations. 2479.
Available for download on Wednesday, February 16, 2022