ORCID
http://orcid.org/0000-0003-3265-8766
Date of Award
Spring 5-15-2020
Degree Name
Doctor of Philosophy (PhD)
Degree Type
Dissertation
Abstract
Humans are remarkable in their ability to perform highly complicated behaviors with ease and little conscious thought. Successful speech comprehension, for example, requires the collaboration of multiple sensory, perceptual, and cognitive processes to focus attention on the speaker, disregard competing cues, correctly process incoming audio stimuli, and attach meaning and context to what is heard. Investigating these phenomena can help unravel crucial aspects of human behavior as well as how the brain works in health and disease. Traditional methods, however, typically isolate individual variables and evaluate their decontextualized contributions to an outcome variable of interest. While rigorous and straightforward to interpret, these reductionist methods forfeit multidimensional inference and waste data resources by collecting identical data from every participant without considering what is most relevant for each one. Methods that optimize exactly which data are collected for each participant would support more complex models and make expensive data collection more efficient. Modern tools, such as mobile hardware and large databases, have improved upon traditional methods but remain limited in the inference they can provide about an individual. To circumvent these obstacles, a novel machine learning framework capable of quantifying behavioral functions of multiple variables with practical amounts of data has been developed and validated. This framework can link even loosely related input domains and measure shared information in one comprehensive assessment.
The work described in this thesis first evaluates this framework for active machine learning audiogram (AMLAG) applications. AMLAG customizes the generalized framework to estimate audiogram functions efficiently, accurately, and reliably. Audiograms measure hearing ability for each ear in the inherently two-dimensional domain of frequency and intensity. Whereas clinical methods reduce audiogram acquisition to a one-dimensional assessment, AMLAG has previously been verified to provide a continuous, two-dimensional estimate of hearing ability in one ear. Because the two ears are physiologically distinct yet defined over the same frequency-intensity input domain, AMLAG was generalized from two traditionally independent unilateral tests to a single bilateral test. The active bilateral audiogram allows observations in one ear to simultaneously update the model fit over both ears. This thesis shows that in a cohort of normal-hearing and hearing-impaired listeners, the bilateral audiogram converges to its final estimates significantly faster than sequential active unilateral audiograms.
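To make the procedure concrete, the sketch below shows one active-learning step of the kind on which AMLAG-style audiogram estimation relies: a probabilistic classifier is fit to the tones presented so far, and the next probe tone is chosen where the predicted detection is most uncertain. This is a minimal illustration assuming a Gaussian process classifier and uncertainty sampling; the grid ranges, kernel, and function names are illustrative choices, not the dissertation's actual implementation.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

freqs = np.linspace(np.log2(250), np.log2(8000), 20)  # log2 frequency axis (assumed range)
levels = np.linspace(-10, 80, 19)                     # intensity axis, dB HL (assumed range)
candidates = np.array([[f, l] for f in freqs for l in levels])

def select_next_tone(X_obs, y_obs):
    # Fit the classifier to the (tone, heard-or-not) observations so far.
    # y_obs must contain at least one heard (1) and one missed (0) trial.
    gp = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 10.0]))
    gp.fit(X_obs, y_obs)
    # Predicted probability that each candidate tone would be heard.
    p = gp.predict_proba(candidates)[:, 1]
    # Uncertainty sampling: present the tone with maximal predictive entropy.
    entropy = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
    return candidates[np.argmax(entropy)]

# Example seed: one heard and one missed tone at 1 kHz.
X0 = np.array([[np.log2(1000), 60.0], [np.log2(1000), -5.0]])
y0 = np.array([1, 0])
next_log2_freq, next_level = select_next_tone(X0, y0)

Because the whole posterior is refit after every observation, the same loop extends naturally to the bilateral and dynamically masked variants described in this thesis: any new observation reshapes the estimate over the entire domain, not just at the probed point.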
The flexibility of a framework capable of informative individual inference was then evaluated for dynamically masked audiograms. When one ear hears significantly better than the other, assessing the worse ear with loud probe tones may require delivering masking noise to the better ear so that the probe tones are not inadvertently heard by the better ear. Current masking protocols are confusing, laborious, and time-consuming. Adding a standardized masking protocol to the AMLAG procedure alleviates these drawbacks by dynamically adapting the masking to an individual's specific needs. Dynamically masked audiograms are shown to achieve accurate threshold estimates and reduce test time compared to the clinical masking procedures currently used to evaluate individuals with highly asymmetric hearing, yet they can also be used effectively and efficiently for anyone.
Finally, the active machine learning framework was evaluated for estimating cognitive and perceptual variables in one joint assessment. Combining a verbal N-back and a speech-in-noise assessment, a joint estimator links two disjoint assessments defined by two unique input domains and, for the first time, offers a direct measurement of the interactions between two of the most predictive measures of cognitive decline. Young and older healthy adults were assessed to investigate age-related adaptations in behavior and the inter-subject variability often seen in low-dimensional speech and memory tests. The joint cognitive and perceptual test accurately predicted standalone N-back performance but not speech-in-noise performance, and this first implementation did not reveal significant interactions between speech and memory. However, the joint task framework did provide an estimate of participant performance over the entire two-dimensional domain without any experimenter-observed scoring, and it may better mirror the challenges of real-world tasks.
While significant age-related differences were apparent, substantial within-group variance motivated evaluating joint test performance as a predictor of individual differences in neural activity. Speech-in-noise tests may activate non-auditory networks of the brain as age and task difficulty increase, and some of these regions belong to domain-general networks that are also active during verbal working memory tests. Functional brain images were collected during an in-scanner speech-in-noise test for a portion of the joint test participants, and individual brain activity at regions of interest in the frontoparietal, cingulo-opercular, and speech networks was correlated with performance on the joint speech and memory test. No significant correlations were found, but the joint estimation of neural, cognitive, and perceptual behaviors through this framework may be possible with further test adaptations.
Generally, the lack of significant findings does not detract from the feasibility and utility of a generalized framework that can accurately model complex cognitive, perceptual, and neural processes in individuals. As demonstrated in this thesis, high-dimensional, individualized testing procedures facilitate the direct assessment of complicated human behaviors, empowering equitable, informative, and effective test methods.
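As a closing illustration of the joint estimator idea, the sketch below treats verbal N-back load and speech-in-noise SNR as the two axes of a single product domain, so that each trial outcome updates the predicted performance surface everywhere. The choice of a Gaussian process classifier, the axis ranges, and the per-dimension length scales are assumptions made for illustration, not the dissertation's actual model.

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

loads = np.arange(0, 4)          # hypothetical N-back loads: 0- to 3-back
snrs = np.linspace(-12, 12, 13)  # hypothetical speech-in-noise SNRs, dB
domain = np.array([[n, s] for n in loads for s in snrs])

def fit_joint_surface(X_obs, y_obs):
    # One classifier over the product domain: a trial at any (load, SNR)
    # point informs the predicted performance surface everywhere, with a
    # separate length scale per dimension linking the two assessments.
    gp = GaussianProcessClassifier(kernel=RBF(length_scale=[1.0, 4.0]))
    gp.fit(X_obs, y_obs)
    p_correct = gp.predict_proba(domain)[:, 1]
    return p_correct.reshape(len(loads), len(snrs))

# Example: one correct easy trial and one incorrect hard trial.
X0 = np.array([[1, 9.0], [3, -9.0]])
y0 = np.array([1, 0])
surface = fit_joint_surface(X0, y0)  # P(correct) over all (load, SNR) pairs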
Language
English (en)
Committee Chair
Dennis Barbour
Committee Members
Todd Braver, Jonathan Peelle, Roman Garnett, Camillo Padoa-Schioppa
Recommended Citation
Heisey, Katherine, "Joint Estimation of Perceptual, Cognitive, and Neural Processes" (2020). Arts & Sciences Electronic Theses and Dissertations. 2198.
https://openscholarship.wustl.edu/art_sci_etds/2198
Included in
Biomedical Engineering and Bioengineering Commons, Neuroscience and Neurobiology Commons