Abstract
Traditional speech-in-noise hearing tests are performed by clinicians with specialized equipment. Furthermore, these tests often present contextually weak sentences in background babble, which poorly represent real-world listening situations. This study proposes a mobile audiometric task, Semantic Auditory Search, which uses the Android platform to bypass the need for specialized equipment and presents multiple trials of two competing real-world conversations to estimate the user's speech-in-noise hearing ability. Through linear regression models built from data of seventy-nine subjects, three Semantic Auditory Search metrics were shown to have statistically significant (p < 0.05) associations, with medium effect sizes, for predicting QuickSIN SNR50. The internal consistency of the task was also high, with a Cronbach's alpha of 0.88 or more across multiple metrics. In conclusion, this preliminary study suggests that Semantic Auditory Search can serve as an accurate and reliable automated speech-in-noise hearing test. It also shows strong potential for extension into automated tests of cognitive function.
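The internal-consistency figure reported above is Cronbach's alpha. For readers unfamiliar with the metric, a minimal sketch of the standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), is given below; the function name and the subjects-by-items data layout are illustrative assumptions, not part of this thesis.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (n_subjects, k_items) score matrix.

    Uses the classical formula:
    alpha = (k / (k - 1)) * (1 - sum(item variances) / var(total score))
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # sample variance of row sums
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```

Values near 1 (such as the 0.88 reported here) indicate that the task's repeated metrics measure the same underlying ability consistently.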
Committee Chair
Dennis Barbour
Committee Members
Barani Raman, Jonathan Peelle
Degree
Master of Science (MS)
Author's Department
Biomedical Engineering
Document Type
Thesis
Date of Award
Summer 8-17-2017
Language
English (en)
DOI
https://doi.org/10.7936/K7TD9WRV
Author's ORCID
orcid.org/0000-0002-9407-710X
Recommended Citation
Peng, Tommy, "Development and Validation for a Mobile Speech-in-Noise Audiometric Task" (2017). McKelvey School of Engineering Theses & Dissertations. 255.
The definitive version is available at https://doi.org/10.7936/K7TD9WRV
Included in
Other Biomedical Engineering and Bioengineering Commons, Speech and Hearing Science Commons