Abstract

Traditional speech-in-noise hearing tests are performed by clinicians with specialized equipment. Furthermore, these tests often present contextually weak sentences in background babble, which poorly represent real-world listening situations. This study proposes a mobile audiometric task, Semantic Auditory Search, which uses the Android platform to bypass the need for specialized equipment and presents multiple trials of two competing real-world conversations to estimate the user’s speech-in-noise hearing ability. In linear regression models built from data from seventy-nine subjects, three Semantic Auditory Search metrics were statistically significant predictors (p < 0.05) of QuickSIN SNR50, with medium effect sizes. The internal consistency of the task was also high, with a Cronbach’s alpha of 0.88 or greater across multiple metrics. In conclusion, this preliminary study suggests that Semantic Auditory Search can accurately and reliably perform as an automated speech-in-noise hearing test, with considerable potential for extension into automated tests of cognitive function.

Committee Chair

Dennis Barbour

Committee Members

Barani Raman, Jonathan Peelle

Comments

Permanent URL: https://doi.org/10.7936/K7TD9WRV

Degree

Master of Science (MS)

Author's Department

Biomedical Engineering

Author's School

McKelvey School of Engineering

Document Type

Thesis

Date of Award

Summer 8-17-2017

Language

English (en)

Author's ORCID

orcid.org/0000-0002-9407-710X
