ORCID

orcid.org/0000-0002-9407-710X

Date of Award

Summer 8-17-2017

Author's School

School of Engineering & Applied Science

Author's Department

Biomedical Engineering

Degree Name

Master of Science (MS)

Degree Type

Thesis

Abstract

Traditional speech-in-noise hearing tests are performed by clinicians with specialized equipment. Furthermore, these tests often present contextually weak sentences in background babble, which are poor representations of real-world situations. This study proposes a mobile audiometric task, Semantic Auditory Search, which uses the Android platform to bypass the need for specialized equipment and presents multiple tasks of two competing real-world conversations to estimate the user’s speech-in-noise hearing ability. Through linear regression models built from data of seventy-nine subjects, three Semantic Auditory Search metrics were shown to be statistically significant predictors (p < 0.05) of QuickSIN SNR50, with medium effect sizes. The internal consistency of the task was also high, with a Cronbach’s alpha of 0.88 or greater across multiple metrics. In conclusion, this preliminary study suggests that Semantic Auditory Search can accurately and reliably serve as an automated speech-in-noise hearing test, and it has tremendous potential for extension into automated tests of cognitive function.
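
As an illustrative sketch only (not the thesis's analysis code), the following Python snippet shows the style of analysis the abstract describes: a linear regression predicting QuickSIN SNR50 from a task metric, and Cronbach's alpha for internal consistency. All data, variable names, and the number of trials here are hypothetical.

import numpy as np
from scipy import stats

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (subjects x items) score matrix."""
    item_vars = item_scores.var(axis=0, ddof=1)
    total_var = item_scores.sum(axis=1).var(ddof=1)
    k = item_scores.shape[1]
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical data: one task metric per subject and measured SNR50 values.
rng = np.random.default_rng(0)
task_metric = rng.normal(size=79)
snr50 = 1.5 * task_metric + rng.normal(scale=1.0, size=79)

# Simple linear regression of SNR50 on the task metric.
slope, intercept, r, p, stderr = stats.linregress(task_metric, snr50)
print(f"slope={slope:.2f}, R^2={r**2:.2f}, p={p:.4f}")

# Hypothetical repeated-trial scores (subjects x trials) for reliability.
trials = np.column_stack([task_metric + rng.normal(scale=0.3, size=79)
                          for _ in range(6)])
print(f"Cronbach's alpha = {cronbach_alpha(trials):.2f}")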

Language

English (en)

Chair

Dennis Barbour

Committee Members

Barani Raman, Jonathan Peelle