Date of Award

Winter 12-15-2019

Author's School

McKelvey School of Engineering

Author's Department

Computer Science & Engineering

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Machine learning models have achieved impressive predictive performance in applications such as image classification and object recognition. However, understanding how machine learning models make decisions is essential when deploying them in critical areas such as clinical prediction and market analysis, where prediction accuracy is not the only concern. For example, in the clinical prediction of ICU transfers, in addition to accurate predictions, doctors need to know the contributing factors that triggered an alert and which of those factors can be quickly altered to prevent the transfer. Although interpretable machine learning has been studied extensively for years, challenges remain: few advanced machine learning classifiers address both of these needs. In this dissertation, we identify the properties imperative for interpretable machine learning, especially for clinical decision support, and explore three related directions. First, we propose a post-analysis method to extract actionable knowledge from random forest and additive tree models. Second, we equip the logistic regression model with nonlinear separability while preserving its interpretability. Third, we propose an interpretable factored generalized additive model that allows feature interactions to further increase prediction accuracy. Finally, we also propose a deep learning framework for 30-day mortality prediction that can handle heterogeneous data types.

Language

English (en)

Chair

Yixin Chen

Committee Members

Joanna Abraham, Roger Chamberlain, Brendan Juba, Thomas Kannampallil

Comments

Permanent URL: https://doi.org/10.7936/s0a9-nb94
