This item is under embargo and not available online per the author's request. For access information, please visit http://libanswers.wustl.edu/faq/5640.

Title

On Bayesian Regression Regularization Methods

Date of Award

Winter 12-15-2010

Author's School

Graduate School of Arts and Sciences

Author's Department

Mathematics

Degree Name

Doctor of Philosophy (PhD)

Degree Type

Dissertation

Abstract

Regression regularization methods are drawing increasing attention from statisticians due to the growing prevalence of high-dimensional problems. Regression regularization achieves simultaneous parameter estimation and variable selection by penalizing the model parameters. In the first part of this thesis, we focus on the elastic net [73], a flexible regularization and variable selection method that uses a mixture of L1 and L2 penalties. It is particularly useful when there are many more predictors than observations. We propose a Bayesian method that solves the elastic net model using a Gibbs sampler. While the marginal posterior mode of the regression coefficients is equivalent to the estimates given by the non-Bayesian elastic net, the Bayesian elastic net has two major advantages. First, as a Bayesian method, it yields distributional results on the estimates directly, making statistical inference easier. Second, it chooses the two penalty parameters simultaneously, avoiding the "double shrinkage problem" of the elastic net method. Real data examples and simulation studies show that the Bayesian elastic net is comparable in prediction accuracy but performs better in variable selection.
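To illustrate the penalty the abstract refers to (not the thesis's Gibbs sampler), the following sketch solves the non-Bayesian elastic net objective, (1/2)||y - Xb||^2 + lam1*||b||_1 + lam2*||b||_2^2, by coordinate descent on a tiny synthetic problem. All function names, data, and tuning values here are illustrative assumptions.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the L1 penalty."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def elastic_net(X, y, lam1, lam2, n_iter=200):
    """Coordinate descent for the (naive) elastic net estimate.

    X: list of rows, y: list of responses; lam1/lam2 weight the
    L1 and squared-L2 penalties respectively.
    """
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual leaving out coordinate j.
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            denom = sum(X[i][j] ** 2 for i in range(n)) + 2.0 * lam2
            # L1 part soft-thresholds; L2 part inflates the denominator
            # (the source of the "double shrinkage" the abstract mentions).
            b[j] = soft_threshold(rho, lam1) / denom
    return b

# Toy data: y depends (noisily) on the first predictor only.
X = [[1.0, 0.2], [2.0, -0.1], [3.0, 0.4], [4.0, 0.0]]
y = [2.0, 4.1, 5.9, 8.0]
beta = elastic_net(X, y, lam1=0.5, lam2=0.5)
```

On this toy problem the L1 penalty zeroes out the irrelevant second coefficient while the L2 penalty mildly shrinks the first, which is the selection-plus-shrinkage behavior the elastic net is designed to give.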

The second part of this thesis investigates Bayesian regularization in quantile regression. Quantile regression models the relationship between the response variable and covariates through the population quantiles of the response variable. By proposing a hierarchical model framework, we give a generic treatment to a set of regularization approaches, including the lasso, elastic net, and group lasso. Gibbs samplers are derived for all cases. This is the first work to discuss regularized quantile regression with the elastic net penalty and the group lasso penalty. Both simulated and real data examples show that Bayesian regularized quantile regression methods often outperform both quantile regression without regularization and their non-Bayesian regularized counterparts.
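As a small illustration of the loss underlying quantile regression (names and data here are assumptions, not from the thesis): the check loss rho_tau(u) = u*(tau - 1{u < 0}) replaces squared error, and minimizing sum_i rho_tau(y_i - c) over a constant c recovers the tau-th sample quantile. A grid search makes that visible.

```python
def check_loss(u, tau):
    """Check (pinball) loss: tau*u for u >= 0, (tau - 1)*u for u < 0."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def best_constant(y, tau, grid):
    """Grid-search minimizer of the total check loss over constants."""
    return min(grid, key=lambda c: sum(check_loss(yi - c, tau) for yi in y))

y = [1.0, 2.0, 3.0, 4.0, 10.0]
grid = [i / 10 for i in range(0, 101)]
c_med = best_constant(y, 0.5, grid)  # tau = 0.5 recovers the median
c_q9 = best_constant(y, 0.9, grid)   # tau = 0.9 recovers an upper quantile
```

In the regression setting, c is replaced by a linear predictor x_i^T b and a penalty term (lasso, elastic net, or group lasso) is added to the check-loss objective, which is the setting the thesis treats in its Bayesian hierarchical framework.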

Language

English (en)

Chair and Committee

Nan Lin

Committee Members

Siddhartha Chib, Jimin Ding, Jeff Gill, Stanley Sawyer, Edward Spitznagel

Comments

Permanent URL: https://doi.org/10.7936/K7VM497K

