Prior Elicitation and Variable Selection for Bayesian Quantile Regression
By
Rahim Jabbar Thaher Al-Hamzawi
A thesis submitted for the degree of
Doctor of Philosophy
May 2013
Abstract
Bayesian subset selection suffers from three important difficulties: assigning priors over model
space, assigning priors to all components of the regression coefficients vector given a specific
model, and Bayesian computational efficiency (Chen et al., 1999). These difficulties become more
challenging in the Bayesian quantile regression framework when one is interested in assigning priors
that depend on the quantile level. The objective of Bayesian quantile regression (BQR),
which is a newly proposed tool, is to deal with unknown parameters and model uncertainty
in quantile regression (QR). However, Bayesian subset selection in quantile regression
models is usually difficult because of the computational challenges and the non-availability
of conjugate prior distributions that depend on the quantile level.
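For reference, quantile regression at a quantile level τ in (0, 1) is defined through the check loss, and Bayesian quantile regression typically works with an asymmetric Laplace likelihood built from the same loss. This is the standard formulation behind the abstract, written in our own notation rather than the thesis's:
\[
\rho_\tau(u) = u\,\{\tau - I(u < 0)\}, \qquad
\hat\beta(\tau) = \arg\min_\beta \sum_{i=1}^{n} \rho_\tau\!\big(y_i - x_i^\top \beta\big),
\]
\[
f(y_i \mid \beta, \sigma) = \frac{\tau(1-\tau)}{\sigma}
\exp\!\Big\{-\rho_\tau\!\Big(\frac{y_i - x_i^\top \beta}{\sigma}\Big)\Big\}.
\]
Because the likelihood itself changes with τ, it is natural to let the priors on the coefficients change with τ as well.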
These challenges are rarely addressed via either a penalised likelihood function or stochastic search
variable selection (SSVS). These methods typically place symmetric prior distributions on the
regression coefficients, such as the Gaussian and Laplace priors, which may be suitable for median
regression. However, the regression coefficients at an extreme quantile should differ from those at
the median, and thus the priors on quantile regression coefficients should depend on the quantile
level.
This thesis focuses on three challenges: assigning standard quantile dependent prior distributions
for the regression coefficients, assigning suitable quantile dependent priors over model space, and
achieving computational efficiency. The first of these challenges is studied in Chapter 2, in which
a quantile dependent prior elicitation scheme is developed. In particular, an extension of
Zellner's prior is proposed, which allows for a conditionally conjugate and quantile dependent prior
in Bayesian quantile regression. The prior is generalised in Chapter 3 by introducing a
ridge parameter to address important challenges that may arise in some applications, such as
multicollinearity and overfitting problems. The proposed prior is also used in Chapter 4 for subset
selection of the fixed and random coefficients in a linear mixed-effects QR model. In Chapter 5
we specify normal-exponential prior distributions for the regression coefficients, which provide
adaptive shrinkage and represent an alternative to the Bayesian Lasso quantile regression model.
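For orientation only (the exact quantile dependent forms are developed in Chapters 2 to 5 and are not reproduced in the abstract), the classical Zellner g-prior for a Gaussian linear model and the normal-exponential scale mixture that underlies Lasso-type shrinkage take the forms
\[
\beta \mid \sigma^2, g \;\sim\; \mathrm{N}\!\big(0,\; g\,\sigma^2 (X^\top X)^{-1}\big),
\]
\[
\beta_j \mid s_j \sim \mathrm{N}(0, s_j), \quad s_j \sim \mathrm{Exp}\!\big(\lambda^2/2\big)
\;\Longrightarrow\; \beta_j \mid \lambda \sim \mathrm{Laplace}(0, 1/\lambda).
\]
The priors proposed in the thesis build on constructions of this kind within the quantile regression setting.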
For the second challenge, we assign a quantile dependent prior over model space in Chapter 2.
The prior is based on the percentage bend correlation, which depends on the quantile level. This
prior is novel and is used in Bayesian regression for the first time. For the third challenge,
computational efficiency, Gibbs samplers are derived and set up to facilitate the computation of
the proposed methods.
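As an illustration of the kind of computation involved, the following is a minimal sketch of a Gibbs sampler for Bayesian quantile regression under the asymmetric Laplace working likelihood (scale fixed at one), using the well-known normal-exponential mixture representation and a simple independent Gaussian prior on the coefficients. It is not the quantile dependent priors or the specific samplers derived in this thesis, and all function and variable names are our own:

import numpy as np

def gibbs_bqr(y, X, tau, n_iter=2000, prior_var=100.0, rng=None):
    """Generic Gibbs sampler for Bayesian quantile regression at level `tau`.

    Uses the asymmetric Laplace working likelihood (scale fixed at 1) via its
    normal-exponential mixture representation:
        y_i = x_i' beta + theta * v_i + kappa * sqrt(v_i) * u_i,
        v_i ~ Exp(1),  u_i ~ N(0, 1),
    with an independent N(0, prior_var * I) prior on beta.  This is a sketch,
    not the quantile dependent priors or samplers proposed in the thesis.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    theta = (1.0 - 2.0 * tau) / (tau * (1.0 - tau))
    kappa2 = 2.0 / (tau * (1.0 - tau))

    beta = np.zeros(p)
    v = np.ones(n)
    draws = np.empty((n_iter, p))
    prior_prec = np.eye(p) / prior_var

    for it in range(n_iter):
        # beta | v, y : multivariate normal full conditional
        w = 1.0 / (kappa2 * v)                      # observation weights
        prec = X.T @ (X * w[:, None]) + prior_prec  # posterior precision
        mean_part = X.T @ ((y - theta * v) * w)
        cov = np.linalg.inv(prec)
        beta = rng.multivariate_normal(cov @ mean_part, cov)

        # v_i | beta, y : generalised inverse Gaussian with index 1/2;
        # sample w_i = 1/v_i from an inverse Gaussian (Wald) distribution.
        resid2 = (y - X @ beta) ** 2
        chi = np.maximum(resid2 / kappa2, 1e-10)    # guard against zero residuals
        psi = theta ** 2 / kappa2 + 2.0             # equals 1 / (2 tau (1 - tau))
        v = 1.0 / rng.wald(np.sqrt(psi / chi), psi)

        draws[it] = beta
    return draws

# Example usage on simulated data (assumed setup, for illustration only).
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n, p = 200, 3
    X = rng.normal(size=(n, p))
    y = X @ np.array([1.5, 0.0, -2.0]) + rng.standard_t(df=3, size=n)
    draws = gibbs_bqr(y, X, tau=0.75, n_iter=2000, rng=rng)
    print("posterior means at tau = 0.75:", draws[1000:].mean(axis=0))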
In addition to the three major challenges mentioned above, this thesis also addresses other
important issues, such as regularisation in quantile regression and the selection of both random
and fixed effects in mixed quantile regression models.