2019 May 10, Tavis Abrahamsen, Duke University: Convergence analysis of MCMC samplers for Bayesian linear mixed models with p > N
For the Bayesian version of the General Linear Mixed Model (GLMM), it is common to assign conditionally conjugate priors to the unknown model parameters. This results in a posterior density that can be explored using a simple two-block Gibbs sampler. It has been shown that when the priors are proper and the X matrix has full column rank, the Markov chains underlying these Gibbs samplers are almost always geometrically ergodic. We generalize this result by allowing for improper priors on the variance components, and, more importantly, by removing all assumptions on the X matrix. We also analyze a Bayesian GLMM where we replace the standard multivariate normal prior on the fixed effects coefficients with a Normal-Gamma shrinkage prior. In this case, our convergence results are for a hybrid sampler that utilizes both a deterministic step and a random scan step.
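The two-block structure described above can be sketched on a toy model. The code below is an illustrative simplification, not the GLMM from the talk: it drops the random effects and uses a plain Bayesian linear model y ~ N(Xβ, λ_e⁻¹I) with a conditionally conjugate normal prior on β and Gamma priors on the precisions λ_e and λ_β (the hyperparameters `a`, `b` are arbitrary choices). One block draws β given the variance components; the other draws the variance components given β. Note that the β update stays well defined even when p > N, since the prior precision λ_βI keeps the conditional posterior covariance invertible.

```python
import numpy as np

def gibbs_blm(y, X, n_iter=2000, a=2.0, b=2.0, seed=0):
    """Two-block Gibbs sampler for a toy Bayesian linear model:
    y ~ N(X beta, (1/lam_e) I), beta ~ N(0, (1/lam_b) I),
    with Gamma(a, b) priors on the precisions lam_e and lam_b.
    Illustrative sketch only; hyperparameters a, b are assumptions."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    lam_e, lam_b = 1.0, 1.0
    betas = np.empty((n_iter, p))
    for t in range(n_iter):
        # Block 1: beta | precisions. Works even when p > n because
        # the prior term lam_b * I makes the matrix invertible.
        V = np.linalg.inv(lam_e * X.T @ X + lam_b * np.eye(p))
        m = lam_e * V @ X.T @ y
        beta = rng.multivariate_normal(m, V)
        # Block 2: precisions | beta (conditionally conjugate Gammas).
        resid = y - X @ beta
        lam_e = rng.gamma(a + n / 2, 1.0 / (b + resid @ resid / 2))
        lam_b = rng.gamma(a + p / 2, 1.0 / (b + beta @ beta / 2))
        betas[t] = beta
    return betas

# Usage: a p > N design (p = 50 columns, N = 20 observations).
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 50))
beta_true = np.zeros(50)
beta_true[:3] = 2.0
y = X @ beta_true + rng.normal(size=20)
draws = gibbs_blm(y, X)
```

The p > N design here mirrors the regime in the title: the likelihood alone would not identify β, but the proper conditional draw still exists because the full conditional of β is a well-defined normal distribution.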
For both of these models, we derive easily satisfied conditions guaranteeing geometric ergodicity for the corresponding MCMC algorithms. Geometric ergodicity plays a key role in establishing central limit theorems (CLTs) for MCMC-based estimators. Thus, our results are important from a practical standpoint since all of the standard methods of calculating valid asymptotic standard errors are based on the existence of a CLT.
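To make the practical point concrete, here is a minimal sketch of one standard method for computing asymptotic Monte Carlo standard errors, the batch-means estimator. Its validity rests on exactly the kind of Markov chain CLT that geometric ergodicity (plus a moment condition) delivers. The function name and the choice of 30 batches are illustrative assumptions, not anything prescribed by the talk.

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Batch-means estimate of the Monte Carlo standard error of the
    chain's sample mean. Valid when a Markov chain CLT holds, e.g.
    under geometric ergodicity plus a moment condition.
    (Sketch; n_batches=30 is an arbitrary illustrative choice.)"""
    chain = np.asarray(chain, dtype=float)
    n = len(chain)
    batch_len = n // n_batches
    trimmed = chain[: n_batches * batch_len]
    batch_means = trimmed.reshape(n_batches, batch_len).mean(axis=1)
    # The sample variance of the batch means estimates sigma^2 / batch_len,
    # where sigma^2 is the asymptotic variance appearing in the CLT.
    var_hat = batch_len * batch_means.var(ddof=1)
    return np.sqrt(var_hat / n)

# Usage on a stand-in "chain" (iid draws, so the true asymptotic
# variance is just the marginal variance, here 1).
chain = np.random.default_rng(2).standard_normal(30000)
se = batch_means_se(chain)
```

For an iid chain of length n with unit variance, the estimate should be close to 1/sqrt(n); for a genuinely dependent chain, the batching absorbs the autocorrelation that a naive iid formula would ignore.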