Continuous shrinkage priors for fixed and random effects selection in linear mixed models: application to genetic mapping
Abstract
The identification of random factors to include in a linear mixed model is crucial for modeling dependence structures while avoiding over-fitting. Random effects selection can be achieved by shrinking non-relevant variance parameters towards zero. We propose extending the horseshoe prior, in a folded version, to variance component selection. Motivated by two applications, the folded-horseshoe prior is evaluated in both a genetic breeding and a functional mapping context. In the latter, we use a polar parametrization of the correlation matrix of the random effects, with sinusoidal priors on the angular parameters. Finally, we design efficient MCMC algorithms that take advantage of Kronecker product properties. From a statistical point of view, we show that the folded-horseshoe prior outperforms the folded-Cauchy when the number of parameters is close to the sample size. For variance component selection, it performs as well as the folded-spike-and-slab while being computationally more efficient. We also show the impact of erroneous assumptions about the dependence structure on the selection and estimation of variance components. From a genetic point of view, the numerical results highlight the efficiency of the folded-horseshoe prior: in particular, it selects molecular markers already identified in these data as well as new markers. Finally, we discuss how and why linear mixed models are an interesting alternative to usual functional mapping approaches.
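The contrast between the folded-horseshoe and the folded-Cauchy can be made concrete by simulation. The sketch below (a minimal illustration, not the paper's implementation) draws standard deviations from each prior: the folded-horseshoe takes sigma = |beta| with beta normal given a half-Cauchy local scale, while the folded-Cauchy takes sigma = |C(0, 1)| directly. The global scale `tau` fixed at 1 is an assumption for illustration; in practice it would itself carry a prior.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo draws

# Folded-horseshoe draws for a standard deviation sigma:
#   sigma = |beta|, beta ~ N(0, tau^2 * lambda^2),
#   lambda ~ half-Cauchy(0, 1)  (local shrinkage scale).
# tau is held fixed here purely for illustration.
tau = 1.0
lam = np.abs(rng.standard_cauchy(n))            # half-Cauchy local scales
sigma_hs = np.abs(rng.normal(0.0, tau * lam))   # folded-horseshoe draws

# Folded-Cauchy draws for comparison: sigma = |C(0, 1)|.
sigma_fc = np.abs(rng.standard_cauchy(n))

# The horseshoe concentrates more prior mass near zero (stronger
# shrinkage of non-relevant variance components) while keeping
# heavy tails for the relevant ones.
print("P(sigma < 0.1), horseshoe :", np.mean(sigma_hs < 0.1))
print("P(sigma < 0.1), Cauchy    :", np.mean(sigma_fc < 0.1))
```

The extra mass near zero is what lets the folded-horseshoe shrink the variance of irrelevant random effects towards zero, while its Cauchy-like tails leave large variance components essentially unshrunk.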