Supplementary files (2):
- uasa_a_1876710_sm0106.pdf (document, 1.3 MB)
- uasa_a_1876710_sm9099.gz (archive, 15 kB)
Neuronized Priors for Bayesian Sparse Linear Regression

Version 2 2021-07-26, 19:20
Version 1 2021-01-20, 18:30
Dataset posted on 2021-07-26, 19:20, authored by Minsuk Shin and Jun S. Liu.

Although Bayesian variable selection methods have been intensively studied, their routine use in practice has not caught up with their non-Bayesian counterparts such as the Lasso, likely due to difficulties in both computation and the flexibility of prior choices. To ease these challenges, we propose neuronized priors to unify and extend popular shrinkage priors, such as Laplace, Cauchy, horseshoe, and spike-and-slab priors. A neuronized prior can be written as the product of a Gaussian weight variable and a scale variable transformed from a Gaussian via an activation function. Compared with classic spike-and-slab priors, the neuronized priors achieve the same explicit variable selection without employing any latent indicator variables, which results in both more efficient and flexible posterior sampling and more effective posterior modal estimation. Theoretically, we provide specific conditions on the neuronized formulation to achieve the optimal posterior contraction rate, and show that a broadly applicable MCMC algorithm achieves an exponentially fast convergence rate under the neuronized formulation. We also examine various simulated and real data examples and demonstrate that the neuronized representation is computationally as efficient as, or more efficient than, its standard counterpart in all well-known cases. An R package NPrior is provided for using neuronized priors in Bayesian linear regression.
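The construction described above can be sketched in a few lines: draw a Gaussian weight and a Gaussian scale variable, pass the scale through an activation function, and take the product. The helper name, the shift parameter `alpha0`, and the choice of ReLU as the activation (which yields exact zeros, giving spike-and-slab-type selection without latent indicators) are illustrative assumptions here; the paper's exact parameterization may differ, and the authors' R package NPrior is the reference implementation.

```python
import numpy as np

def neuronized_prior_sample(n, activation, alpha0=0.0, tau=1.0, rng=None):
    """Draw n samples theta = T(alpha - alpha0) * w (hypothetical helper).

    alpha ~ N(0, 1) is the scale variable, w ~ N(0, tau^2) is the
    Gaussian weight, and T is the activation function.
    """
    rng = np.random.default_rng(rng)
    alpha = rng.standard_normal(n)      # Gaussian scale variable
    w = tau * rng.standard_normal(n)    # Gaussian weight variable
    return activation(alpha - alpha0) * w

# ReLU activation: theta is exactly zero whenever alpha < alpha0,
# mimicking the spike of a spike-and-slab prior without an indicator.
relu = lambda t: np.maximum(t, 0.0)

theta = neuronized_prior_sample(100000, relu, alpha0=1.0, rng=0)
```

With `alpha0 = 1.0`, the prior probability of an exact zero is Phi(1), roughly 0.84, so most sampled coefficients are switched off, while the remainder carry a continuous slab.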

Funding

This research is supported in part by the NSF grants DMS-1903139, DMS-2015528, and DMS-2015411.

Published in: Journal of the American Statistical Association
