Communications in Information and Systems
Volume 12 (2012)
Hybrid deterministic-stochastic gradient Langevin dynamics for Bayesian learning
Pages: 221–232
We propose a new algorithm for obtaining the Bayesian posterior distribution via hybrid deterministic-stochastic gradient Langevin dynamics. To speed up convergence and reduce computational cost, it is common to use stochastic gradient methods, which approximate the full gradient by sampling a subset of a large dataset. Stochastic gradient methods make fast progress initially; however, they often slow down in the late stage as the iterates approach the desired solution. Conventional gradient methods eventually converge better, but at the expense of evaluating the full gradient at each iteration. Our hybrid method combines the advantages of both approaches for constructing the Bayesian posterior distribution. We prove that our algorithm converges using weak convergence methods, and illustrate numerically its effectiveness and improved accuracy.
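The hybrid strategy described in the abstract can be sketched as follows: run stochastic gradient Langevin dynamics with minibatch gradient estimates in the early iterations, then switch to full-gradient Langevin updates in the late stage. The sketch below is a minimal illustration on a toy Gaussian model (data x_i ~ N(theta, 1) with a standard normal prior on theta); the model, step size, batch size, and switching point are all hypothetical choices, not the paper's actual algorithm or experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ N(theta_true, 1), prior theta ~ N(0, 1).
theta_true = 2.0
N = 1000
x = rng.normal(theta_true, 1.0, size=N)

def grad_log_post(theta, idx):
    """Gradient of log posterior; unbiased estimate when idx is a minibatch.
    d/dtheta [log prior + (N/|idx|) * sum of minibatch log likelihoods]."""
    return -theta + (N / len(idx)) * np.sum(x[idx] - theta)

n_iter = 5000
switch = 2500      # after this many steps, switch to the full gradient
batch = 50         # minibatch size in the stochastic phase
eps = 1e-4         # step size

theta = 0.0
samples = []
for t in range(n_iter):
    if t < switch:
        idx = rng.choice(N, size=batch, replace=False)  # stochastic phase
    else:
        idx = np.arange(N)                              # deterministic phase
    noise = rng.normal(0.0, np.sqrt(eps))               # Langevin injected noise
    theta = theta + 0.5 * eps * grad_log_post(theta, idx) + noise
    samples.append(theta)

# Exact posterior mean for this conjugate Gaussian model: N * xbar / (N + 1)
post_mean = N * x.mean() / (N + 1)
print(abs(np.mean(samples[switch:]) - post_mean))  # small deviation expected
```

The cheap minibatch phase moves the chain quickly toward the high-probability region, while the full-gradient phase removes the minibatch gradient noise that would otherwise limit accuracy near the solution.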
Keywords: Langevin dynamics, stochastic gradient, Bayesian learning
2010 Mathematics Subject Classification: 34E05, 60J27, 93E20