At NeurIPS 2018 this year, the Symposium on Advances in Approximate Bayesian Inference discussed challenges and advances in approximate probabilistic inference for rich models. It was a genuinely exciting program!
I was lucky enough to give an invited talk at the event.
- Title: Debiasing Approximate Inference
- Abstract: At its heart, the field of approximate inference is about trade-offs between computation and estimation accuracy: the quantities we approximate, such as the evidence or posterior expectations, are deterministic, and given a limitless computation budget all of them could be evaluated exactly. But given finite computation, how do we select inference methods so that they provide accurate estimates of the quantities of interest? In this talk I will argue for a more explicit consideration of the bias-variance trade-offs made by common inference methods. In particular, I highlight that current inference methods such as variational inference and Markov chain Monte Carlo each make a particular bias-variance trade-off, which may be suboptimal for the inferential question at hand. What can we do about this? There is a rich portfolio of debiasing methods for changing these bias-variance trade-offs; I will provide a brief overview and demonstrate a number of recent successful applications of these methods to variational inference and stochastic gradient MCMC.
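To make the idea of debiasing concrete, here is a minimal sketch of one classic construction, the single-term Rhee-Glynn (Russian-roulette) estimator, on a toy log-evidence problem. This is my own illustration, not code from the talk; the toy estimator sequence, the function names, and the parameters (`p_stop`, `max_k`) are all hypothetical. Given a sequence of increasingly accurate but biased estimators Y_0, Y_1, ..., one draws a random truncation level K and reweights the difference Y_K - Y_{K-1} so that, in expectation, the terms telescope to the bias-free limit.

```python
import numpy as np

def coupled_pair(k, rng):
    """Hypothetical toy sequence of biased estimators Y_k of a
    log-evidence-like target.

    Y_k is the log of a 2^k-sample Monte Carlo average of positive
    weights with mean 1, so Y_k is biased below the target log(1) = 0
    for finite k (Jensen's inequality), with the bias vanishing as k
    grows.  Y_{k-1} reuses the first half of Y_k's samples (shared
    randomness), so the difference Y_k - Y_{k-1} shrinks with k.
    """
    weights = rng.lognormal(mean=-0.125, sigma=0.5, size=2**k)  # E[w] = 1
    y_k = np.log(weights.mean())
    y_km1 = np.log(weights[: 2 ** (k - 1)].mean()) if k > 0 else 0.0
    return y_k, y_km1

def single_term_debiased(rng, p_stop=0.4, max_k=25):
    """Single-term Rhee-Glynn / Russian-roulette debiased estimator.

    Draw a random truncation level K ~ Geometric(p_stop) on {0, 1, ...}
    and return (Y_K - Y_{K-1}) / P(K = k); in expectation these terms
    telescope to lim_k E[Y_k], trading the bias away for extra variance.
    p_stop must put enough mass on large k relative to how fast the
    differences shrink, or the variance is infinite.  max_k caps the
    cost and reintroduces a (here negligible) truncation bias.
    """
    k = min(rng.geometric(p_stop) - 1, max_k)
    prob_k = p_stop * (1.0 - p_stop) ** k  # P(K = k), ignoring the cap
    y_k, y_km1 = coupled_pair(k, rng)
    return (y_k - y_km1) / prob_k

rng = np.random.default_rng(0)
naive = np.mean([coupled_pair(2, rng)[0] for _ in range(20000)])
debiased = np.mean([single_term_debiased(rng) for _ in range(20000)])
print(f"biased Y_2 mean: {naive:+.3f}   debiased mean: {debiased:+.3f}   (target 0)")
```

The catch is visible in the code: the stopping distribution has to place enough probability on large k relative to how quickly the differences shrink, otherwise the estimator is unbiased but has infinite variance. That is exactly the kind of bias-variance trade-off the talk is about.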
Here are the talk slides and an audio recording (I believe the symposium organizers plan to release a video recording eventually).