Variational Laplace Autoencoders

Yookoon Park, Chris Dongjoo Kim, Gunhee Kim

ICML 2019


Variational autoencoders (Kingma & Welling, 2014) employ an amortized inference model to approximate the posterior of latent variables. However, such amortized variational inference (AVI) faces two challenges: (1) the limited expressiveness of the fully-factorized Gaussian posterior assumption and (2) the amortization error of the inference model. We propose an extended model, the Variational Laplace Autoencoder, which overcomes both challenges and improves the training of deep generative models. Specifically, we start from a class of neural networks with rectified linear activations and Gaussian output and draw a connection to probabilistic PCA. From this connection we derive iterative update equations that locate the mode of the posterior and define a local full-covariance Gaussian approximation centered on it. From the perspective of the Laplace approximation, we then generalize the method to any differentiable class of output distributions and activation functions. Empirical results on MNIST, OMNIGLOT, Fashion-MNIST, SVHN and CIFAR-10 show that the proposed approach significantly outperforms other recent amortized and iterative methods.
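The key observation behind the abstract is that a decoder with rectified linear activations and Gaussian output is locally linear, so around any point the generative model looks like probabilistic PCA and the posterior of the linearized model is an exact full-covariance Gaussian. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the `decode` and `jacobian` callables, the `sigma2` noise variance, and the iteration count are illustrative assumptions.

```python
import numpy as np

def laplace_posterior(x, decode, jacobian, sigma2=0.1, n_iter=8, latent_dim=2):
    """Iteratively locate the mode of p(z | x) for a generative model
    x ~ N(decode(z), sigma2 * I) with prior z ~ N(0, I), then return a
    full-covariance Gaussian (Laplace-style) approximation at the mode.

    For a locally linear decoder, decode(z) ~= W z + b near the current z,
    so each step solves the resulting linear-Gaussian (pPCA-like) posterior.
    """
    z = np.zeros(latent_dim)                # start from the prior mean
    for _ in range(n_iter):
        W = jacobian(z)                     # local linearization, shape (D, d)
        b = decode(z) - W @ z               # local offset of the linearization
        # Posterior precision and mean of the linearized Gaussian model
        precision = np.eye(latent_dim) + W.T @ W / sigma2
        z = np.linalg.solve(precision, W.T @ (x - b) / sigma2)
    W = jacobian(z)
    cov = np.linalg.inv(np.eye(latent_dim) + W.T @ W / sigma2)
    return z, cov                           # posterior mode and covariance

# Sanity check with a toy linear decoder x = A z + noise: the loop
# then converges in a single iteration to the exact Gaussian posterior.
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
mode, cov = laplace_posterior(
    x=np.array([0.5, 1.0, 1.2]),
    decode=lambda z: A @ z,
    jacobian=lambda z: A,
)
```

Because the local linearization changes with `z` for a ReLU decoder, the update is repeated for a few steps; the resulting mean and full covariance replace the fully-factorized Gaussian produced by a standard amortized encoder.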
