Publication

Learning Consistent Deep Generative Models from Sparse Data via Prediction Constraints

ABSTRACT:

We develop a new framework for learning variational autoencoders and other deep generative models that balances generative and discriminative goals. Our framework optimizes model parameters to maximize a variational lower bound on the likelihood of observed data, subject to a task-specific prediction constraint that prevents model misspecification from leading to inaccurate predictions. We further enforce a consistency constraint, derived naturally from the generative model, that requires predictions on reconstructed data to match those on the original data. We show that these two contributions — prediction constraints and consistency constraints — lead to promising image classification performance, especially in the semi-supervised scenario where category labels are sparse but unlabeled data is plentiful. Our approach enables advances in generative modeling to directly boost semi-supervised classification performance, an ability we demonstrate by augmenting deep generative models with latent variables capturing spatial transformations.
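To make the objective described above concrete, below is a minimal PyTorch sketch of the general idea, not the authors' released code: the training loss combines the negative ELBO with a penalty term standing in for the task-specific prediction constraint and a consistency term that compares the classifier's predictions on reconstructed inputs to its predictions on the originals. All names (ConstrainedVAE, constrained_loss), architecture sizes, and weights (lambda_pred, lambda_cons) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConstrainedVAE(nn.Module):
    """VAE with a latent-space classifier (illustrative sizes for MNIST-like data)."""

    def __init__(self, x_dim=784, z_dim=32, n_classes=10):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.enc_mu = nn.Linear(256, z_dim)
        self.enc_logvar = nn.Linear(256, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                                 nn.Linear(256, x_dim), nn.Sigmoid())
        self.clf = nn.Linear(z_dim, n_classes)  # predicts labels from the latent code

    def encode(self, x):
        h = self.enc(x)
        return self.enc_mu(h), self.enc_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar, self.clf(mu)


def constrained_loss(model, x, y=None, lambda_pred=10.0, lambda_cons=1.0):
    x_hat, mu, logvar, logits = model(x)

    # (i) Negative ELBO: reconstruction term plus KL to the standard normal prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))

    # (ii) Prediction penalty, applied only when labels are available
    #      (the labeled subset in the semi-supervised setting).
    pred = F.cross_entropy(logits, y) if y is not None else x.new_zeros(())

    # (iii) Consistency: the classifier's prediction on the reconstruction
    #       should match its prediction on the original input.
    mu_hat, _ = model.encode(x_hat)
    logits_hat = model.clf(mu_hat)
    cons = F.kl_div(F.log_softmax(logits_hat, dim=1),
                    F.softmax(logits, dim=1).detach(),
                    reduction="batchmean")

    return recon + kl + lambda_pred * pred + lambda_cons * cons


# Illustrative usage on a random batch:
model = ConstrainedVAE()
x = torch.rand(16, 784)          # flattened images scaled to [0, 1]
y = torch.randint(0, 10, (16,))  # labels for the labeled subset
loss = constrained_loss(model, x, y)
loss.backward()

In practice the penalty weights would be tuned, or handled as Lagrange multipliers, so that the prediction constraint is satisfied on labeled data, while the consistency term can also be applied to unlabeled examples.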

Information about the publication


Authors:

Gabriel Hope, Madina Abdrakhmanova, Xiaoyin Chen, Michael C. Hughes, Erik B. Sudderth
PDF