A Flexible Stochastic Method for Solving the MAP Problem in Topic Models
Abstract
Estimating the posterior distribution is the core problem in topic models, but it is intractable in general. Many approximation and sampling methods have been proposed to solve it; however, most of them provide no clear theoretical guarantee on either the quality of the solution or the rate of convergence. Online Maximum a Posteriori Estimation (OPE) is an alternative approach with explicit guarantees on both quality and convergence rate, in which the estimation of the posterior distribution is cast as a non-convex optimization problem. In this paper, we propose a more general and flexible version of OPE, namely Generalized Online Maximum a Posteriori Estimation (G-OPE), which not only enhances the flexibility of OPE in different real-world situations but also preserves the key theoretical advantages of OPE over state-of-the-art methods. We employ G-OPE to perform posterior inference for individual documents within large text corpora. Our theoretical and experimental results show that the new approach outperforms OPE and other state-of-the-art methods.
Keywords
Topic models, posterior inference, Online MAP estimation, large-scale learning, non-convex optimization