Sequential Discrete Latent Variables for Language Modeling
Abstract
We introduce a variant of the variational RNN (VRNN) model with discrete latent states to increase interpretability in RNN-based language models. Finding that naively training the model results in the same posterior collapse phenomenon observed in many other autoregressive tasks, we take the special case of an HMM, where exact inference is tractable, and examine the optimization challenges in that setting. We find that estimating the training objective by sampling likely renders optimization of the inference network intractable. Since the exact ELBO can be computed in the case of an HMM, we train an inference network for an HMM generative model (without any posterior collapse), then initialize a VRNN using the HMM's parameters and inference network. Fine-tuning this model and adding non-Markovian transitions between latent time steps lets it approach the performance of an LSTM-based language model, while maintaining a sparse, discrete latent state.
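The tractability the abstract relies on comes down to the HMM forward algorithm: the marginal log p(x) sums over all latent state paths in O(TK^2) time, so both the exact likelihood and the exact ELBO are computable. As a minimal sketch of that forward pass (NumPy/SciPy in log space; the function name and toy parameters are illustrative assumptions, not the thesis's implementation):

```python
import numpy as np
from scipy.special import logsumexp

def hmm_log_likelihood(log_pi, log_A, log_B, obs):
    """Exact log p(x_1..x_T) for a discrete-emission HMM via the forward algorithm.

    log_pi: (K,)   log initial state distribution
    log_A:  (K, K) log transitions, log_A[i, j] = log p(z_t = j | z_{t-1} = i)
    log_B:  (K, V) log emissions,   log_B[k, v] = log p(x_t = v | z_t = k)
    obs:    (T,)   observed token ids
    """
    alpha = log_pi + log_B[:, obs[0]]  # log alpha_1(k)
    for x_t in obs[1:]:
        # log alpha_t(j) = logsumexp_i [alpha_{t-1}(i) + log A(i, j)] + log B(j, x_t)
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, x_t]
    return logsumexp(alpha)  # log p(x) = logsumexp_k alpha_T(k)

# Toy usage: K = 3 states, V = 5 tokens, random normalized parameters.
rng = np.random.default_rng(0)
log_pi = np.log(rng.dirichlet(np.ones(3)))
log_A = np.log(rng.dirichlet(np.ones(3), size=3))
log_B = np.log(rng.dirichlet(np.ones(5), size=3))
print(hmm_log_likelihood(log_pi, log_A, log_B, np.array([0, 2, 1, 4])))
```

Because log p(x) is exact here, the gap between it and the ELBO, i.e. KL(q(z|x) || p(z|x)), can be measured directly rather than estimated by sampling, which is what makes the HMM a controlled setting for diagnosing the posterior collapse described above.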