Sequential Discrete Latent Variables for Language Modeling
Abstract: We introduce a variant of the variational RNN (VRNN) model with discrete latent states to increase the interpretability of RNN-based language models. Finding that naively training the model leads to the same posterior collapse phenomenon observed in many other autoregressive tasks, we turn to the special case of an HMM, where exact inference is tractable, and examine the optimization challenges in that setting. We find that sampling to estimate the optimization objective likely makes optimization of the inference network intractable. Since the exact ELBO can be computed for an HMM, we train an inference network for an HMM generative model (without any posterior collapse), then initialize a VRNN with the HMM's parameters and inference network. Fine-tuning this model and adding non-Markovian transitions between latent time steps lets it approach the performance of an LSTM-based language model while maintaining a sparse, discrete latent state.
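The abstract's key observation is that the HMM special case admits exact inference, so the training objective can be evaluated without the sampling step that appears to hinder the inference network. As a minimal sketch (not from the thesis, with illustrative variable names), the forward algorithm below computes the exact log marginal likelihood log p(x_{1:T}) of a discrete-emission HMM in log space:

```python
# Sketch only: exact HMM log marginal likelihood via the forward algorithm.
# Parameter names (log_pi, log_A, log_B) are assumptions for illustration.
import numpy as np
from scipy.special import logsumexp

def hmm_log_marginal(log_pi, log_A, log_B, obs):
    """Exact log p(x_{1:T}) for an HMM with K states and V emission symbols.

    log_pi: (K,)  log initial state distribution
    log_A:  (K,K) log transitions, log_A[i, j] = log p(z_t=j | z_{t-1}=i)
    log_B:  (K,V) log emissions,   log_B[k, v] = log p(x_t=v | z_t=k)
    obs:    length-T sequence of observed token ids
    """
    # log alpha_1(k) = log pi(k) + log p(x_1 | z_1 = k)
    alpha = log_pi + log_B[:, obs[0]]
    for x_t in obs[1:]:
        # log alpha_t(j) = logsumexp_i [alpha_{t-1}(i) + log A[i, j]] + log B[j, x_t]
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, x_t]
    # Marginalize out the final hidden state.
    return logsumexp(alpha)

# Tiny usage example with random parameters.
rng = np.random.default_rng(0)
K, V, T = 4, 10, 6
log_pi = np.log(rng.dirichlet(np.ones(K)))
log_A = np.log(rng.dirichlet(np.ones(K), size=K))
log_B = np.log(rng.dirichlet(np.ones(V), size=K))
obs = rng.integers(0, V, size=T)
print(hmm_log_marginal(log_pi, log_A, log_B, obs))
```

Because this quantity (and the corresponding exact posteriors) is available in closed form, an HMM inference network can be trained without Monte Carlo estimates of the objective, which is the property the thesis exploits before moving to the richer VRNN.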
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:38811559