Adapting Sequence Models for Sentence Correction
Published Version
https://doi.org/10.18653/v1/d17-1298
Citation
Schmaltz, Allen, Yoon Kim, Alexander M. Rush, and Stuart M. Shieber. “Adapting Sequence Models for Sentence Correction.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2807–2813. Copenhagen, Denmark, September 7–11, 2017.
Abstract
In a controlled experiment of sequence-to-sequence approaches for the task of sentence correction, we find that character-based models are generally more effective than word-based models and models that encode subword information via convolutions, and that modeling the output data as a series of diffs improves effectiveness over standard approaches. Our strongest sequence-to-sequence model improves over our strongest phrase-based statistical machine translation model, with access to the same data, by 6 $M^2$ (0.5 GLEU) points. Additionally, in the data environment of the standard CoNLL-2014 setup, we demonstrate that modeling (and tuning against) diffs yields similar or better $M^2$ scores with simpler models and/or significantly less data than previous sequence-to-sequence approaches.
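To make the diff-based output representation concrete, the following is a minimal sketch of how a corrected target sentence might be encoded as a series of diffs against the source, assuming illustrative <del>/<ins> tag names and a word-level alignment computed with Python's standard difflib; the exact markers and alignment procedure used in the paper may differ.

# Sketch: encode a corrected sentence as a series of diffs against the source.
# The <del>/</del> and <ins>/</ins> tags are illustrative assumptions, not
# necessarily the paper's exact markers.
import difflib

def diff_encode(source_tokens, target_tokens):
    """Copy unchanged spans; wrap deleted/inserted spans in diff tags."""
    matcher = difflib.SequenceMatcher(a=source_tokens, b=target_tokens)
    out = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            out.extend(source_tokens[i1:i2])
        else:  # "replace", "delete", or "insert"
            if i1 < i2:
                out.extend(["<del>"] + source_tokens[i1:i2] + ["</del>"])
            if j1 < j2:
                out.extend(["<ins>"] + target_tokens[j1:j2] + ["</ins>"])
    return out

if __name__ == "__main__":
    src = "We discussed about the problem .".split()
    tgt = "We discussed the problem .".split()
    print(" ".join(diff_encode(src, tgt)))
    # -> We discussed <del> about </del> the problem .

Under this scheme the decoder reproduces most of the input verbatim and only generates tags around changed spans, which is what the abstract credits for the improved effectiveness over emitting the full corrected sentence directly.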
Terms of Use
This article is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
Citable link to this page
http://nrs.harvard.edu/urn-3:HUL.InstRepos:42519071
Collections
- FAS Scholarly Articles [17582]