Publication:
Adapting Sequence Models for Sentence Correction

Date

2017

Publisher

Association for Computational Linguistics
The Harvard community has made this article openly available.

Citation

Schmaltz, Allen, Yoon Kim, Alexander M. Rush, and Stuart M. Shieber. “Adapting Sequence Models for Sentence Correction.” In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2807–2813. Copenhagen, Denmark, September 7–11, 2017.

Abstract

In a controlled experiment of sequence-to-sequence approaches for the task of sentence correction, we find that character-based models are generally more effective than word-based models and models that encode subword information via convolutions, and that modeling the output data as a series of diffs improves effectiveness over standard approaches. Our strongest sequence-to-sequence model improves over our strongest phrase-based statistical machine translation model, with access to the same data, by $6 M^2$ (0.5 GLEU) points. Additionally, in the data environment of the standard CoNLL-2014 setup, we demonstrate that modeling (and tuning against) diffs yields similar or better $M^2$ scores with simpler models and/or significantly less data than previous sequence-to-sequence approaches.
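The abstract's key modeling choice is representing the target not as a fully rewritten sentence but as a series of diffs against the source. A minimal sketch of that idea, using Python's standard `difflib` to compute the edits (the `<del>`/`<ins>` tags here are illustrative placeholders, not the paper's exact markup):

```python
import difflib

def to_diff_target(source_tokens, corrected_tokens):
    """Encode a corrected sentence as the source plus diff-annotated edits."""
    matcher = difflib.SequenceMatcher(a=source_tokens, b=corrected_tokens)
    out = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":
            # Unchanged spans are copied through verbatim.
            out.extend(source_tokens[i1:i2])
        else:
            # "replace", "delete", and "insert" become tagged edit spans.
            if i1 < i2:
                out.append("<del>")
                out.extend(source_tokens[i1:i2])
                out.append("</del>")
            if j1 < j2:
                out.append("<ins>")
                out.extend(corrected_tokens[j1:j2])
                out.append("</ins>")
    return out

src = "He go to school yesterday .".split()
tgt = "He went to school yesterday .".split()
print(to_diff_target(src, tgt))
# ['He', '<del>', 'go', '</del>', '<ins>', 'went', '</ins>',
#  'to', 'school', 'yesterday', '.']
```

Because most tokens are unchanged, the decoder mostly copies and only occasionally emits edit spans, which is one intuition for why the diff encoding helps over predicting the full corrected sentence from scratch.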

Description

Citation: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2807–2813, Copenhagen, Denmark, September 7–11, 2017.

Terms of Use

This article is made available under the terms and conditions applicable to Other Posted Material (LAA), as set forth in the repository's Terms of Service.
