Sentence-level grammatical error identification as sequence-to-sequence correction
Citation: Schmaltz, Allen, Yoon Kim, Alexander M. Rush, and Stuart M. Shieber. 2016. Sentence-level grammatical error identification as sequence-to-sequence correction. Proceedings of the Eleventh Workshop on Innovative Use of NLP for Building Educational Applications, NAACL HLT, San Diego, California, June 16, 2016.
Abstract: We demonstrate that an attention-based encoder-decoder model can be used for sentence-level grammatical error identification in the Automated Evaluation of Scientific Writing (AESW) Shared Task 2016. Beyond error identification, attention-based encoder-decoder models can also generate corrections, which is of interest for certain end-user applications. We show that a character-based encoder-decoder model is particularly effective, outperforming other results on the AESW Shared Task on its own and showing gains over a word-based counterpart. Our final model, a combination of three character-based encoder-decoder models, one word-based encoder-decoder model, and a sentence-level CNN, is the highest-performing system on the AESW 2016 binary prediction Shared Task.
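The abstract describes an attention-based, character-level encoder-decoder. The sketch below is a minimal, hypothetical PyTorch illustration of that general architecture (a bidirectional LSTM encoder over characters, dot-product attention, and an LSTM-cell decoder), not the authors' exact model; all layer sizes and names are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CharSeq2Seq(nn.Module):
    """Minimal character-level attention encoder-decoder (illustrative sketch,
    not the architecture from the paper)."""

    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Bidirectional encoder over the input character sequence.
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True,
                               bidirectional=True)
        # Decoder cell consumes [char embedding; attention context].
        self.decoder = nn.LSTMCell(hidden + 2 * hidden, hidden)
        # Projects the decoder state into the encoder's 2H space for scoring.
        self.attn = nn.Linear(hidden, 2 * hidden)
        self.out = nn.Linear(hidden + 2 * hidden, vocab_size)

    def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        enc_out, _ = self.encoder(self.embed(src))   # (B, S, 2H)
        batch = src.size(0)
        h = src.new_zeros(batch, self.attn.in_features, dtype=torch.float)
        c = torch.zeros_like(h)
        logits = []
        for t in range(tgt.size(1)):
            # Dot-product attention over encoder states.
            query = self.attn(h).unsqueeze(1)        # (B, 1, 2H)
            scores = (query * enc_out).sum(-1)       # (B, S)
            weights = torch.softmax(scores, dim=-1)
            context = (weights.unsqueeze(-1) * enc_out).sum(1)  # (B, 2H)
            step_in = torch.cat([self.embed(tgt[:, t]), context], dim=-1)
            h, c = self.decoder(step_in, (h, c))
            logits.append(self.out(torch.cat([h, context], dim=-1)))
        return torch.stack(logits, dim=1)            # (B, T, V)

# Toy forward pass: batch of 2 sequences, source length 7, target length 5.
model = CharSeq2Seq(vocab_size=30)
src = torch.randint(0, 30, (2, 7))
tgt = torch.randint(0, 30, (2, 5))
logits = model(src, tgt)
```

For the shared task's binary prediction setting, a sentence-level error decision could be derived from such a model, e.g. by checking whether the decoded output differs from the input; that reduction is one plausible reading of the abstract, not a detail it specifies.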
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:27266472