On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference
Citation: Belinkov, Yonatan, Adam Poliak, Stuart M. Shieber, Benjamin Van Durme, and Alexander Rush. 2019. On Adversarial Removal of Hypothesis-only Bias in Natural Language Inference. In Proceedings of the Eighth Joint Conference on Lexical and Computational Semantics (*SEM 2019), Minneapolis, MN, June 6-7, 2019.
Abstract: Popular Natural Language Inference (NLI) datasets have been shown to be tainted by hypothesis-only biases. Adversarial learning may help models ignore sensitive biases and spurious correlations in data. We evaluate whether adversarial learning can be used in NLI to encourage models to learn representations free of hypothesis-only biases. Our analyses indicate that the representations learned via adversarial learning may be less biased, with only small drops in NLI accuracy.
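Adversarial removal of this kind is often realized with a gradient reversal trick: a hypothesis-only adversary is trained to predict the label from the encoder's representation, while the encoder receives the adversary's gradient negated, so it learns to discard the bias. The sketch below is a minimal, framework-free illustration of that idea under stated assumptions — `LAMBDA`, the toy gradient values, and all names are hypothetical, not the authors' implementation.

```python
LAMBDA = 1.0  # assumed adversary weight (hypothetical hyperparameter)

def grad_reverse_forward(h):
    # Forward pass: identity — the adversary sees the representation unchanged.
    return h

def grad_reverse_backward(grad, lam=LAMBDA):
    # Backward pass: negate (and scale) the gradient before it reaches
    # the encoder, so the encoder is pushed to *hurt* the adversary.
    return -lam * grad

# Tiny manual example: a scalar "encoder" h = w * x feeding two heads.
x, w = 2.0, 0.5
h = grad_reverse_forward(w * x)

# Made-up upstream gradients: d(loss_nli)/dh and d(loss_adv)/dh.
g_nli, g_adv = 0.3, 0.8

# Chain rule into w: the NLI gradient passes through normally,
# the adversary's gradient passes through the reversal layer.
g_encoder = g_nli * x + grad_reverse_backward(g_adv) * x
# Without reversal the encoder would receive (0.3 + 0.8) * 2 = 2.2;
# with reversal it receives (0.3 - 0.8) * 2 = -1.0, trading adversary
# accuracy (bias predictability) against NLI accuracy.
```

In practice the same two-line layer is dropped between the encoder and the adversarial classifier inside an autodiff framework; the rest of training is an ordinary joint optimization of the two losses.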
Citable link to this page: http://nrs.harvard.edu/urn-3:HUL.InstRepos:40827358
Collections: FAS Scholarly Articles