Imperial College London

Professor Lucia Specia

Faculty of Engineering, Department of Computing

Chair in Natural Language Processing

Contact

l.specia

Location

572a Huxley Building, South Kensington Campus

Publications

Citation

BibTeX format

@unpublished{Caglayan:2020,
author = {Caglayan, O and Madhyastha, P and Specia, L},
publisher = {arXiv},
title = {Curious case of language generation evaluation metrics: a cautionary tale},
url = {http://arxiv.org/abs/2010.13588v1},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - Automatic evaluation of language generation systems is a well-studied problem in Natural Language Processing. While novel metrics are proposed every year, a few popular metrics remain as the de facto metrics to evaluate tasks such as image captioning and machine translation, despite their known limitations. This is partly due to ease of use, and partly because researchers expect to see them and know how to interpret them. In this paper, we urge the community for more careful consideration of how they automatically evaluate their models by demonstrating important failure cases on multiple datasets, language pairs and tasks. Our experiments show that metrics (i) usually prefer system outputs to human-authored texts, (ii) can be insensitive to correct translations of rare words, (iii) can yield surprisingly high scores when given a single sentence as system output for the entire test set.
AU - Caglayan,O
AU - Madhyastha,P
AU - Specia,L
PB - arXiv
PY - 2020///
TI - Curious case of language generation evaluation metrics: a cautionary tale
UR - http://arxiv.org/abs/2010.13588v1
UR - http://hdl.handle.net/10044/1/84556
ER -
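
The failure cases listed in the abstract, in particular (iii), can be probed with off-the-shelf tooling. The sketch below is not the paper's experimental setup; it assumes the sacrebleu package and a few hypothetical toy reference sentences, and only shows how one could score a single repeated sentence against an entire reference set to check a metric's behaviour. On toy data the exact number is not meaningful; on real test sets the paper reports surprisingly high scores for such degenerate outputs.

# Minimal sketch (assumptions: sacrebleu installed, toy references below are
# hypothetical). Illustrates scoring a constant "system output" against a
# whole reference set, as in failure case (iii) of the abstract.
from sacrebleu.metrics import BLEU

# Toy reference set; in practice this would be a full test set.
references = [
    "the cat sat on the mat .",
    "a man is riding a bicycle down the street .",
    "two children are playing football in the park .",
]

# Degenerate system output: the same sentence repeated for every instance.
constant_hypothesis = ["a man is riding a bicycle down the street ."] * len(references)

bleu = BLEU()
# corpus_score takes a list of hypotheses and a list of reference streams.
score = bleu.corpus_score(constant_hypothesis, [references])
print(score)  # corpus-level BLEU for the constant output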