Imperial College London

Professor Lucia Specia

Faculty of Engineering, Department of Computing

Chair in Natural Language Processing
 
 
 

Contact

 

l.specia

 
 

Location

 

572a Huxley Building, South Kensington Campus



 

Publications

Citation

BibTeX format

@article{Fomicheva:2019:10.1162/coli_a_00356,
author = {Fomicheva, M and Specia, L},
doi = {10.1162/coli_a_00356},
journal = {Computational Linguistics},
pages = {515--558},
title = {Taking MT evaluation metrics to extremes: beyond correlation with human judgments},
url = {http://dx.doi.org/10.1162/coli_a_00356},
volume = {45},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Automatic Machine Translation (MT) evaluation is an active field of research, with a handful of new metrics devised every year. Evaluation metrics are generally benchmarked against manual assessment of translation quality, with performance measured in terms of overall correlation with human scores. Much work has been dedicated to the improvement of evaluation metrics to achieve a higher correlation with human judgments. However, little insight has been provided regarding the weaknesses and strengths of existing approaches and their behavior in different settings. In this work we conduct a broad meta-evaluation study of the performance of a wide range of evaluation metrics focusing on three major aspects. First, we analyze the performance of the metrics when faced with different levels of translation quality, proposing a local dependency measure as an alternative to the standard, global correlation coefficient. We show that metric performance varies significantly across different levels of MT quality: Metrics perform poorly when faced with low-quality translations and are not able to capture nuanced quality distinctions. Interestingly, we show that evaluating low-quality translations is also more challenging for humans. Second, we show that metrics are more reliable when evaluating neural MT than the traditional statistical MT systems. Finally, we show that the difference in the evaluation accuracy for different metrics is maintained even if the gold standard scores are based on different criteria.
AU - Fomicheva,M
AU - Specia,L
DO - 10.1162/coli_a_00356
EP - 558
PY - 2019///
SN - 0891-2017
SP - 515
TI - Taking MT evaluation metrics to extremes: beyond correlation with human judgments
T2 - Computational Linguistics
UR - http://dx.doi.org/10.1162/coli_a_00356
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000489035700004&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - https://www.mitpressjournals.org/doi/full/10.1162/coli_a_00356
UR - http://hdl.handle.net/10044/1/79480
VL - 45
ER -
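
The abstract above describes benchmarking automatic MT metrics by how well their scores correlate with human judgments, both globally and within different levels of translation quality. Below is a minimal illustrative sketch of that idea in Python, using synthetic scores and Pearson correlation from SciPy. The data, noise model, and quality bands are assumptions chosen for illustration only; this is not the paper's actual local dependency measure or its results.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Synthetic human adequacy scores and automatic metric scores for 1,000 segments
# (illustrative only; not data from the paper).
human = rng.uniform(0, 100, 1000)
metric = human + rng.normal(0, 15, 1000)      # metric loosely tracks human judgments
low = human < 30
metric[low] += rng.normal(0, 30, low.sum())   # assume extra noise on low-quality output

# Global correlation: the standard way metrics are benchmarked.
r_global, _ = pearsonr(metric, human)
print(f"global Pearson r = {r_global:.3f}")

# Correlation within quality bands: a simplified stand-in for examining
# metric behaviour at different levels of translation quality.
for lo, hi in [(0, 30), (30, 70), (70, 100)]:
    band = (human >= lo) & (human < hi)
    r_band, _ = pearsonr(metric[band], human[band])
    print(f"quality {lo:>3}-{hi:<3}: Pearson r = {r_band:.3f}")

Running the sketch typically shows a noticeably lower correlation in the lowest quality band than globally, which is the kind of behaviour the paper's meta-evaluation is designed to expose.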