CODET: A Benchmark for Contrastive Dialectal Evaluation of Machine Translation

Abstract

Neural machine translation (NMT) systems exhibit limited robustness to source-side linguistic variation: their performance tends to degrade when faced with even slight deviations in language usage, such as different domains or variations introduced by second-language speakers. It is intuitive to extend this observation to dialectal variation as well, but resources that allow the community to evaluate MT systems along this dimension are limited. To alleviate this issue, we compile and release CODET, a contrastive dialectal benchmark encompassing 882 different variations from nine different languages. We also quantitatively demonstrate the challenges large MT models face in effectively translating dialectal variants. We release all code and data.

Publication
Findings of EACL 2024
Md Mahfuz Ibn Alam
PhD Student

I work on robustness.

Sina Ahmadi
Postdoctoral Researcher, University of Zurich
Antonios Anastasopoulos
Assistant Professor

I work on multilingual models, machine translation, speech recognition, and NLP for under-served languages.