Publications

Abstract

Miguel Domingo, Mercedes García-Martínez, Alexandre Helle, Francisco Casacuberta, Manuel Herranz. How Much Does Tokenization Affect Neural Machine Translation? In Proceedings of the 20th International Conference on Computational Linguistics and Intelligent Text Processing, 2019.

Tokenization or segmentation is a broad concept that covers simple processes, such as separating punctuation from words, as well as more sophisticated ones, such as applying morphological knowledge. Neural Machine Translation (NMT) requires a limited-size vocabulary to keep the computational cost manageable, and enough examples of each token to estimate word embeddings. Separating punctuation and splitting tokens into words or subwords has proven helpful for reducing the vocabulary and increasing the number of examples of each word, improving translation quality. Tokenization is more challenging when dealing with languages that have no separator between words. In order to assess the impact of tokenization on the quality of the final translation in NMT, we experimented with five tokenizers over ten language pairs. We concluded that tokenization significantly affects the final translation quality and that the best tokenizer differs from one language pair to another.
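
To make the subword splitting mentioned above concrete, the sketch below implements the merge-learning loop of byte-pair encoding (BPE), one common subword tokenization scheme; the toy corpus, the number of merges, and all function names are illustrative assumptions, not the paper's experimental setup.

import re
from collections import Counter

def get_pair_counts(vocab):
    # Count adjacent symbol pairs across the corpus vocabulary,
    # weighted by word frequency.
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    # Merge every occurrence of the given symbol pair into a single symbol.
    merged = {}
    bigram = re.escape(" ".join(pair))
    pattern = re.compile(r"(?<!\S)" + bigram + r"(?!\S)")
    for word, freq in vocab.items():
        merged[pattern.sub("".join(pair), word)] = freq
    return merged

# Toy corpus: each word is pre-split into characters plus an
# end-of-word marker, so merges cannot cross word boundaries.
corpus = Counter(["low", "low", "lower", "newest", "newest", "widest"])
vocab = {" ".join(word) + " </w>": freq for word, freq in corpus.items()}

# Learn a handful of merges; real systems learn tens of thousands.
for step in range(10):
    pairs = get_pair_counts(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_pair(best, vocab)
    print(f"merge {step + 1}: {best}")

print(vocab)

Each learned merge replaces the most frequent adjacent symbol pair with a new symbol, so frequent words collapse into single units while rare words remain split into smaller, better-attested subwords, which is precisely the vocabulary-reduction effect the abstract describes.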