Evaluation of HMM-based models for the annotation of unsegmented dialogue turns. Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC), 2010, pp. 1608-1613. European Language Resources Association (ELRA).

Corpus-based dialogue systems rely on statistical models whose parameters are inferred from annotated dialogues. Dialogues are usually annotated with Dialogue Acts (DA), and manual annotation is difficult and time-consuming; several semi-automatic annotation processes have therefore been proposed to speed up the task. The standard annotation model is based on Hidden Markov Models (HMM). In this work, we explore how different types of HMM affect annotation accuracy on two dialogue corpora with dissimilar features. The results show that some model types improve on the standard HMM in a human-computer task-oriented dialogue corpus, but their impact is smaller in a human-human non-task-oriented dialogue corpus.
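To illustrate the kind of HMM-based annotation the abstract refers to, the sketch below runs Viterbi decoding over a toy model that assigns DA labels to a short observation sequence. This is not the paper's model: the two DA labels, the vocabulary, and all probabilities are made-up values chosen only to show the mechanics of decoding a best DA sequence.

```python
# Illustrative only: a tiny HMM that labels observations with Dialogue Acts.
# States are hypothetical DA labels; all probabilities are invented toy values.

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most likely DA (state) sequence for the observations."""
    # Probability of the best path ending in each state at time 0.
    V = [{s: start_p[s] * emit_p[s].get(obs[0], 1e-9) for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor state for reaching s at time t.
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s].get(obs[t], 1e-9), p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical DA labels, transitions, and word emissions.
states = ["Question", "Answer"]
start_p = {"Question": 0.6, "Answer": 0.4}
trans_p = {"Question": {"Question": 0.3, "Answer": 0.7},
           "Answer": {"Question": 0.6, "Answer": 0.4}}
emit_p = {"Question": {"where": 0.4, "is": 0.3, "yes": 0.05},
          "Answer": {"where": 0.05, "is": 0.2, "yes": 0.5}}

print(viterbi(["where", "yes"], states, start_p, trans_p, emit_p))
# → ['Question', 'Answer']
```

In a real semi-automatic annotation setting, the transition and emission tables would be estimated from an already-annotated portion of the corpus rather than written by hand, and the decoded DA sequence would be offered to a human annotator for correction.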