Dialogue Act Annotation of a Multiparty Meeting Corpus with Discriminative Models. Proceedings of IberSpeech 2016, 2016. pp. 241-250.

Dialogue Act annotation is one of the main tasks in the development of dialogue systems. To simplify manual annotation, which is hard and expensive, statistical models can be used to provide a draft annotation that speeds up the process. Recently, discriminative statistical models such as N-Gram Transducers and Conditional Random Fields have shown good performance in draft Dialogue Act annotation of dialogues with two participants, but to date these models have not been compared on multiparty dialogues. This work compares the two discriminative models on the popular AMI multiparty meeting corpus. Our results show that on this type of corpus Conditional Random Fields perform better at Dialogue Act annotation, contrary to what has previously been reported for dialogues with only two participants.
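The draft-annotation idea described in the abstract can be illustrated with a toy conditional tagger: predict each turn's dialogue act from features of the utterance and the previously assigned act, the same kind of dependency that both N-Gram Transducers and Conditional Random Fields exploit. This is a minimal sketch in plain Python, not the models evaluated in the paper; the dialogue-act labels, example utterances, and the single feature template are invented for illustration.

```python
from collections import defaultdict, Counter

# Toy training dialogue: (utterance, dialogue-act) pairs.
# Labels and utterances are invented for illustration only.
train = [
    ("hello everyone", "Greet"),
    ("shall we start", "Question"),
    ("yes let's begin", "Answer"),
    ("what about the budget", "Question"),
    ("it looks fine to me", "Answer"),
    ("okay moving on", "Statement"),
]

def featurize(utterance, prev_act):
    # One conjoined feature: previous act + first word of the turn.
    # Real discriminative models use many richer, overlapping features.
    return (prev_act, utterance.split()[0])

# Count how often each feature co-occurs with each dialogue act.
counts = defaultdict(Counter)
prev = "<START>"
for utt, act in train:
    counts[featurize(utt, prev)][act] += 1
    prev = act

def predict(utterance, prev_act):
    # Pick the act most often seen with this feature;
    # back off to the globally most frequent act for unseen features.
    feat = featurize(utterance, prev_act)
    if counts[feat]:
        return counts[feat].most_common(1)[0][0]
    overall = Counter(a for c in counts.values() for a in c.elements())
    return overall.most_common(1)[0][0]

# A turn starting with "what" after an Answer was seen labelled Question.
print(predict("what should we do next", "Answer"))  # → Question
```

A human annotator would then correct this draft rather than label every turn from scratch, which is the cost-saving workflow the paper evaluates.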