Tampere University of Technology

TUTCRIS Research Portal

Multimodal and mobile conversational Health and Fitness Companions

Research output: Contribution to journal › Article › Scientific › peer-review

Details

Original language: English
Pages (from-to): 192-209
Number of pages: 18
Journal: Computer Speech and Language
Volume: 25
Issue number: 2
DOIs
Publication status: Published - Apr 2011
Publication type: A1 Journal article-refereed

Abstract

Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. This paper describes how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. We present concrete system architectures; virtual, physical, and mobile multimodal interfaces; and interaction management techniques for such Companions. In particular, we show how knowledge representation and the separation of low-level interaction modelling from high-level reasoning at the domain level make it possible to implement distributed, but still coherent, interaction with Companions. The distribution is enabled by using a dialogue plan to communicate information from the domain-level planner to dialogue management, and from there to a separate mobile interface. The model enables each part of the system to handle the same information from its own perspective without containing overlapping logic, and makes it possible to separate task-specific and conversational dialogue management from each other. In addition to technical descriptions, results from the first evaluations of the Companion interfaces are presented.
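The architecture sketched in the abstract (a shared dialogue plan produced by a domain-level planner and consumed independently by dialogue management and a mobile interface) can be illustrated with a minimal sketch. All class and step names below are hypothetical illustrations of the described pattern, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class PlanStep:
    """One task-level step produced by the domain planner."""
    task: str                      # e.g. a hypothetical "greet" or "ask" step
    payload: dict = field(default_factory=dict)

@dataclass
class DialoguePlan:
    """Shared representation passed from the planner to dialogue management
    and on to the mobile interface; each component interprets the same steps
    from its own perspective, with no overlapping logic."""
    steps: list

class DomainPlanner:
    """High-level domain reasoning: decides WHAT the Companion talks about."""
    def make_plan(self) -> DialoguePlan:
        return DialoguePlan(steps=[
            PlanStep("greet"),
            PlanStep("ask", {"slot": "exercise_minutes"}),
            PlanStep("advise", {"topic": "fitness"}),
        ])

class DialogueManager:
    """Low-level interaction modelling: realises each plan step as
    conversational turns (task-specific dialogue only)."""
    def realise(self, plan: DialoguePlan) -> list:
        return [f"turn:{step.task}" for step in plan.steps]

class MobileInterface:
    """Separate mobile front end: renders the same plan as UI prompts,
    without duplicating the planner's or dialogue manager's logic."""
    def render(self, plan: DialoguePlan) -> list:
        return [f"screen:{step.task}" for step in plan.steps]

plan = DomainPlanner().make_plan()
print(DialogueManager().realise(plan))  # ['turn:greet', 'turn:ask', 'turn:advise']
print(MobileInterface().render(plan))   # ['screen:greet', 'screen:ask', 'screen:advise']
```

The key design point mirrored here is that distribution stays coherent because both consumers read the same plan object rather than each re-deriving the task structure.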

Keywords

  • Cognitive modelling, Companions, Conversational spoken dialogue systems, Dialogue management, Embodied conversational agents, Mobile interfaces