Fluent multimodal behavior realization for virtual humans

Abstract: Human conversations are highly dynamic, responsive interactions. In such interactions, utterances are produced incrementally and are subject to on-the-fly adaptation and (self-)interruption. I am interested in developing user interfaces with virtual humans that allow for such fluent interaction with a human user. My current research focuses both on the general architecture of such interfaces and, more specifically, on multimodal behavior generation that enables fluent interaction.

To this end, we have developed AsapRealizer, a realizer for BML (Behavior Markup Language) behavior, that is, synchronized speech, gesture, facial expression, and so on. AsapRealizer has unique capabilities that enable intuitive, human-like fluent interaction with virtual humans: it provides a flexible behavior plan representation, supports adaptation and interruption of ongoing behavior, and allows behavior to be composed out of small increments so that it can be realized with low latency. In this talk I will introduce AsapRealizer, give an overview of its capabilities for fluent behavior realization, and show some applications that have profited from it. I will also discuss the behavior and intent planning possibilities that AsapRealizer enables and present some of our recent work in this direction.
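For readers unfamiliar with BML, the following is a minimal sketch of a BML block of the kind a realizer like AsapRealizer consumes. The element and attribute names follow the BML 1.0 standard; the ids and the specific utterance are illustrative only:

<bml xmlns="http://www.bml-initiative.org/bml/bml-1.0" id="bml1">
  <!-- Speech with a named synchronization point inside the utterance -->
  <speech id="speech1">
    <text>Nice to <sync id="attention"/>meet you!</text>
  </speech>
  <!-- Gesture whose stroke is aligned to the sync point in the speech -->
  <gesture id="gesture1" lexeme="BEAT" stroke="speech1:attention"/>
  <!-- Head nod starting when the gesture's stroke begins -->
  <head id="head1" lexeme="NOD" start="gesture1:stroke"/>
</bml>

The cross-references such as speech1:attention are what make the behaviors in a block mutually synchronized: the realizer resolves them into a single multimodal timing plan before (and, in AsapRealizer's incremental setting, while) executing it.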

Presenter: Herwin van Welbergen has been a postdoctoral researcher in the Social Cognitive Systems Group at Bielefeld University since receiving his PhD from the University of Twente in 2011. Herwin recently moved to Hamburg, and we hope to engage in more scientific collaboration in the future.
 