flexdiam is a state-of-the-art platform for managing dialogue between an embodied artificial agent and a human user. Developed by CoAI JRC members Stefan Kopp, Hendrik Buschmeier, and collaborators, flexdiam takes a multimodal approach to dialogue management, integrating verbal cues with non-verbal ones such as gesture and gaze direction. flexdiam is designed to handle naturalistic everyday conversation: not only orderly turn-taking, but also interruptions, barge-ins, overly long responses, and a range of repair strategies. It thus provides an essential foundation for cooperative, cognition-enabled AI that can learn step by step through dialogue and feedback from a human partner.
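To illustrate the kind of behavior described above, the following is a minimal, hypothetical sketch of interruption-aware (barge-in) handling in an event-driven dialogue loop. It does not show flexdiam's actual architecture or API; all names (`DialogueManager`, `Event`, and so on) are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    source: str        # "user" or "agent"
    kind: str          # "speech_start" or "speech_end"
    payload: str = ""  # the (partial) utterance text

@dataclass
class DialogueManager:
    """Toy manager: if the user starts speaking while the agent is
    still talking (a barge-in), the agent aborts and yields the turn."""
    agent_speaking: bool = False
    log: List[str] = field(default_factory=list)

    def handle(self, ev: Event) -> None:
        if ev.source == "agent" and ev.kind == "speech_start":
            self.agent_speaking = True
            self.log.append(f"agent: {ev.payload}")
        elif ev.source == "user" and ev.kind == "speech_start":
            if self.agent_speaking:
                # Barge-in detected: stop output, yield the turn.
                self.agent_speaking = False
                self.log.append("agent: <aborts utterance, yields turn>")
            self.log.append(f"user: {ev.payload}")
        elif ev.kind == "speech_end":
            if ev.source == "agent":
                self.agent_speaking = False

dm = DialogueManager()
dm.handle(Event("agent", "speech_start", "Your appointment is at ten..."))
dm.handle(Event("user", "speech_start", "Wait, which day?"))
print(dm.log)
```

A real incremental dialogue manager would additionally revise its partial interpretations and plans as new input arrives; this sketch only captures the turn-yielding step.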
To explore the capabilities of flexdiam, Kopp and colleagues conducted a series of studies with older adults and adults with cognitive impairments, in collaboration with the v. Bodelschwinghsche Stiftungen Bethel. The embodied agent “Billie” engaged participants in dialogue to help them manage appointments and maintain daily routines. Results from several of these studies have been published.
In this brief demo, Billie smoothly manages the interruptions, self-corrections, and overlapping talk that occur in normal everyday conversation.