Co-Constructive Task Learning

A visualization of a co-constructive interaction between a human tutor and a robot learner

The Co-Constructive Task Learning (CCTL) project focuses on interactions in which a robot learns a new task or skill from a human tutor. Organized by CoAI JRC members Britta Wrede, Anna-Lisa Vollmer and Michael Beetz, CCTL takes its inspiration from the way human children learn from their parents, which paradigmatically involves co-construction: both parent and child play an active role in shaping the meaning, goals, and structure of the learning interaction. The objective of CCTL is to develop an analogous co-constructive task learning approach for human-robot learning interactions. In a co-constructive approach, human tutor and robot learner give each other feedback to establish and preserve a shared understanding of the task, its purpose, and the steps needed to carry it out. The co-constructive approach offers a promising path towards achieving the CoAI JRC research objective of developing AI that learns from humans the way humans learn from each other. You can read more in a recent white paper by the CCTL team.

In a co-constructive learning interaction, the human tutor needs to communicate the task to the robot learner, monitor feedback to infer the robot's current understanding, and provide scaffolding to help the robot progress. The robot, in turn, needs to estimate the tutor's model of the task, use feedback to bring its own model into agreement with the tutor's, and ask for help or clarification as needed. A key challenge for CCTL is to develop a framework for verbal and non-verbal communication between robot and human that can support this co-constructive process.
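The feedback loop described above can be sketched in code. The following is a minimal illustrative sketch, not the project's actual framework: all class and method names (`TaskModel`, `RobotLearner`, `report_understanding`, `apply_correction`) are hypothetical, and real task models would be far richer than a list of step labels.

```python
from dataclasses import dataclass, field

@dataclass
class TaskModel:
    """Hypothetical task representation: a goal plus an ordered list of steps."""
    goal: str
    steps: list = field(default_factory=list)

class RobotLearner:
    """Illustrative learner that aligns its task model with tutor feedback."""

    def __init__(self, goal: str):
        self.model = TaskModel(goal=goal)

    def observe(self, demonstrated_step: str):
        # Incorporate a step from the tutor's demonstration into the model.
        self.model.steps.append(demonstrated_step)

    def report_understanding(self) -> TaskModel:
        # Feedback channel: expose the current model so the tutor can inspect it.
        return self.model

    def apply_correction(self, index: int, corrected_step: str):
        # Tutor feedback revises the robot's model toward the tutor's model.
        self.model.steps[index] = corrected_step

# One round of co-construction: demonstrate, inspect, correct.
robot = RobotLearner(goal="pour water into the mug")
robot.observe("grasp the bottle")
robot.observe("tilt the bottle over the table")  # robot misunderstood this step
if robot.report_understanding().steps[1] != "tilt the bottle over the mug":
    robot.apply_correction(1, "tilt the bottle over the mug")
```

The point of the sketch is the bidirectionality: the robot exposes its understanding (`report_understanding`), and the tutor's correction flows back into the same model the robot acts on.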

Using VR to teach robots new tasks

Human parents adapt their motions and gestures when teaching a task to a child, but robots have difficulty tracking human action demonstrations in the real world. To address this, CCTL exploits physically realistic simulated environments in which the robot learns using virtual demonstrations from a human tutor.

This approach prompts new research questions: How do humans naturally interact with robots in VR, and how does this influence their teaching behavior? And, with respect to co-constructive teaching: how can robots benefit from co-constructed input while remaining transparent about what they understand? These are among the questions the CCTL project tackles.

A human tutor demonstrating a pouring action using VR controllers
A human tutor demonstrates a pouring task to a virtual robot learner in VR

Virtual demonstrations like the one in this video provide the robot with narrative-enabled episodic memories (NEEMs) that form the basis of a generalized understanding of the task.
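To make the idea of an episodic memory concrete, here is a minimal sketch of a NEEM-like record: symbolic event annotations (the "narrative") layered over a recorded episode. The field names and structure are illustrative assumptions, not the actual NEEM schema used by the project.

```python
from dataclasses import dataclass

@dataclass
class Event:
    """One annotated segment of an episode (names are illustrative)."""
    action: str            # symbolic label, e.g. "Pouring"
    start: float           # seconds from episode start
    end: float
    participants: tuple    # objects involved, (actor/instrument, target)

@dataclass
class EpisodicMemory:
    """Minimal NEEM-like record: a symbolic narrative over one demonstration."""
    task: str
    events: list

memory = EpisodicMemory(
    task="pour water",
    events=[
        Event("Grasping", 0.0, 2.1, ("hand", "bottle")),
        Event("Pouring", 2.1, 6.4, ("bottle", "mug")),
    ],
)

# Generalization would query across many such episodes, e.g. asking
# which kinds of objects pouring actions are directed at.
pour_targets = {e.participants[1] for e in memory.events if e.action == "Pouring"}
```

A single record like this is not yet general; the generalized task understanding comes from querying many such episodes for recurring structure.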