Imagine a future classroom where teachers work alongside artificial intelligence partners to ensure no student is left behind.
This is part of a larger vision of future classrooms where human instruction and AI technology interact to improve educational environments and the learning experience.
The research will play a critical role in ensuring the AI agent is a natural partner in the classroom: its language and vision capabilities allow it not only to hear what the teacher and each student are saying, but also to notice gestures (pointing, shrugging, head shaking), eye gaze, and facial expressions that reveal student attitudes and emotions.
For the past five years, we have been working to create a multimodal embodied avatar system, called “Diana”, that interacts with a human to perform various tasks.
The initial goal for the first year is to have the AI partner passively follow the students as they talk and interact; eventually, the partner will learn to intervene to make sure that everyone is equitably represented and participating in the classroom.
In a classroom interaction, Diana could guide students through lesson plans using dialogue and gesture, while also monitoring their progress, mood, and levels of satisfaction or frustration.
This exciting new research is starting to answer questions about how our avatar and agent technology can be used with students in the classroom.