Emotion-sensing computer software
that models and responds to students’ cognitive and emotional states, including
frustration and boredom, has been developed by University of Notre Dame
assistant professor of psychology Sidney D’Mello, Art Graesser of the University of Memphis, and a colleague from
the Massachusetts Institute of Technology. D’Mello is also a concurrent assistant
professor of computer science and engineering.
The new technology, which mirrors
the interaction of human tutors, not only offers tremendous learning
possibilities for students but also redefines human-computer interaction.
AutoTutor and Affective
AutoTutor gauge a student’s level of knowledge by asking probing
questions and analyzing the responses; proactively
identify and correct misconceptions; respond to the student’s own
questions, gripes, and comments; and can even sense a student’s frustration or
boredom through facial expressions and body posture, dynamically changing their
strategies to help the student conquer those negative emotions.
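To make that assess-sense-adapt cycle concrete, here is a minimal Python sketch of one tutoring turn. Every name and number in it (the StudentModel fields, the keyword-overlap scoring, the thresholds) is a simplifying assumption for illustration; AutoTutor’s actual dialogue and affect models are trained systems, not hand-set rules.

```python
# Hypothetical sketch of the tutoring loop described above; these names do
# not come from the actual AutoTutor codebase.
from dataclasses import dataclass
import random

@dataclass
class StudentModel:
    knowledge: float = 0.5    # estimated mastery, 0..1
    frustration: float = 0.0  # estimated negative affect, 0..1

def score_answer(answer: str, ideal_keywords: set[str]) -> float:
    """Crude stand-in for AutoTutor's semantic answer evaluation:
    fraction of expected keywords present in the student's answer."""
    words = set(answer.lower().split())
    return len(words & ideal_keywords) / len(ideal_keywords)

def sense_affect() -> float:
    """Stand-in for camera/posture-based affect sensing; here, random noise."""
    return random.random()

def choose_strategy(model: StudentModel) -> str:
    """Switch tutoring moves based on cognitive and affective estimates."""
    if model.frustration > 0.7:
        return "encourage_and_simplify"  # regulate negative emotion first
    if model.knowledge < 0.4:
        return "give_hint"
    return "ask_deeper_question"

model = StudentModel()
question_keywords = {"force", "mass", "acceleration"}  # e.g., Newton's second law
answer = "acceleration depends on force and mass"

# One turn of the loop: assess the answer, sense affect, adapt the strategy.
model.knowledge = score_answer(answer, question_keywords)
model.frustration = sense_affect()
print(choose_strategy(model))
```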
“Most of the 20th-century systems
required humans to communicate with computers through windows, icons, menus, and
pointing devices,” says D’Mello, who specializes in human-computer interaction
and artificial intelligence in education.
“But humans have always
communicated with each other through speech and a host of nonverbal cues such
as facial expressions, eye contact, posture, and gesture. In addition to
enhancing the content of the message, the new technology provides information
regarding the cognitive states, motivation levels, and social dynamics of the
students.”
AutoTutor is an Intelligent
Tutoring System (ITS) that helps students learn complex
technical content in Newtonian physics, computer literacy, and critical thinking
by holding a conversation in natural language; simulating the teaching and
motivational strategies of human tutors; modeling students’ cognitive states;
using its student model to dynamically tailor the interaction to individual
students; answering students’ questions; identifying and correcting
misconceptions; and keeping students engaged with images, animations, and
simulations. In addition to these capabilities, Affective AutoTutor adds
emotion-sensitive capabilities by monitoring facial features, body language,
and conversational cues; regulating negative states such as frustration and
boredom; and synthesizing emotions via the content of its verbal responses,
speech intonation, and facial expressions of an animated teacher.
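One way to picture how those three channels could be combined is a weighted late-fusion scheme, sketched below. The channel weights and scores are invented for illustration; Affective AutoTutor’s real detectors are trained classifiers, not hand-set numbers.

```python
# Illustrative fusion of the three channels named above (facial features,
# body language, conversational cues). All values are assumptions.

CHANNEL_WEIGHTS = {"face": 0.5, "posture": 0.3, "dialogue": 0.2}

def fuse_affect(channel_scores: dict[str, dict[str, float]]) -> str:
    """Weighted late fusion: each channel scores each emotion, and the
    emotion with the highest weighted sum wins."""
    totals: dict[str, float] = {}
    for channel, scores in channel_scores.items():
        weight = CHANNEL_WEIGHTS.get(channel, 0.0)
        for emotion, score in scores.items():
            totals[emotion] = totals.get(emotion, 0.0) + weight * score
    return max(totals, key=totals.get)

# Example readings: the face detector leans toward frustration, posture
# suggests boredom, and dialogue cues are ambiguous.
readings = {
    "face":     {"frustration": 0.8, "boredom": 0.1, "engagement": 0.1},
    "posture":  {"frustration": 0.2, "boredom": 0.7, "engagement": 0.1},
    "dialogue": {"frustration": 0.4, "boredom": 0.4, "engagement": 0.2},
}
print(fuse_affect(readings))  # -> "frustration"
```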
D’Mello’s study, titled “AutoTutor and Affective AutoTutor: Learning by Talking with Cognitively and
Emotionally Intelligent Computers that Talk Back,” which details the new
technology, will be published in a special issue of ACM Transactions on Interactive Intelligent Systems highlighting innovative technology of
the last decade.
“Much like a gifted human tutor,
AutoTutor and Affective AutoTutor attempt to keep the student balanced between
the extremes of boredom and bewilderment by subtly modulating the pace,
direction, and complexity of the learning task,” D’Mello says.
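The balancing act D’Mello describes can be caricatured as a simple control rule: raise difficulty when boredom runs high, lower it when confusion does. The thresholds and step size below are arbitrary assumptions, not parameters from AutoTutor.

```python
# Toy version of the "balancing" idea: nudge task difficulty up when the
# student looks bored (under-challenged) and down when bewildered
# (over-challenged). All constants are illustrative assumptions.

def adjust_difficulty(difficulty: float, boredom: float, confusion: float,
                      step: float = 0.1) -> float:
    """Return a new difficulty in [0, 1] given affect estimates in [0, 1]."""
    if boredom > 0.6:       # under-challenged: raise complexity and pace
        difficulty += step
    elif confusion > 0.6:   # over-challenged: simplify and slow down
        difficulty -= step
    return min(1.0, max(0.0, difficulty))

d = 0.5
d = adjust_difficulty(d, boredom=0.8, confusion=0.2)  # bored -> harder task
print(round(d, 2))  # 0.6
```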
Considerable empirical evidence
has shown that one-on-one human tutoring is extremely effective when compared
to typical classroom environments, and AutoTutor and Affective AutoTutor
closely model the pedagogical styles, dialogue patterns, language, and gestures
of human tutors. They are also among the few ITSs that promote learning by
engaging students in natural-language dialogues that closely mirror human-human
tutorial dialogues.
In tests with more than 1,000
students, AutoTutor has produced learning gains of approximately one letter grade,
gains that outperform those of novice human tutors and approach those of
expert human tutors.