How does Robot Feedback Affect Participant Affinity and Trust?
Robot tutor design recommendations
Team: Kunal Bhuwalka, Emily Gong
(October 2018 - December 2018)
human-robot interaction research study, human-computer interaction, python programming
Robots have been used extensively in educational settings in a supporting role, especially for teaching programming. In this study, we investigate the role of robots as tutors. More specifically, we examine how a student's affinity toward and trust in a robot tutor are affected by the feedback the robot provides. Our results indicate no statistically significant differences in affinity or trust; however, we believe this is partly due to the small sample size. We also present design guidelines for robot tutors based on a qualitative analysis of survey data collected from 44 participants.
Design guidelines summary
Our qualitative analysis of responses from 44 participants yielded several insights into how people perceive robots in an authoritative, tutoring role. We hope these observations can guide the design of social tutoring robots.
The robot could benefit from:
Being introduced as an authority. 92% of participants trusted the computer screen more than the robot for the accuracy of test results.
Possessing anthropomorphic qualities. 53% of participants used “cute” to describe Cozmo in a positive context, and affinity scores were high (not accounting for possible CMU student bias).
Showing distinct, clear emotional reactions. 45% of participants remained unsure what type of reaction Cozmo was expressing, largely because they conflated the robot’s feedback with the computer screen’s.