Enhancing Teacher Communication Through Objective Facial Affect and Gaze Analysis
Participant - Ryan Scherber
The art and science of teaching music rely upon the ability of educators to convey both expressivity and intent. Because direct verbal instruction is often impossible during musical performance, the educator must rely upon facial affect, gesture, and gaze to elicit the intended behavioral and musical responses. However, many future teachers struggle to reconcile the disparity between their intended and displayed nonverbal signals (Scherber, Darrow, & Veronee, in press). To address this disparity, young educators are often asked to review video recordings of their teaching demonstrations and identify potential issues; yet without objective guidance or detailed analysis of affect, gesture, and gaze, they may struggle to understand how to improve.

In this study, I plan to supplement traditional instruction with Noldus FaceReader software. Using an Active Appearance Model algorithm (Cootes & Taylor, 2000), FaceReader analyzes roughly 500 key points on the face to determine affect classification (e.g., happy vs. sad), valence (a spectrum from positive to negative), and arousal (e.g., active vs. inactive). The objective, highly specific feedback provided by the software may help students better understand the content and improve their skills more quickly.

Because music teachers with strong nonverbal communication skills have been found to be more effective (Johnson, Darrow, & Eason, 2008), nonverbal communication remains an important element of the teacher education curriculum. I therefore believe that future music educators who receive consistent, objective, and specific feedback via FaceReader will be better prepared for their careers.
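To make the feedback loop concrete, the Python sketch below shows one way per-frame affect estimates exported from a FaceReader session might be distilled into session-level talking points for a pre-service teacher's video review. The file layout and column names ("Valence", "Arousal", "Dominant Emotion") are assumptions made for illustration, not FaceReader's documented export format.

# A minimal sketch of summarizing per-frame affect data into session-level
# feedback. The CSV column names below are assumptions for illustration;
# consult FaceReader's actual export options before use.
import csv
from collections import Counter
from statistics import mean

def summarize_session(csv_path: str) -> dict:
    """Aggregate per-frame affect estimates into a session-level summary."""
    valence, arousal, emotions = [], [], Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            valence.append(float(row["Valence"]))   # assumed range: -1 (negative) to +1 (positive)
            arousal.append(float(row["Arousal"]))   # assumed range: 0 (inactive) to 1 (active)
            emotions[row["Dominant Emotion"]] += 1  # e.g., "Happy", "Sad", "Neutral"
    return {
        "mean_valence": mean(valence),
        "mean_arousal": mean(arousal),
        "most_frequent_emotion": emotions.most_common(1)[0][0],
        "frames": len(valence),
    }

if __name__ == "__main__":
    report = summarize_session("teaching_demo_export.csv")  # hypothetical file name
    print(f"Mean valence:  {report['mean_valence']:+.2f}")
    print(f"Mean arousal:  {report['mean_arousal']:.2f}")
    print(f"Most frequent: {report['most_frequent_emotion']} "
          f"({report['frames']} frames analyzed)")

A summary like this could accompany the traditional video review, giving the student a single objective reference point (e.g., "your valence averaged slightly negative during the conducting segment") to compare against their intended expression.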