Emotion recognition and confidence ratings predicted by vocal stimulus type and acoustic parameters
Date Issued: 2019
Author(s):
DOI: 10.31234/osf.io/kqy2n
Abstract
Our speech expresses emotional meaning not only through words, but also through certain attributes of our voice, such as pitch or loudness. These prosodic attributes are well documented within the vocal emotion literature. However, there is considerable variability in the types of stimuli and procedures used to examine their influence on emotion recognition. In addition, the confidence we have in our assessments of another person's emotional state has been argued to strongly influence performance accuracy in emotion recognition tasks. Nevertheless, such associations have rarely been studied. We addressed this knowledge gap by examining the impact of vocal stimulus type and prosodic speech attributes on emotion recognition and on a person's confidence in a given response. We analyzed a total of 1,038 emotional expressions, spoken in an angry, disgusted, fearful, happy, neutral, sad, or surprised tone of voice, with respect to a baseline set of 13 prosodic acoustic parameters. Two classification procedures (linear discriminant analysis and random forest) established that these acoustic measures discriminated between the emotional categories well enough to permit accurate statistical classification. Logistic regression and linear models showed that emotion recognition and confidence judgments depended essentially on the stimulus material, as each could be predicted by a different constellation of acoustic features. Results also demonstrated that emotional expressions that were correctly identified elicited confident judgments. Together, these findings extend previous work by showing that vocal stimulus type and prosodic attributes of speech strongly influence both emotion recognition and listeners' confidence in a given response.
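For readers who want a concrete picture of the analysis pipeline sketched in the abstract, the snippet below reproduces its three steps on simulated data with scikit-learn: classifying emotion categories from acoustic parameters with linear discriminant analysis and a random forest, then relating the same features to recognition outcomes and confidence ratings via logistic and linear regression. All variable names, simulated values, and model settings are illustrative assumptions, not the authors' actual materials or code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated stand-in for the stimulus set described in the abstract:
# 1038 expressions, 13 acoustic parameters, 7 emotion categories.
n_stimuli, n_features, n_emotions = 1038, 13, 7
X = rng.normal(size=(n_stimuli, n_features))      # acoustic parameters
emotion = rng.integers(0, n_emotions, n_stimuli)  # intended emotion labels

# Step 1: check whether the acoustic parameters discriminate the emotion
# categories, using the two classifiers named in the abstract.
lda_acc = cross_val_score(LinearDiscriminantAnalysis(), X, emotion, cv=5).mean()
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf_acc = cross_val_score(rf, X, emotion, cv=5).mean()
print(f"LDA accuracy: {lda_acc:.2f}  random forest accuracy: {rf_acc:.2f}")

# Step 2: predict listeners' recognition (correct vs. incorrect) from the
# same acoustic features with a logistic regression.
correct = rng.integers(0, 2, n_stimuli)           # simulated recognition outcomes
recognition_model = LogisticRegression(max_iter=1000).fit(X, correct)
print("Recognition coefficients:", recognition_model.coef_.round(2))

# Step 3: relate confidence ratings (treated here as continuous scores)
# to the acoustic features with a linear model.
confidence = rng.normal(size=n_stimuli)           # simulated confidence ratings
confidence_model = LinearRegression().fit(X, confidence)
print("Confidence coefficients:", confidence_model.coef_.round(2))
```

On random data the classifiers will of course hover near chance; the point of the sketch is only the shape of the analysis, classify first, then regress recognition and confidence on the same feature set, not the study's actual results.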