Affiliation:
1. Institute of Liberal Arts and Science, Kanazawa University, Ishikawa, Japan
2. Faculty of Psychology, Doshisha University, Kyoto, Japan
Abstract
This research concerns three channels of emotional communication: voice, semantics, and facial expressions. Using speech in which the emotions conveyed by voice and semantics did not match, we investigated which modality is dominant and how the two interact with facial expressions. The stimuli were voices expressing anger, happiness, or sadness while saying, “I’m angry,” “I’m pleased,” or “I’m sad.” Each voice was accompanied by a facial image that expressed either the same emotion as the voice (voice = face condition), the same emotion as the semantics (semantic = face condition), or a morph of the emotions shown in the voice and semantics (morph condition). The phrases were spoken in the participants’ native language (Japanese), second language (English), and an unfamiliar language (Khmer). In Study 1, participants rated how much they agreed that the speaker expressed anger, happiness, and sadness; their attention was not controlled. In Study 2, participants were instructed to attend to either the voice or the semantics. The morph condition of Study 1 revealed semantic dominance for the native-language stimuli. The semantic = face and voice = face conditions in Studies 1 and 2 showed that, when the semantics were in an understandable language, an emotion expressed solely in semantics (while a different emotion was shown in the face and voice) had a greater impact on judgments of the speaker’s emotion than an emotion expressed solely in the voice.
Funder
Japan Society for the Promotion of Science