Author:
Huang Kuo-Liang, Duan Sheng-Feng, Lyu Xi
Abstract
New artificial intelligence products are gradually shifting toward voice interaction, and demand for intelligent products is expanding from communication to recognizing users' emotions and providing instantaneous feedback. At present, affective acoustic models are constructed through deep learning and abstracted into mathematical models, allowing computers to learn from data and equipping them with predictive abilities. Although this approach can yield accurate predictions, it lacks explanatory capability; an empirical study of the connection between acoustic features and psychology is urgently needed as a theoretical basis for adjusting model parameters. Accordingly, this study explores how seven major “acoustic features” and their physical characteristics differ during voice interaction with respect to the recognition and expression of “gender” and “emotional states of the pleasure-arousal-dominance (PAD) model.” In this study, 31 females and 31 males aged between 21 and 60 were recruited via stratified random sampling to record audio expressing different emotions. Parameter values of the acoustic features were then extracted using the Praat voice-analysis software, and the values were analyzed with a two-way mixed-design ANOVA in SPSS. Results show that the seven major acoustic features differ by gender and by emotional state of the PAD model, and that the magnitudes and rankings of these differences also vary. These conclusions lay a theoretical foundation for AI emotional voice interaction and address deep learning's current dilemma, stemming from its lack of explanatory power, in emotion recognition and in optimizing the parameters of emotional synthesis models.
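To illustrate what extracting an acoustic-feature parameter value means in practice, the sketch below estimates fundamental frequency (F0, perceived pitch) from a waveform by autocorrelation. This is a simplified, standard-library stand-in for illustration only; the study itself used Praat's pitch extraction, and the synthetic 200 Hz tone, sample rate, and F0 search range here are assumptions, not values from the paper.

```python
import math

def estimate_f0(samples, sample_rate, fmin=75.0, fmax=500.0):
    """Estimate fundamental frequency (F0) via autocorrelation.

    Searches lags corresponding to the plausible speech F0 range
    (fmin..fmax Hz) and returns the frequency of the strongest lag.
    """
    min_lag = int(sample_rate / fmax)
    max_lag = int(sample_rate / fmin)
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, max_lag + 1):
        # Correlate the signal with a copy of itself shifted by `lag`.
        corr = sum(samples[i] * samples[i - lag]
                   for i in range(lag, len(samples)))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sample_rate / best_lag

# Synthesize a 200 Hz tone (a hypothetical test signal, ~0.1 s at 16 kHz).
sr = 16000
tone = [math.sin(2 * math.pi * 200 * t / sr) for t in range(1600)]
print(round(estimate_f0(tone, sr)))  # prints 200
```

In a real pipeline, the same F0 estimate would be computed per analysis frame of a recorded utterance, and the frame values summarized (mean, range) into the per-speaker parameters that enter the ANOVA.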
Funder
Chongqing Municipal Education Commission
Cited by
10 articles.