
CLC number: TP391.4


Crosschecked: 2019-07-12


ORCID: Le-kai Zhang, http://orcid.org/0000-0002-8136-5882


Frontiers of Information Technology & Electronic Engineering  2019 Vol.20 No.7 P.964-974

http://doi.org/10.1631/FITEE.1800101


Using psychophysiological measures to recognize personal music emotional experience


Author(s):  Le-kai Zhang, Shou-qian Sun, Bai-xi Xing, Rui-ming Luo, Ke-jun Zhang

Affiliation(s):  College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China; more

Corresponding email(s):   zlkzhang@zju.edu.cn, ssq@zju.edu.cn, sisyxing@gmail.com, joeluo@zju.edu.cn

Key Words:  Music, Emotion recognition, Physiological signals, Wavelet transform


Le-kai Zhang, Shou-qian Sun, Bai-xi Xing, Rui-ming Luo, Ke-jun Zhang. Using psychophysiological measures to recognize personal music emotional experience[J]. Frontiers of Information Technology & Electronic Engineering, 2019, 20(7): 964-974.

@article{title="Using psychophysiological measures to recognize personal music emotional experience",
author="Le-kai Zhang, Shou-qian Sun, Bai-xi Xing, Rui-ming Luo, Ke-jun Zhang",
journal="Frontiers of Information Technology & Electronic Engineering",
volume="20",
number="7",
pages="964-974",
year="2019",
publisher="Zhejiang University Press & Springer",
doi="10.1631/FITEE.1800101"
}

%0 Journal Article
%T Using psychophysiological measures to recognize personal music emotional experience
%A Le-kai Zhang
%A Shou-qian Sun
%A Bai-xi Xing
%A Rui-ming Luo
%A Ke-jun Zhang
%J Frontiers of Information Technology & Electronic Engineering
%V 20
%N 7
%P 964-974
%@ 2095-9184
%D 2019
%I Zhejiang University Press & Springer
%R 10.1631/FITEE.1800101

TY - JOUR
T1 - Using psychophysiological measures to recognize personal music emotional experience
A1 - Le-kai Zhang
A1 - Shou-qian Sun
A1 - Bai-xi Xing
A1 - Rui-ming Luo
A1 - Ke-jun Zhang
JO - Frontiers of Information Technology & Electronic Engineering
VL - 20
IS - 7
SP - 964
EP - 974
SN - 2095-9184
Y1 - 2019
PB - Zhejiang University Press & Springer
DO - 10.1631/FITEE.1800101
ER -


Abstract: 
Music can trigger human emotion through a psychophysiological process, so psychophysiological characteristics offer a way to understand an individual's emotional experience of music. In this study, we explore a new method of personal music emotion recognition based on human physiological characteristics. First, we build a database of music emotion features and a database of physiological signals recorded during music listening, including electrodermal activity (EDA), photoplethysmogram (PPG), skin temperature (SKT), respiration (RSP), and pupil diameter (PD) variation. Then linear regression, ridge regression, support vector machines with three different kernels, decision trees, k-nearest neighbors, multi-layer perceptron, and Nu support vector regression (NuSVR) are used to recognize music emotions from a synthesis of music features and human physiological features. NuSVR outperforms the other methods, with correlation coefficients of 0.7347 for arousal and 0.7902 for valence, and mean squared errors of 0.02323 for arousal and 0.01485 for valence. Finally, we compare the different data sets and find that the set with all features (music features plus all physiological features) performs best in modeling, with correlation coefficients of 0.6499 for arousal and 0.7735 for valence, and mean squared errors of 0.02932 for arousal and 0.01576 for valence. The method provides an effective way to recognize personal music emotional experience and can be applied to personalized music recommendation.
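The regression step the abstract describes can be sketched with scikit-learn's NuSVR. This is a minimal illustration only: the random feature matrix below is a synthetic stand-in for the paper's fused music + physiological features, and the nu/C/kernel settings are default-style assumptions, not the authors' tuned parameters or data.

```python
# Sketch: predict a continuous emotion score (e.g., arousal) from a
# fused feature vector with NuSVR, then report the two metrics used
# in the abstract: correlation coefficient and mean squared error.
import numpy as np
from sklearn.svm import NuSVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))           # 200 excerpts x 30 fused features (synthetic)
w = rng.normal(size=30)
y = X @ w + 0.1 * rng.normal(size=200)   # synthetic "arousal" target
y = (y - y.min()) / (y.max() - y.min())  # scale to [0, 1] like valence-arousal ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = NuSVR(nu=0.5, C=1.0, kernel="rbf").fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]        # correlation coefficient
mse = mean_squared_error(y_te, pred)
print(f"r = {r:.4f}, MSE = {mse:.5f}")
```

In the paper, one such model is trained per affect dimension (arousal and valence); the same pipeline would simply be run twice with the two rating vectors as targets.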





Journal of Zhejiang University-SCIENCE, 38 Zheda Road, Hangzhou 310027, China
Tel: +86-571-87952783; E-mail: cjzhang@zju.edu.cn
Copyright © 2000 - 2024 Journal of Zhejiang University-SCIENCE