
CLC number: TP37

On-line Access: 2017-07-31

Received: 2016-01-27

Revision Accepted: 2016-05-22

Crosschecked: 2017-06-03


ORCID: Ping-ping Wu, http://orcid.org/0000-0001-7822-5208


Frontiers of Information Technology & Electronic Engineering  2017 Vol.18 No.7 P.955-967

http://doi.org/10.1631/FITEE.1600041


Spontaneous versus posed smile recognition via region-specific texture descriptor and geometric facial dynamics


Author(s):  Ping-ping Wu, Hong Liu, Xue-wu Zhang, Yuan Gao

Affiliation(s):  MOE Key Laboratory of Machine Perception, Peking University, Beijing 100871, China

Corresponding email(s):   pingpingwu@pku.edu.cn, hongliu@pku.edu.cn, zhangxuewu@sz.pku.edu.cn, yuan.gao@stu.uni-kiel.de

Key Words:  Facial landmark localization, Geometric feature, Appearance feature, Smile recognition


Ping-ping Wu, Hong Liu, Xue-wu Zhang, Yuan Gao. Spontaneous versus posed smile recognition via region-specific texture descriptor and geometric facial dynamics[J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(7): 955-967.



Abstract: 
As a typical biometric cue with great diversity, the smile is an influential signal in social interaction that reveals a person's emotional feeling and inner state. Spontaneous and posed smiles are initiated by different brain systems and differ in both morphology and dynamics. Distinguishing the two types of smiles remains challenging, because the discriminative changes to be captured are subtle and difficult to observe with the naked eye. Most previous work on spontaneous versus posed smile recognition concentrates on extracting geometric features, while appearance features are not fully exploited, leading to a loss of texture information. In this paper, we propose a region-specific texture descriptor that represents local pattern changes of different facial regions and compensates for the limitations of geometric features. The temporal phases of each facial region are divided by computing the intensity of that region itself, rather than the intensity of the mouth region alone. A mid-level fusion strategy based on support vector machines is employed to combine the two feature types. Experimental results show that both the proposed appearance representation and its combination with geometry-based facial dynamics achieve favorable performance on four baseline databases: BBC, SPOS, MMI, and UvA-NEMO.
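
The abstract notes that the temporal phases of each facial region are divided from that region's own intensity signal rather than from the mouth region alone. The following is a minimal, hypothetical sketch of such a phase split, assuming a per-frame intensity signal (e.g., mean landmark displacement of a region relative to a neutral first frame); the function name split_phases, the apex_ratio threshold, and the onset/apex/offset rule are illustrative assumptions, not the paper's exact criterion.

# Hypothetical sketch: split one facial region's per-frame intensity signal
# into onset / apex / offset phases. The thresholding rule is an assumption
# for illustration only, not the criterion used in the paper.
import numpy as np

def split_phases(intensity, apex_ratio=0.9):
    """Return (onset, apex, offset) frame indices for one facial region.

    `intensity` is a 1-D array, e.g., the mean landmark displacement of the
    region in each frame relative to the first (neutral) frame.
    """
    intensity = np.asarray(intensity, dtype=float)
    apex_frames = np.where(intensity >= apex_ratio * intensity.max())[0]
    onset = np.arange(0, apex_frames[0])                      # rising segment
    offset = np.arange(apex_frames[-1] + 1, intensity.size)   # decaying segment
    return onset, apex_frames, offset

# Toy example: a smile that rises, holds, and relaxes.
signal = np.concatenate([np.linspace(0, 1, 10),    # onset
                         np.ones(5),               # apex
                         np.linspace(1, 0.1, 8)])  # offset
onset, apex, offset = split_phases(signal)
print(len(onset), len(apex), len(offset))          # -> 9 7 7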

Spontaneous smile discrimination based on region-specific texture description and facial dynamics

Abstract: As a typical biometric signal with great diversity, the smile is highly influential in social interaction and reveals a person's emotional feeling and inner state. Spontaneous and posed smiles are initiated by different brain systems and differ in both morphology and dynamics. Distinguishing the two types of smiles remains challenging, because the subtle differences between them are hard to observe with the naked eye and still need to be captured automatically. Most existing studies extract geometric features of smiles, while appearance features are not fully exploited, leading to a loss of texture information. This paper proposes a region-specific texture description to represent local pattern changes of different facial regions, compensating for the limitations of geometric features. The temporal phases of each facial region are divided by computing the intensity of that region, rather than considering only the intensity of the mouth region. A mid-level fusion strategy based on support vector machines is used to combine the two feature types. Experimental results show that the proposed appearance representation and its combination with geometry-based facial dynamics achieve good performance on the four benchmark databases BBC, SPOS, MMI, and UvA-NEMO.

Key words: facial landmark localization; geometric features; appearance features; smile recognition
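
The abstract also describes a mid-level fusion strategy of support vector machines for combining geometric facial dynamics with the region-specific appearance features. Below is a minimal, hypothetical sketch of one common way such a fusion can be realized with scikit-learn (stacking out-of-fold SVM decision values and training a second SVM on them). The names X_geo, X_app, and decision_scores, the toy data, and the kernel choices are placeholders; the exact fusion scheme in the paper may differ.

# Hypothetical sketch of a mid-level SVM fusion of geometric and appearance
# features. Feature extraction (facial landmarks, region-specific texture
# descriptors) is assumed to have been done already.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 200                                   # number of smile videos (toy data)
X_geo = rng.normal(size=(n, 50))          # geometric facial-dynamics features
X_app = rng.normal(size=(n, 300))         # region-specific texture features
y = rng.integers(0, 2, size=n)            # 1 = spontaneous, 0 = posed

def decision_scores(X, y):
    """Out-of-fold SVM decision values for one feature type."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    return cross_val_predict(clf, X, y, cv=5, method="decision_function")

# First level: one SVM per feature type, producing continuous scores.
s_geo = decision_scores(X_geo, y)
s_app = decision_scores(X_app, y)

# Mid-level fusion: stack the two score streams and train a second SVM on them.
Z = np.column_stack([s_geo, s_app])
fusion = make_pipeline(StandardScaler(), SVC(kernel="linear"))
fusion.fit(Z, y)
print("fusion training accuracy:", fusion.score(Z, y))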


