Emotion recognition from thermal infrared images using deep Boltzmann machine

Shangfei WANG, Menghua HE, Zhen GAO, Shan HE, Qiang JI

Front. Comput. Sci. ›› 2014, Vol. 8 ›› Issue (4) : 609-618. DOI: 10.1007/s11704-014-3295-3
RESEARCH ARTICLE


Abstract

Facial expression and emotion recognition from thermal infrared images has attracted increasing attention in recent years. However, the features adopted in current work are either temperature statistical parameters extracted from facial regions of interest or hand-crafted features commonly used in the visible spectrum; to date, no image features have been designed specifically for thermal infrared images. In this paper, we propose using a deep Boltzmann machine to learn thermal features for emotion recognition from thermal infrared facial images. First, the face is located and normalized from the thermal infrared images. Then, a deep Boltzmann machine model composed of two layers is trained. After pre-training for feature learning, the parameters of the deep Boltzmann machine model are further fine-tuned for emotion recognition. Comparative experiments on the NVIE database demonstrate that our approach outperforms approaches using temperature statistical features or hand-crafted features borrowed from the visible domain. The features learned from the forehead, eye, and mouth regions are more effective for discriminating the valence dimension of emotion than those from other facial areas. In addition, our study shows that adding unlabeled data from another database during training can further improve feature-learning performance.
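The pre-training stage described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it uses greedy layer-wise RBM stacking with one-step contrastive divergence (CD-1) as a simplified stand-in for full mean-field DBM training, and all dimensions, learning rates, and the random stand-in data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0, lr=0.05):
        # Positive phase: hidden activations driven by the data.
        h0 = self.hidden_probs(v0)
        # Negative phase: one Gibbs step from sampled hidden units.
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h_sample)
        h1 = self.hidden_probs(v1)
        # Batch-averaged parameter updates.
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += lr * (v0 - v1).mean(axis=0)
        self.c += lr * (h0 - h1).mean(axis=0)

# Stand-in for binarized, normalized thermal face patches
# (200 samples of 64 pixels); real inputs would come from the
# face-localization and normalization step.
X = (rng.random((200, 64)) < 0.5).astype(float)

# Greedy layer-wise pre-training of a two-layer stack.
rbm1, rbm2 = RBM(64, 32), RBM(32, 16)
for _ in range(20):
    rbm1.cd1_step(X)
H1 = rbm1.hidden_probs(X)
for _ in range(20):
    rbm2.cd1_step(H1)

# Top-layer activations serve as the learned thermal features,
# which would then be fine-tuned with emotion labels.
features = rbm2.hidden_probs(H1)
```

In the paper's pipeline these pre-trained weights would then be fine-tuned discriminatively with emotion labels; the fine-tuning step is omitted here for brevity.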

Keywords

emotion recognition / thermal infrared images / deep Boltzmann machine

Cite this article

Shangfei WANG, Menghua HE, Zhen GAO, Shan HE, Qiang JI. Emotion recognition from thermal infrared images using deep Boltzmann machine. Front. Comput. Sci., 2014, 8(4): 609‒618 https://doi.org/10.1007/s11704-014-3295-3


RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg
