SFR-Net: sample-aware and feature refinement network for cross-domain micro-expression recognition

Jing Liu, Xinyu Ji, Mengmeng Wang

Optoelectronics Letters, 2023, 19(7): 437-442. DOI: 10.1007/s11801-023-3021-1

Abstract

Over the past several decades, micro-expression recognition (MER) has attracted growing attention from the scientific community. Because filming conditions vary from database to database, previous single-domain MER methods generally suffer a severe performance drop when applied to another database. To address this problem, a sample-aware and feature refinement network (SFR-Net) is proposed in this paper, which combines domain adaptation with deep metric learning to extract intrinsic micro-expression features for accurate recognition. With the help of decoders, siamese networks progressively refine shared features that are relevant to emotions, while private networks gradually capture exclusive features that are irrelevant to emotions. To further improve performance, a sample-aware loss is designed to constrain the feature distribution in the high-dimensional feature space. Experimental results show that the proposed algorithm effectively mitigates the diversity among different micro-expression databases and achieves better generalization performance than state-of-the-art methods.
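
The abstract outlines a shared/private architecture: a weight-shared (siamese) encoder refines emotion-relevant features with the help of decoders, private networks absorb emotion-irrelevant factors, and a sample-aware loss constrains the feature distribution. The paper's code is not available here, so the following PyTorch sketch is only a minimal illustration, under assumptions, of how such a shared/private split with a reconstruction decoder and a contrastive-style sample-aware term could be wired together. All names, layer sizes, the margin value, and the exact loss form are hypothetical, not the authors' implementation.

```python
# Minimal, assumption-laden sketch of a shared/private network with a
# reconstruction decoder and a contrastive-style "sample-aware" term.
# It is NOT the SFR-Net code; every dimension and loss weight is illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedPrivateNet(nn.Module):
    def __init__(self, in_dim=256, feat_dim=64, num_classes=3):
        super().__init__()
        # Shared (siamese) encoder: the same weights process source and target samples.
        self.shared = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Private encoders: one per domain, meant to absorb emotion-irrelevant factors.
        self.private_src = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.private_tgt = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Decoder reconstructs the input from shared + private features (feature refinement).
        self.decoder = nn.Linear(2 * feat_dim, in_dim)
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x, domain="src"):
        s = self.shared(x)
        p = self.private_src(x) if domain == "src" else self.private_tgt(x)
        recon = self.decoder(torch.cat([s, p], dim=1))
        return s, p, recon, self.classifier(s)

def sample_aware_loss(feats, labels, margin=1.0):
    """Illustrative stand-in: pull same-class features together, push
    different-class features apart by a margin (not the paper's exact loss)."""
    d = torch.cdist(feats, feats)                                # pairwise distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()   # same-class mask
    pos = (same * d).sum() / same.sum().clamp(min=1)
    neg = ((1 - same) * F.relu(margin - d)).sum() / (1 - same).sum().clamp(min=1)
    return pos + neg

# Toy usage on random source-domain data.
x = torch.randn(8, 256)
y = torch.randint(0, 3, (8,))
net = SharedPrivateNet()
s, p, recon, logits = net(x, domain="src")
loss = (F.cross_entropy(logits, y)       # emotion classification on shared features
        + F.mse_loss(recon, x)           # decoder-driven reconstruction/refinement
        + sample_aware_loss(s, y))       # constrain the feature distribution
loss.backward()
```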

Cite this article

Jing Liu, Xinyu Ji, Mengmeng Wang. SFR-Net: sample-aware and feature refinement network for cross-domain micro-expression recognition. Optoelectronics Letters, 2023, 19(7): 437-442. DOI: 10.1007/s11801-023-3021-1


