Mining Relation-aware Local Representations with Adapter for Fair Facial Expression Recognition
Jinglin ZHANG, Jing LI, Qiangchang WANG, Yanbo WANG, Shiyong WANG, Xinxin ZHANG, Yilong YIN
Facial expression datasets often suffer from class imbalance, leading to unfair predictions across expression classes. Previous deep-learning-based Facial Expression Recognition (FER) methods typically treat Convolutional Neural Networks (CNNs) as black boxes and extract local features only from the final layer. However, they neglect the semantic co-occurrence features (the co-occurring relational features among distinct local elements in a facial image) within the deep layers of CNNs, which can lead to overemphasizing inter-class similarity. Moreover, these approaches rely on biased performance evaluation metrics, such as overall or mean accuracy, potentially exacerbating unfairness in FER model predictions. To address these issues, we propose a novel approach called FairFER and introduce sample variance, combined with overall and mean accuracy, as evaluation metrics. FairFER mainly comprises a Global Co-occurrence relation Adapter (GCA) module and a Landmark-Aided Focus (LAF) module. Specifically, the GCA module is a residual adapter that employs a dual-path adaptive adjustment mechanism, enhancing the deeper layers of CNNs to extract semantic co-occurrence features and mitigating misclassifications caused by excessive focus on inter-class similarity. The LAF module leverages landmark information, together with our proposed top-k class activation mapping consistency and balanced classification constraints, to focus on critical semantic co-occurrence features. Extensive experiments confirm that FairFER achieves fair prediction across expression classes and state-of-the-art performance on three imbalanced FER benchmark datasets: RAF-DB, FERPlus, and AffectNet.
Facial expression recognition / Fair prediction across expression classes / Class imbalance / Global co-occurrence relation adapter / Landmark-aided focus
Higher Education Press 2026