TF-MEET: A Transferable Fusion Multi-Band Transformer for Cross-Session EEG Decoding

Qilong Yuan, Enze Shi, Di Zhu, Xiaoshan Zhang, Kui Zhao, Dingwen Zhang, Tianming Liu, Shu Zhang

CAAI Transactions on Intelligence Technology, 2025, Vol. 10, Issue 6: 1799-1812. DOI: 10.1049/cit2.70056

ORIGINAL RESEARCH

Abstract

Electroencephalography (EEG) is a widely used neuroimaging technique for decoding brain states. The Transformer has been gaining attention in EEG signal decoding due to its powerful ability to capture global features. However, relying solely on a single feature extracted by a traditional Transformer model is insufficient to address the domain-shift problem caused by the temporal variability and complexity of EEG signals. In this paper, we propose a novel Transferable Fusion Multi-band EEG Transformer (TF-MEET) to enhance the performance of cross-session EEG decoding. TF-MEET comprises three main parts: (1) the EEG signals are transformed into spatial images and band images; (2) an encoder extracts spatial features and band features from the two types of images, and comprehensive fused features are obtained through a weight-adaptive fusion module; (3) cross-session EEG decoding is achieved by aligning the joint distribution of features and categories across domains through multi-loss domain-adversarial training. Experimental results demonstrate that (1) TF-MEET outperforms other advanced transfer learning methods on two public EEG emotion recognition datasets, SEED and SEED_IV, achieving an accuracy of 91.68% on SEED and 76.21% on SEED_IV; (2) TF-MEET demonstrates the effectiveness of its transferable fusion module; and (3) TF-MEET can identify explainable activation areas in the brain. We show that TF-MEET captures comprehensive, transferable, and interpretable features from EEG signals and performs well in cross-session EEG decoding, which can promote the development of brain-computer interface systems.
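To make parts (2) and (3) of the pipeline concrete, below is a minimal PyTorch sketch of two mechanisms the abstract names: a weight-adaptive fusion of spatial and band features, and gradient reversal for domain-adversarial training. The module names, the feature dimension, and the softmax-gated fusion rule are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of weight-adaptive feature fusion and gradient
# reversal; names, dimensions, and the gating rule are assumptions.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the
    backward pass, pushing the encoder toward domain-invariant features
    while the domain discriminator learns to separate domains."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. x is reversed and scaled; lambd gets no gradient.
        return -ctx.lambd * grad_output, None


class WeightAdaptiveFusion(nn.Module):
    """Fuses spatial and band feature vectors with input-dependent weights:
    a linear gate produces one logit per stream, and a softmax turns the
    logits into fusion weights that sum to 1."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(2 * dim, 2)

    def forward(self, spatial_feat, band_feat):
        logits = self.gate(torch.cat([spatial_feat, band_feat], dim=-1))
        w = torch.softmax(logits, dim=-1)  # (batch, 2)
        return w[:, :1] * spatial_feat + w[:, 1:] * band_feat


# Usage: the fused feature feeds the emotion classifier directly, and the
# domain discriminator through gradient reversal.
dim, batch = 64, 8
fusion = WeightAdaptiveFusion(dim)
spatial_feat, band_feat = torch.randn(batch, dim), torch.randn(batch, dim)
fused = fusion(spatial_feat, band_feat)              # (8, 64)
to_domain_head = GradientReversal.apply(fused, 1.0)  # reversed gradients
```

The abstract's "multi-loss" training further aligns the joint distribution of features and categories across domains; the sketch shows only the adversarial core.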

Keywords

deep neural networks / electroencephalography / medical image processing / medical signal processing

Cite this article

Qilong Yuan, Enze Shi, Di Zhu, Xiaoshan Zhang, Kui Zhao, Dingwen Zhang, Tianming Liu, Shu Zhang. TF-MEET: A Transferable Fusion Multi-Band Transformer for Cross-Session EEG Decoding. CAAI Transactions on Intelligence Technology, 2025, 10(6): 1799-1812. DOI: 10.1049/cit2.70056


