TF-MEET: A Transferable Fusion Multi-Band Transformer for Cross-Session EEG Decoding
Qilong Yuan , Enze Shi , Di Zhu , Xiaoshan Zhang , Kui Zhao , Dingwen Zhang , Tianming Liu , Shu Zhang
CAAI Transactions on Intelligence Technology ›› 2025, Vol. 10 ›› Issue (6) : 1799 -1812.
Electroencephalography (EEG) is a widely used neuroimaging technique for decoding brain states. The transformer has been gaining attention in EEG signal decoding owing to its powerful ability to capture global features. However, relying solely on a single feature extracted by a traditional transformer model is insufficient to address the domain shift caused by the temporal variability and complexity of EEG signals. In this paper, we propose a novel Transferable Fusion Multi-band EEG Transformer (TF-MEET) to enhance the performance of cross-session decoding of EEG signals. TF-MEET comprises three parts: (1) the EEG signals are transformed into spatial images and band images; (2) an encoder extracts spatial features and band features from the two types of images, and comprehensive fusion features are obtained through a weight-adaptive fusion module; (3) cross-session EEG decoding is achieved by aligning the joint distribution of features and categories across domains through multi-loss domain adversarial training. Experimental results demonstrate that (1) TF-MEET outperforms other advanced transfer learning methods on two public EEG emotion recognition datasets, achieving an accuracy of 91.68% on SEED and 76.21% on SEED_IV; (2) the transferable fusion module is effective; (3) TF-MEET can identify explainable activation areas in the brain. We show that TF-MEET captures comprehensive, transferable, and interpretable features in EEG signals and performs well in cross-session EEG decoding, which can promote the development of brain-computer interface systems.
deep neural networks / electroencephalography / medical image processing / medical signal processing
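As a rough illustration of step (2) of the abstract, the weight-adaptive fusion of the spatial and band branches can be thought of as a learnable convex combination of the two feature vectors. The sketch below is a minimal assumption of ours, not the paper's actual module: the class name `AdaptiveFusion` and the one-scalar-logit-per-branch parameterization are hypothetical, and the logits would be trained jointly with the rest of the network rather than held fixed.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

class AdaptiveFusion:
    """Hypothetical weight-adaptive fusion of per-branch features.

    Holds one scalar logit per branch; a softmax turns the logits into
    convex fusion weights, so the fused feature is a weighted sum whose
    balance the training loop could adapt end-to-end.
    """
    def __init__(self, n_branches=2):
        self.logits = np.zeros(n_branches)  # trainable in a real model

    def __call__(self, *features):
        weights = softmax(self.logits)
        return sum(w * f for w, f in zip(weights, features))

# Toy usage: fuse a "spatial" and a "band" feature of dimension 4.
spatial = np.array([1.0, 0.0, 2.0, 1.0])
band = np.array([0.0, 2.0, 0.0, 1.0])
fusion = AdaptiveFusion()
fused = fusion(spatial, band)  # equal weights 0.5/0.5 at initialization
```

At initialization the two branches contribute equally; as the logits move during training, the softmax keeps the weights positive and summing to one, which makes the fused feature easy to interpret as a per-branch importance split.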
|