Secformer: Privacy-preserving atomic-level componentized transformer-like model with MPC
Chi Zhang, Tao Shen, Fenhua Bai, Kai Zeng, Xiaohui Zhang, Bin Cao
2026, Vol. 12, Issue (1): 86-100.
The global surge in Artificial Intelligence (AI) has been driven by the impressive performance of deep-learning models based on the Transformer architecture. However, the efficacy of such models depends increasingly on the volume and quality of training data. Data are often distributed across institutions and companies, making cross-organizational data transfer vulnerable to privacy breaches and subject to privacy laws and trade-secret regulations. These privacy and security concerns pose major challenges to collaborative training and inference in multi-source data environments, and they are particularly acute for Transformer models, whose complex internal computations become drastically less efficient once encrypted, ultimately threatening the models' practical applicability. We therefore introduce Secformer, an architecture specifically designed to protect the privacy of Transformer-like models. Secformer separates the encoder and decoder modules, enabling the decomposition of the computation flows in Transformer-like models and their efficient mapping to Multi-Party Computation (MPC) protocols. This design addresses privacy leakage during the collaborative computation of Transformer models. To prevent the performance degradation caused by encrypted attention modules, we propose a modular design strategy that optimizes high-level components by reconstructing low-level operators. We further analyze the security of Secformer's core components, presenting security definitions and formal proofs. Using atomic-level component designs as the basic building blocks for encoders and decoders, we construct a library of fundamental operators and core modules; these components can also serve as foundational operators for other Transformer-like models. Extensive experimental evaluations demonstrate that Secformer performs strongly while preserving privacy and adapts readily to Transformer-like models.
Privacy-preserving computation / Deep learning / Multi-party computation / Data sharing
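To make the operator-decomposition idea concrete, the sketch below illustrates two-party additive secret sharing over the ring Z_{2^64}, the kind of low-level MPC primitive onto which Transformer operators can be mapped. This is a hypothetical, minimal example: the function names (`share`, `linear_on_shares`) and the simplification that the weight matrix is public are our own illustrative assumptions, not Secformer's actual protocol, which the paper builds from a full library of atomic operators.

```python
import numpy as np

# Illustrative two-party additive secret sharing over the ring Z_{2^64}.
# Hypothetical sketch only -- not Secformer's actual protocol. Real MPC
# multiplication of two *secret* tensors needs Beaver triples or similar;
# here the weight matrix is public, so each party computes locally.

RING = 2 ** 64
rng = np.random.default_rng(0)

def share(x: np.ndarray):
    """Split an integer tensor into two additive shares: x = s0 + s1 (mod 2^64)."""
    s0 = rng.integers(0, RING, size=x.shape, dtype=np.uint64)
    s1 = x.astype(np.uint64) - s0  # uint64 arithmetic wraps mod 2^64
    return s0, s1

def reconstruct(s0: np.ndarray, s1: np.ndarray) -> np.ndarray:
    """Recombine the shares; addition wraps mod 2^64, recovering x."""
    return s0 + s1

def linear_on_shares(s0, s1, w_pub):
    """A linear layer with public weights maps onto MPC for free:
    (s0 + s1) @ W == s0 @ W + s1 @ W (mod 2^64), no communication."""
    return s0 @ w_pub, s1 @ w_pub

x = np.arange(4, dtype=np.uint64).reshape(1, 4)
w = 3 * np.eye(4, dtype=np.uint64)
a, b = share(x)
ya, yb = linear_on_shares(a, b, w)
assert np.array_equal(reconstruct(ya, yb), x @ w)
```

Linear operators distribute over shares and thus cost nothing beyond local computation; it is the non-linear pieces of attention (softmax, GELU) that are expensive under MPC, which is why the abstract's strategy of reconstructing low-level operators targets exactly those modules.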