Energy efficient spike transformer accelerator at the edge

Congpeng Du, Qi Wen, Zhiqiang Wei, Hao Zhang

Intelligent Marine Technology and Systems ›› 2024, Vol. 2 ›› Issue (1)

DOI: 10.1007/s44295-024-00040-5

Research Paper

Abstract

Large language models are widely used across various applications owing to their superior performance. However, their high computational cost makes deployment on edge devices challenging. Spiking neural networks (SNNs), with their power-efficient, event-driven binary operations, offer a promising alternative. Combining SNNs with transformers is expected to be an effective solution for edge computing. This study proposes an energy-efficient spike transformer accelerator for edge computing; the transformer is the basic building block of large language models, and the spike transformer combines the efficiency of SNNs with the performance of transformer models. The design achieves performance comparable to that of traditional transformers while maintaining the low power consumption characteristic of SNNs. To enhance hardware efficiency, a specialized computation engine and a novel datapath for the spike transformer are introduced. The proposed design is implemented on the Xilinx Zynq UltraScale+ ZCU102 device and demonstrates significant improvements in energy consumption over previous transformer accelerators; it even surpasses some recent binary transformer accelerators in efficiency. The implementation results confirm that the proposed spike transformer accelerator is a feasible solution for running transformer models on edge devices.
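
To make the computation pattern concrete, the following is a minimal sketch of a Spikformer-style spiking self-attention step in NumPy. It is an illustrative assumption, not the authors' accelerator design: the firing function, layer sizes, and scaling factor are placeholders. The point is that with binary spike queries, keys, and values, the attention matrix products reduce to AND-and-accumulate operations, which is what a dedicated, multiplier-free computation engine can exploit.

```python
import numpy as np

def lif_fire(membrane, threshold=1.0):
    """Heaviside firing of an integrate-and-fire style neuron (inference only,
    so surrogate gradients and membrane leak/reset are omitted)."""
    return (membrane >= threshold).astype(np.uint8)

def spiking_self_attention(x_spikes, wq, wk, wv, scale=0.125, threshold=1.0):
    """x_spikes: (T, N, D) binary spikes over T timesteps, N tokens, D features.
    Illustrative sketch; shapes and scale are assumptions, not the paper's values."""
    outputs = []
    for x_t in x_spikes:                       # process each timestep independently
        q = lif_fire(x_t @ wq, threshold)      # binary query spikes
        k = lif_fire(x_t @ wk, threshold)      # binary key spikes
        v = lif_fire(x_t @ wv, threshold)      # binary value spikes
        # With binary q and k, this product is AND + accumulate in hardware.
        attn = (q @ k.T).astype(np.float64) * scale
        outputs.append(lif_fire(attn @ v, threshold))
    return np.stack(outputs)

# Toy usage: 2 timesteps, 4 tokens, 8-dimensional embedding.
rng = np.random.default_rng(0)
x = (rng.random((2, 4, 8)) > 0.7).astype(np.uint8)
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(spiking_self_attention(x, wq, wk, wv).shape)  # (2, 4, 8)
```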

Keywords

Transformer accelerator / Spiking neural network / Spike transformer / FPGA / Edge computing

Cite this article

Congpeng Du, Qi Wen, Zhiqiang Wei, Hao Zhang. Energy efficient spike transformer accelerator at the edge. Intelligent Marine Technology and Systems, 2024, 2(1). DOI: 10.1007/s44295-024-00040-5

Funding

Natural Science Foundation of Shandong Province (ZR2023QF011)

Natural Science Foundation of Qingdao Municipality (23-2-1-114-zyyd-jch)

Department of Science and Technology of Shandong Province (YDZX2023061)
