Challenges and opportunities for memristors in modern AI computing paradigms

Shu Wang , Jia Wei , Lena Du , Cong Wang , Guozhong Zhao

Front. Phys. ›› 2026, Vol. 21 ›› Issue (3): 035401 DOI: 10.15302/frontphys.2026.035401

VIEW & PERSPECTIVE

Abstract

The emergence of transformer-based artificial intelligence (AI) models has had a profound impact on modern AI computing paradigms: the attention mechanisms in transformer models dynamically generate weights and require computations involving global parameters. These requirements pose unprecedented challenges for memristor performance in in-memory computing, demanding memristor arrays with high endurance, low write latency, low energy consumption, and tight device-to-device uniformity. This perspective focuses on the alignment between AI computational requirements and memristor specifications, highlighting recent breakthroughs in materials, mechanisms, and applications for next-generation in-memory computing. Through this analysis, our perspective provides critical insights into advancing memristor-based computing research toward practical AI applications. It also underscores key research priorities and the necessity of interdisciplinary collaboration to propel the future of AI innovation.


Keywords

memristors / artificial intelligence / in-memory computing

Cite this article

Shu Wang, Jia Wei, Lena Du, Cong Wang, Guozhong Zhao. Challenges and opportunities for memristors in modern AI computing paradigms. Front. Phys., 2026, 21(3): 035401 DOI:10.15302/frontphys.2026.035401


A memristor is a fundamental two-terminal non-linear passive circuit element that links charge and magnetic flux, featuring non-volatile resistance modulation dependent on charge flow [1]. By leveraging these properties, a resistive random-access memory (RRAM) array built from memristors in a crossbar architecture can realize highly parallelized multiply-accumulate (MAC) operations for in-memory computing and effectively circumvent the von Neumann bottleneck [2, 3]. Such an architecture enables energy-efficient acceleration of vector/matrix operations in AI computing [4] and has garnered extensive research interest from both academia and industry, as shown in Fig.1. RRAM chips for AI inference [5] and training [6] have been demonstrated in recent years, and TSMC recently unveiled a commercial-grade RRAM product, marking significant progress toward large-scale applications. With the exponential growth in the scale and complexity of AI models [7], RRAM technology offers a timely answer to their intensive energy and computing power demands. This underscores the critical need to identify and understand the alignment between AI computational requirements and memristor device specifications, a fundamental step toward developing next-generation memristors for modern AI computing paradigms.
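The crossbar MAC principle above can be sketched in a few lines of software: storing each weight as a cell conductance lets Ohm's law perform the multiplications and Kirchhoff's current law perform the accumulations, so an entire vector/matrix product completes in one read step. The following is a minimal model assuming an idealized linear device; the array sizes and values are illustrative, not taken from the article.

```python
# Idealized model of analog MAC on a memristor crossbar.
# Each cell stores a weight as a conductance G[i][j]; the input vector is
# applied as row voltages V[i]; the per-cell currents G[i][j] * V[i]
# (Ohm's law) sum along each column (Kirchhoff's current law), yielding
# all column outputs I[j] in a single parallel read step.

def crossbar_mac(G, V):
    """Column output currents: I[j] = sum_i G[i][j] * V[i]."""
    rows, cols = len(G), len(G[0])
    return [sum(G[i][j] * V[i] for i in range(rows)) for j in range(cols)]

# A 2x3 conductance array and a 2-element input (arbitrary units, chosen
# to be exactly representable in binary floating point).
G = [[0.5, 1.0, 0.25],
     [0.25, 0.5, 0.75]]
V = [1.0, 2.0]

print(crossbar_mac(G, V))  # -> [1.0, 2.0, 1.75]
```

In a physical array the same formula is corrupted by conductance variation and read noise, which is why device uniformity figures so prominently among the requirements discussed in this perspective.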
Convolutional neural networks (CNNs) have long served as a widely adopted AI model, used extensively in computer vision and image processing. With the emergence of the transformer model [10], AI has achieved unprecedented breakthroughs across diverse domains: models such as GPT-4 and Gemini demonstrate human-like reasoning over text, code, and images. Although the underlying computation remains centered on highly parallelized MAC operations, the attention mechanism of transformer blocks differs from CNN convolution in two fundamental respects (Fig.2), posing substantial challenges for in-memory computing systems and their memristor devices. First, the attention mechanism generates input-dependent, dynamically updated weights rather than the static weights of CNNs, which requires memristor arrays with exceptional endurance and ultra-low-latency, energy-efficient write operations to sustain frequent reconfiguration without performance degradation. Second, the global interaction among all sequence elements in the attention mechanism amplifies the impact of device variability and noise, demanding memristor arrays with precision and device-to-device uniformity far beyond what CNNs require in order to preserve the fidelity of the attention distribution. The need to address these dual challenges motivates extensive research into next-generation memristors, driving innovations in materials science, device mechanisms, and application paradigms.
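The contrast between static and input-dependent weights can be made concrete with a toy scaled dot-product attention; the softmax(QK^T/sqrt(d)) form follows Ref. [10], while the inputs are illustrative. The attention map changes whenever the input changes, so a memristor array holding it would need a full rewrite on every forward pass, whereas a CNN kernel is programmed once.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the row maximum for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention_weights(Q, K):
    """Row-wise softmax of Q K^T / sqrt(d): the weights depend on the input."""
    d = len(Q[0])
    scores = [[sum(q[t] * k[t] for t in range(d)) / math.sqrt(d) for k in K]
              for q in Q]
    return [softmax(row) for row in scores]

# Two slightly different 2-token inputs (d = 2) yield different attention
# maps; a static CNN kernel would be identical for both.
X1 = [[1.0, 0.0], [0.0, 1.0]]
X2 = [[2.0, 0.0], [0.0, 1.0]]
W1 = attention_weights(X1, X1)  # self-attention: Q = K = input
W2 = attention_weights(X2, X2)
print(W1[0])      # each row is a probability distribution summing to 1
print(W1 != W2)   # -> True: the weights are input-dependent
```

Because every element of a softmax row depends on every score in that row, a read error in a single cell shifts the entire attention distribution, which is why the fidelity requirement here is stricter than for a local convolution kernel.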
Recent publications in Frontiers of Physics highlight significant advances in memristor research, spanning materials, mechanisms, and applications. These works provide critical insights and solutions for optimizing device performance to meet the evolving demands of AI computing, as shown in Fig.3. In a typical sandwich-structured memristor, the insulating active layer largely determines the properties and performance of the device, making it a key research direction for novel memristors. Collectively, the research presented in Refs. [11–15] covers diverse material systems, including metal oxides, electronically functional ceramics, and transition metal dichalcogenides. These works demonstrate significant enhancements in memristive performance through systematic investigation of composition, doping strategies, and synthesis/fabrication methodologies. Moreover, reviews published in Refs. [16–18] provide in-depth analyses of perovskite-based and low-dimensional memristive systems, elucidating underlying mechanisms, optimization strategies, and application prospects, and offering comprehensive perspectives on memristor development. Beyond materials research, articles such as Ref. [19] delve deeper into resistive switching mechanisms, revealing the origins of memristive behavior and the factors that influence it. Meanwhile, reviews such as Ref. [20] extend the discussion to computing applications of memristors, demonstrating how different materials and device structures affect endurance, latency, energy consumption, and device uniformity. Together, this research in Frontiers of Physics advances memristor technology by optimizing materials and device performance, addressing critical AI computing needs such as energy efficiency and precision, and providing essential guidance for developing next-generation memristors that enable efficient in-memory computing for modern AI paradigms.
Although numerous synergistic advances in memristor technology have been reported, and industrial products have begun to appear, further breakthroughs are imperative to meet the escalating demands of advanced AI computing. Key research priorities include the development of robust switching mechanisms with enhanced stability and precision, along with design frameworks that bridge nanoscale device physics and system-level performance requirements. This perspective underscores the necessity of continued interdisciplinary collaboration among materials science, device physics, and AI system design to advance memristor-based in-memory computing and propel the next wave of AI innovation.

References

[1]

D. B. Strukov, G. S. Snider, D. R. Stewart, and R. S. Williams, The missing memristor found, Nature 453(7191), 80 (2008)

[2]

C. Li, Z. Wang, M. Rao, D. Belkin, W. Song, H. Jiang, P. Yan, Y. Li, P. Lin, M. Hu, N. Ge, J. P. Strachan, M. Barnell, Q. Wu, R. S. Williams, J. J. Yang, and Q. Xia, Long short-term memory networks in memristor crossbar arrays, Nat. Mach. Intell. 1(1), 49 (2019)

[3]

S. Li, M. E. Pam, Y. Li, L. Chen, Y. C. Chien, X. Fong, D. Chi, and K. W. Ang, Wafer-scale 2D hafnium diselenide based memristor crossbar array for energy-efficient neural network hardware, Adv. Mater. 34(25), 2103376 (2022)

[4]

A. Sebastian, M. Le Gallo, R. Khaddam-Aljameh, and E. Eleftheriou, Memory devices and applications for in-memory computing, Nat. Nanotechnol. 15(7), 529 (2020)

[5]

P. Yao, H. Wu, B. Gao, J. Tang, Q. Zhang, W. Zhang, J. J. Yang, and H. Qian, Fully hardware-implemented memristor convolutional neural network, Nature 577(7792), 641 (2020)

[6]

W. Zhang, P. Yao, B. Gao, Q. Liu, D. Wu, Q. Zhang, Y. Li, Q. Qin, J. Li, Z. Zhu, Y. Cai, D. Wu, J. Tang, H. Qian, Y. Wang, and H. Wu, Edge learning using a fully integrated neuro-inspired memristor chip, Science 381(6663), 1205 (2023)

[7]

G. Sastry, L. Heim, H. Belfield, M. Anderljung, M. Brundage, J. Hazell, C. O'Keefe, G. K. Hadfield, R. Ngo, K. Pilz, G. Gor, E. Bluemke, S. Shoker, J. Egan, R. F. Trager, S. Avin, A. Weller, Y. Bengio, and D. Coyle, Computing power and the governance of artificial intelligence, arXiv preprint (2024)

[8]

M. A. Zidan, J. P. Strachan, and W. D. Lu, The future of electronics based on memristive systems, Nat. Electron. 1(1), 22 (2018)

[9]

S. Wang, Z. Zhou, F. Yang, S. Chen, Q. Zhang, W. Xiong, Y. Qu, Z. Wang, C. Wang, and Q. Liu, All-atomristor logic gates, Nano Res. 16(1), 1688 (2023)

[10]

A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, Attention is all you need, in: Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, California, USA, Curran Associates Inc., 2017, pp. 6000–6010

[11]

J. Yang, Z. Jian, Z. Wang, J. Zhao, Z. Zhou, Y. Sun, M. Hao, L. Wang, P. Liu, J. Wang, Y. Pei, Z. Zhao, W. Wang, and X. Yan, HfAlO-based ferroelectric memristors for artificial synaptic plasticity, Front. Phys. (Beijing) 18(6), 63603 (2023)

[12]

S. Lee, J. Kim, and S. Kim, Self-aligned TiOx-based 3D vertical memristor for a high-density synaptic array, Front. Phys. (Beijing) 19(6), 63203 (2024)

[13]

X. Yan, X. Han, Z. Fang, Z. Zhao, Z. Zhang, J. Sun, Y. Shao, Y. Zhang, L. Wang, S. Sun, Z. Guo, X. Jia, Y. Zhang, Z. Guan, and T. Shi, Reconfigurable memristor based on SrTiO3 thin-film for neuromorphic computing, Front. Phys. (Beijing) 18(6), 63301 (2023)

[14]

X. Yan, Z. Zhang, Z. Guan, Z. Fang, Y. Zhang, J. Zhao, J. Sun, X. Han, J. Niu, L. Wang, X. Jia, Y. Shao, Z. Zhao, Z. Guo, and B. Bai, A high-speed true random number generator based on Ag/SiNx/n-Si memristor, Front. Phys. (Beijing) 19(1), 13202 (2024)

[15]

W. Chu, X. Zhou, Z. Wang, X. Fan, X. Guo, C. Li, J. Yue, F. Ouyang, J. Zhao, and Y. Zhou, Stable alkali halide vapor assisted chemical vapor deposition of 2D HfSe2 templates and controllable oxidation of its heterostructures, Front. Phys. (Beijing) 19(3), 33212 (2024)

[16]

W. Niu, G. Ding, Z. Jia, X. Q. Ma, J. Y. Zhao, K. Zhou, S. T. Han, C. C. Kuo, and Y. Zhou, Recent advances in memristors based on two-dimensional ferroelectric materials, Front. Phys. (Beijing) 19(1), 13402 (2024)

[17]

S. Liu, J. Zeng, Q. Chen, and G. Liu, Recent advances in halide perovskite memristors: From materials to applications, Front. Phys. (Beijing) 19(2), 23501 (2024)

[18]

Z. Zhou, F. Yang, S. Wang, L. Wang, X. Wang, C. Wang, Y. Xie, and Q. Liu, Emerging of two-dimensional materials in novel memristor, Front. Phys. (Beijing) 17(2), 23204 (2021)

[19]

W. Wang and G. Zhou, Moisture influence in emerging neuromorphic device, Front. Phys. (Beijing) 18(5), 53601 (2023)

[20]

H. Chen, X. G. Tang, Z. Shen, W. T. Guo, Q. J. Sun, Z. Tang, and Y. P. Jiang, Emerging memristors and applications in reservoir computing, Front. Phys. (Beijing) 19(1), 13401 (2024)

[21]

M. Naqi, M. S. Kang, N. Liu, T. Kim, S. Baek, A. Bala, C. Moon, J. Park, and S. Kim, Multilevel artificial electronic synaptic device of direct grown robust MoS2 based memristor array for in-memory deep neural network, npj 2D Mater. Appl. 6, 53 (2022)

[22]

J. J. Yang, D. B. Strukov, and D. R. Stewart, Memristive devices for computing, Nat. Nanotechnol. 8(1), 13 (2013)

[23]

M. Wang, S. Cai, C. Pan, C. Wang, X. Lian, Y. Zhuo, K. Xu, T. Cao, X. Pan, B. Wang, S. J. Liang, J. J. Yang, P. Wang, and F. Miao, Robust memristors based on layered two-dimensional materials, Nat. Electron. 1(2), 130 (2018)

[24]

P. M. Sheridan, F. Cai, C. Du, W. Ma, Z. Zhang, and W. D. Lu, Sparse coding with memristor networks, Nat. Nanotechnol. 12(8), 784 (2017)

RIGHTS & PERMISSIONS

Higher Education Press
