A memristor is a non-linear, two-terminal fundamental passive circuit element that links charge and magnetic flux, featuring non-volatile resistance modulation dependent on charge flow [1]. By leveraging these properties, a resistive random-access memory (RRAM) array built with memristors in a crossbar architecture can realize highly parallelized multiply-accumulate (MAC) in-memory computing and effectively circumvent the von Neumann bottleneck [2, 3]. Such an architecture enables energy-efficient acceleration of the vector/matrix operations at the heart of AI computing [4] and has garnered extensive research interest from both academia and industry, as shown in Fig. 1. RRAM chips for AI inference [5] and training [6] have been demonstrated successively in recent years, and TSMC recently unveiled its commercial-grade RRAM product, marking significant progress toward large-scale applications. With the exponential growth in the scale and complexity of AI models [7], RRAM technology offers a timely answer to their intensive demands for energy and computing power. This underscores the critical need to identify and understand the alignment between AI computational requirements and memristor device specifications, a fundamental step toward developing next-generation memristors for AI computing paradigms.
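The crossbar MAC operation described above follows directly from circuit laws: each memristor contributes a current proportional to its conductance and the applied row voltage (Ohm's law), and the currents summed on each column wire (Kirchhoff's current law) yield a full matrix-vector product in one analog step. A minimal NumPy sketch of this principle, with hypothetical conductance and voltage values:

```python
import numpy as np

# Hypothetical 4x3 crossbar: G[i, j] is the programmed memristor
# conductance (siemens) at the crossing of row i and column j.
G = np.array([[1.0e-6, 2.0e-6, 0.5e-6],
              [3.0e-6, 1.0e-6, 2.0e-6],
              [0.5e-6, 4.0e-6, 1.0e-6],
              [2.0e-6, 0.2e-6, 3.0e-6]])

# Input vector encoded as row voltages (volts).
V = np.array([0.1, 0.2, 0.05, 0.3])

# Each column current is sum_i V[i] * G[i, j]: Ohm's law per device,
# Kirchhoff's current law per column wire. The whole MAC happens
# in place, without shuttling weights to a separate processor.
I = V @ G  # column output currents (amperes)

print(I)
```

In a physical array the input vector is applied as voltage pulses and the column currents are digitized by ADCs; the matrix multiplication itself incurs no data movement, which is the source of the energy advantage noted above.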
Convolutional neural networks (CNNs) have long been a widely adopted class of AI models, used extensively in computer vision and image processing. With the emergence of the Transformer architecture [10], AI has made unprecedented breakthroughs across diverse domains; models such as GPT-4 and Gemini demonstrate human-like reasoning over text, code, and images. Although the underlying computation remains centered on highly parallelized MAC operations, the attention mechanism of Transformer blocks differs from CNN convolution in two fundamental respects (Fig. 2) and poses substantial challenges for in-memory computing systems and their underlying memristor devices. First, the attention mechanism generates input-dependent, dynamically updated weights rather than the static weights of CNNs, which necessitates memristor arrays with exceptional endurance and ultra-low-latency, energy-efficient write operations to sustain frequent reconfiguration without performance degradation. Second, the global interaction among all sequence elements in the attention mechanism amplifies the impact of device variability and noise, demanding memristor arrays with a precision and device-to-device uniformity far exceeding what CNNs require in order to preserve the fidelity of the attention distribution. The need to address these dual challenges motivates extensive research into next-generation memristors, driving innovations in materials science, device mechanisms, and application paradigms.
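The contrast drawn above can be made concrete with a toy single-head attention, sketched below under assumed shapes and random data. Only the projection matrices play the role of static CNN-like weights; the attention matrix itself is recomputed from every input sequence, which is precisely the quantity a memristor array would have to reprogram at inference time:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8   # embedding dimension (hypothetical)
n = 5   # sequence length (hypothetical)

# Static trainable parameters: fixed after training, like CNN kernels.
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))

def attention(X):
    """Single-head scaled dot-product attention for a sequence X (n x d)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(d)             # global pairwise interaction
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)        # row-wise softmax
    return A @ V, A

X1 = rng.standard_normal((n, d))
X2 = rng.standard_normal((n, d))
_, A1 = attention(X1)
_, A2 = attention(X2)

# The effective n x n "weights" A differ for every input sequence,
# so an array holding them needs frequent, fast, low-energy writes.
print(np.allclose(A1, A2))  # False: attention weights are input-dependent
```

The softmax normalization also illustrates the second challenge: because every row of A is a probability distribution over all positions, small conductance errors in any device perturb the entire distribution rather than one local output.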
Recent publications in Frontiers of Physics highlight significant advances in memristor research, spanning materials, mechanisms, and applications. They provide critical insights and solutions for optimizing device performance to meet the evolving demands of AI computing, as shown in Fig. 3. In a typical sandwich-structured memristor, the insulating active layer largely determines the properties and performance of the device, making it a key research direction for novel memristors. Collectively, the research presented in Refs. [11−15] covers diverse material systems, including metal oxides, electronically functional ceramics, and transition metal dichalcogenides; these works demonstrate significant enhancements in memristive performance through systematic investigation of composition, doping strategies, and synthesis/fabrication methodologies. Moreover, the reviews in Refs. [16−18] provide in-depth analyses of perovskite-based and low-dimensional memristive systems, elucidating underlying mechanisms, optimization strategies, and application prospects, and offering comprehensive perspectives on memristor development. Beyond materials research, articles such as Ref. [19] delve deeper into resistive switching mechanisms, revealing the origins of memristive behavior and the factors that influence it. Meanwhile, reviews such as Ref. [20] extend the discussion to computing applications of memristors, demonstrating how different materials and device structures affect endurance, latency, energy consumption, and device uniformity. Together, this research in Frontiers of Physics advances memristor technology by optimizing materials and device performance, addressing critical AI computing needs such as energy efficiency and precision, and providing essential guidance for developing next-generation memristors that enable efficient in-memory computing for modern AI paradigms.
Although numerous synergistic advances in memristor technology have been reported, and industrial products have even begun to appear, further breakthroughs are imperative to meet the escalating demands of advanced AI computing. Key research priorities include the development of robust switching mechanisms with enhanced stability and precision, along with design frameworks that bridge nanoscale device physics and system-level performance requirements. This perspective underscores the necessity of continued interdisciplinary collaboration among materials science, device physics, and AI system design to advance memristor-based in-memory computing and propel the next wave of AI innovation.