Deep learning in two-dimensional materials: Characterization, prediction, and design

Xinqin Meng, Chengbing Qin, Xilong Liang, Guofeng Zhang, Ruiyun Chen, Jianyong Hu, Zhichun Yang, Jianzhong Huo, Liantuan Xiao, Suotang Jia

Front. Phys., 2024, 19(5): 53601. DOI: 10.1007/s11467-024-1394-7

REVIEW ARTICLE

Abstract

Since the isolation of graphene, two-dimensional (2D) materials have attracted increasing interest because of their excellent chemical and physical properties, as well as promising applications. Nonetheless, certain challenges persist in their further development, particularly in the effective identification of diverse 2D materials, large-scale and high-precision characterization, and intelligent property prediction and design. These issues have mainly been addressed by computational techniques, such as density functional theory and molecular dynamics simulations, which demand powerful computational resources and long computation times. The booming deep learning methods of recent years offer innovative insights and tools to address these challenges. This review comprehensively outlines the current progress of deep learning within the realm of 2D materials. First, we will briefly introduce the basic concepts of deep learning and commonly used architectures, including convolutional neural networks, generative adversarial networks, and U-net models. Then, the characterization of 2D materials by deep learning methods will be discussed, covering defect and material identification as well as automatic thickness characterization. Third, the research progress in predicting the unique properties of 2D materials, involving electronic, mechanical, and thermodynamic features, will be evaluated succinctly. Fourth, current works on the inverse design of functional 2D materials will be presented. Finally, we will look forward to the application prospects and opportunities of deep learning in other aspects of 2D materials. This review may offer some guidance for understanding and employing novel 2D materials.

Keywords

deep learning / two-dimensional materials / materials identification / thickness characterization / prediction / inverse design / convolutional neural networks / generative adversarial networks


1 Introduction

Ever since the successful isolation of monolayer graphene by Geim et al. [1], 2D materials have emerged as focal points in physics, chemistry, and other disciplines. 2D materials typically exhibit outstanding optical [2-6], electronic [7-11], thermal [12, 13], magnetic [14-16], and mechanical [17, 18] properties, and thus they have significant application prospects in photovoltaics [19-23], light-emitting devices [24-28], electronic devices [29-35], energy storage [36-41], catalysts [42-47], biomedicine [48-54], and sensors [55-61]. At present, the research on 2D materials is in a stage of rapid development. A large number of new 2D materials, including noble-transition-metal dichalcogenides (PdSe2, PtSe2, PdS2, PtS2, etc.) [62-73], emerging single-element Xenes (tellurene, selenene, etc.) [74-78], and perovskite oxides [79-81], have been synthesized successively, greatly enriching the quantity and variety of 2D materials. According to recent high-throughput computations based on density functional theory (DFT) [82] by Mounet et al. [83], over 5000 layered materials may exist, meaning there is huge space for the exploration of 2D materials. In addition, stacking different 2D material layers into van der Waals heterostructures [84-88] not only provides new avenues for the development of novel materials but also opens the way for the exploration of a large number of new properties.

Although research on 2D materials has developed rapidly over the past two decades, it is also apparent that traditional experimental and computational methods struggle to meet the increasing development needs of 2D materials. High-precision, high-throughput, and high-efficiency experimental and computational techniques are necessary for gaining an in-depth understanding of 2D materials. However, conventional experimental methods face challenges of insufficient measurement precision and cumbersome processes. Similarly, traditional computational methods often involve a huge amount of computation and complex computational tasks, requiring substantial time and computational cost. For instance, first-principles DFT calculations demand highly optimized computational methods and powerful computational resources, and the computational cost increases rapidly with the number of atoms, rendering such calculations very expensive for large-scale material systems. In recent years, the vigorous advancement of deep learning has brought transformative impacts across various fields [89-93]. Deep learning has also been widely used in 2D materials, providing an effective solution to overcome the limitations of traditional experimental and computational methods. Deep learning [91, 94, 95] is a machine learning method based on multi-layered neural networks, which can automatically, efficiently, and accurately learn the features and patterns of data. In the realm of 2D materials, deep learning has been used to reveal the hidden complex relationships among material formation mechanisms, atomic structures, and material properties, becoming a powerful tool for exploring 2D materials. This review focuses on the applications of deep learning in the domain of 2D materials. First, the basic concepts of deep learning will be introduced, and several deep learning architectures commonly used in the 2D materials domain will be outlined. Then, we will present the applications of deep learning in 2D materials characterization, encompassing defect and material identification as well as thickness characterization. On this basis, the utilization of deep learning in predicting the physical and chemical properties of 2D materials and designing new 2D materials will be briefly given. Finally, the challenges and opportunities of deep learning in future research on 2D materials will be discussed.

2 Brief introduction to deep learning

Since the 1950s, artificial intelligence (AI) has been an important research field in computer science, aiming to develop computer systems capable of emulating human intelligent behavior. Machine learning [96], a significant branch of AI, has made substantial advancements in recent decades, leading to revolutionary progress across numerous application domains. Its primary objective is to use statistical learning techniques and optimization algorithms to train models, enabling them to learn patterns and rules from data for autonomous decision-making and prediction. As shown in Fig.1(a), the neural network [97-99], a subfield of machine learning, is a computational model inspired by biological systems. It aims to replicate the intricate structure and functionality of biological neural networks to achieve pattern recognition, classification, regression, and decision-making on non-linear data. Deep learning builds on the foundation of neural networks, using multiple layers of neurons to learn feature representations from large amounts of data automatically. It is mainly characterized by a deeper network structure and more powerful computational capabilities. Compared to traditional machine learning methods that require a manual feature extraction process, deep learning uses a hierarchical structure to automatically extract high-level abstract features from raw data. It has more robust data-driven capabilities that significantly improve classification, recognition, and prediction accuracy. Thus, deep learning is widely used in feature extraction and performance prediction of 2D materials. In this section, we will briefly introduce deep learning architectures commonly employed in the field of 2D materials research.

2.1 Convolutional neural networks

The convolutional neural network (CNN) [100] was initially proposed by Fukushima in 1988. LeCun et al. [101] pioneered the application of a gradient-based learning algorithm to CNN, achieving notable success in handwritten digit classification. Since then, CNN has been widely used in many fields. A CNN primarily comprises two components, a feature extractor and a classifier, as depicted in Fig.1(b). The feature extractor consists of several convolutional and pooling layers, which are mainly used to extract high-level features from the input data, enabling the subsequent classifier to achieve more accurate classification. The classifier consists of fully connected layers, receiving the high-level abstract features extracted by the feature extractor and mapping them to specific output classes. The convolutional layer is a fundamental component of the CNN and comprises multiple convolutional kernels. Convolutional kernels are weight matrices describing the importance of the input content to the output content at a particular location, with the purpose of capturing specific local patterns or features in the input. The parameters of the kernels are learned during the training process, which allows the network to adapt to different tasks. The operation of sliding a kernel across the entire input to generate a matrix is termed convolution. Following the convolution calculation by all convolutional kernels, an equal number of matrices, termed feature maps, are produced. These feature maps represent the features extracted by the convolutional layer. Pooling layers typically follow the convolutional layers and are used to reduce the spatial dimensions of the feature maps, thereby decreasing the number of parameters and computations. Finally, the fully connected layers map the feature maps obtained after convolution and pooling to specific output classes, yielding the final prediction.
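To make the feature-extractor/classifier structure above concrete, the following PyTorch sketch assembles a minimal, deliberately generic CNN: two convolution-ReLU-pooling stages followed by fully connected layers. The layer sizes, the 64 × 64 input resolution, and the three output classes are illustrative assumptions rather than any network from the cited literature.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """Minimal CNN: a convolutional feature extractor followed by a fully connected classifier."""
    def __init__(self, num_classes=3):
        super().__init__()
        # Feature extractor: convolution + ReLU + pooling, applied twice.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 16 learned 3x3 kernels -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # halve the spatial size
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # Classifier: fully connected layers mapping the pooled feature maps to class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),                 # assumes 64x64 input images
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

x = torch.randn(4, 3, 64, 64)        # a batch of four 64x64 RGB images
logits = TinyCNN()(x)                # shape: (4, num_classes)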

2.2 Generative adversarial networks

The generative adversarial network (GAN) is a deep learning model introduced by Goodfellow et al. in 2014 [102, 103]. As illustrated in Fig.1(c), a GAN consists of a generator G and a discriminator D. The generator is responsible for generating synthetic data samples that approximate the underlying distribution of the real data, while the discriminator is responsible for distinguishing between real and generated data. The generator usually takes random noise z as input and outputs a generated sample G(z) by mapping the noise to a new data space. The discriminator, on the other hand, receives both real samples from the actual dataset and generated samples from the generator as input. The output of the discriminator is a probability score indicating how likely the input is to be real data. The training process of a GAN can be summarized as an adversarial game between the generator and the discriminator. Unlike the training of a single network, GAN training employs an alternating iterative approach. During generator training, the discriminator remains fixed. The generator constantly produces new samples and adjusts its parameters based on the feedback from the discriminator, aiming to produce samples that closely resemble the real data. During discriminator training, the generator remains fixed. The discriminator refines its parameters by continuously judging real and generated samples, becoming more accurate at distinguishing between them. Through continuous iterations, the generator learns to produce samples increasingly similar to the real data, while the discriminator becomes better at telling the real and generated samples apart. The network reaches an optimal state when the discriminator cannot discern whether the data come from the real dataset or the generator.
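The alternating scheme described above can be written down in a few lines. The sketch below is a toy PyTorch example on low-dimensional data, with made-up network sizes and a placeholder "real" distribution; it is meant only to show how the two optimizers take turns, not to reproduce any model from the literature.

import torch
import torch.nn as nn

# Toy generator and discriminator (hypothetical sizes; image GANs use convolutional variants).
G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))                # noise z -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())   # sample -> P(real)
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 2) * 0.5 + 3.0          # stand-in for samples from the real data distribution
    z = torch.randn(32, 16)

    # --- Train the discriminator with the generator held fixed ---
    fake = G(z).detach()                           # detach so generator weights are not updated here
    loss_D = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- Train the generator with the discriminator held fixed ---
    loss_G = bce(D(G(z)), torch.ones(32, 1))       # generator tries to make D label its samples as real
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()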

2.3 U-net

The U-net was proposed by Ronneberger et al. [104] in 2015 for image segmentation, and its network architecture is shown in Fig.1(d). Generally, the U-net is composed of an encoder, a decoder, and skip connections. The encoder employs convolutional blocks, each followed by a pooling layer for downsampling, to extract feature representations at different hierarchical levels from the input image. Each encoder layer consists of two successive 3 × 3 convolutional layers, a rectified linear unit (ReLU) activation, and a 2 × 2 max pooling layer, ultimately reducing the image to a compact feature map. The purpose of the decoder is to semantically project the lower-resolution features obtained from the encoder into a higher-resolution pixel space, ultimately achieving accurate segmentation. The decoder first upsamples the feature map at each layer by a 2 × 2 up-convolution. Subsequently, skip connections concatenate the feature maps from the corresponding layers in the encoder and the decoder. This combination of encoder and decoder enables the model to retain both the global features acquired along the downsampling path and the local features learned along the upsampling path, thereby enhancing the precision of image segmentation. Following the integration of features, two consecutive 3 × 3 convolutional layers with ReLU activations are employed. In the last layer of the decoder, an additional 1 × 1 convolution is used to reduce the feature map to the desired number of channels and generate the segmented image. The U-net architecture features a distinct U-shaped pattern, facilitating the propagation of contextual information throughout the network. The architecture efficiently exploits contextual information from larger, overlapping regions, resulting in more accurate segmentation. U-net has made significant achievements in the field of medical image segmentation and is one of the classic models in the area of image segmentation.
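A compact illustration of these building blocks is sketched below in PyTorch: a single encoder level, a bottleneck, one up-convolution, a concatenating skip connection, and a final 1 × 1 convolution. The channel counts, single-level depth, and two output classes are simplifying assumptions; the original U-net uses four levels and unpadded convolutions.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two successive 3x3 convolutions, each followed by a ReLU, as in the original U-net.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class MiniUNet(nn.Module):
    """One-level U-net: encoder, bottleneck, and decoder with a single skip connection."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc = conv_block(1, 32)
        self.pool = nn.MaxPool2d(2)                        # 2x2 max pooling for downsampling
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)  # 2x2 up-convolution
        self.dec = conv_block(64, 32)                      # 64 channels after concatenating the skip
        self.head = nn.Conv2d(32, n_classes, 1)            # final 1x1 convolution

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.pool(e))
        d = self.up(b)
        d = torch.cat([e, d], dim=1)                       # skip connection: concatenate encoder features
        return self.head(self.dec(d))

mask_logits = MiniUNet()(torch.randn(1, 1, 128, 128))      # per-pixel class scores, shape (1, 2, 128, 128)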

3 Deep learning in the characterization of 2D materials

The structural information of 2D materials, including defects, thickness, and morphology, is of great significance for understanding their physical and chemical attributes, as well as for broadening related applications. Both experimental and theoretical approaches have been employed to characterize the structures of 2D materials. The experimental techniques include optical microscopy (OM) [105-107], transmission electron microscopy (TEM) [108, 109], scanning transmission electron microscopy (STEM) [110, 111], atomic force microscopy (AFM) [112-114], Raman spectroscopy [115-117], and reflection contrast spectroscopy [118, 119]. Theoretical methods comprise molecular dynamics (MD) simulations and DFT calculations. However, these traditional characterization methods have certain limitations, such as expensive computational resources and biases introduced by manual analysis. Combining deep learning with traditional characterization methods can potentially overcome their inherent limitations and address some bottlenecks. In this section, we will introduce the research progress of deep learning in defect identification, material identification, and thickness characterization of 2D materials.

3.1 Defects identification

Defects in 2D materials, including vacancies, doping, and edge defects, have a significant effect on the properties of the materials [120]. Precisely regulating defects in lattice structures can manipulate the electronic, mechanical, and magnetic properties of 2D materials. Hence, accurately identifying defects at the atomic level in 2D materials is desirable. In recent years, with the continuous advancement of high-resolution imaging and characterization techniques (e.g., AFM and TEM), it has become possible to obtain microscopic information about materials, such as the distribution and types of defects. These characterizations can also uncover the dynamics related to material properties at atomic-level spatial resolution and sub-second temporal resolution. Although the capacity for gathering material data with high spatiotemporal resolution has improved, the properties of 2D materials inferred from these advanced characterization images remain constrained by manual data analysis. Therefore, these high-quality data are often used only for qualitative research. Relying solely on manual analysis makes it difficult to promptly and precisely extract all the information from the images, resulting in a large amount of data being discarded and wasted. It is evident that the constraints of manual analysis hinder the in-depth application of advanced characterization techniques. Thus, there is a pressing need to automatically and intelligently extract more information concerning the dynamics and interactions of individual defects from high-quality images.

In 2017, Ziatdinov et al. [121] developed a deep neural network-based workflow for atomic-resolution STEM images. As illustrated in Fig.2(a), the network is based on a fully convolutional network (FCN). The network can effectively use the limited prior information about possible defect types to extract the coordinates of all atomic species in the image. These coordinates are then employed to identify various defect structures that were not in the training set. As shown in Fig.2(b), the trained network first outputs a probability map of each pixel belonging to an atom. Then, the map is thresholded at a specific value to produce a binary image. Finally, the Laplacian of Gaussian algorithm is applied to calculate the coordinates of the atomic centers in the image. It is worth noting that the model was trained using simulated image data, yet the trained model can effectively process experimental images. Furthermore, they combined deep learning, the Laplacian of Gaussian algorithm, and simple graph representations to extract relevant structural information, such as bond lengths and angles, and to classify defects based on chemical structures.
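The post-processing chain described above (thresholding the network's probability map and locating atomic centers with a Laplacian-of-Gaussian blob detector) can be approximated with scikit-image as in the sketch below. The synthetic probability map, the threshold value, and the sigma range are assumptions for illustration, not the parameters used in Ref. [121].

import numpy as np
from skimage.feature import blob_log

# prob_map: per-pixel probability that the pixel belongs to an atom, as output by the trained network.
# Here a synthetic map with two Gaussian "atoms" stands in for a real network output.
yy, xx = np.mgrid[0:128, 0:128]
prob_map = np.exp(-((xx - 40) ** 2 + (yy - 40) ** 2) / 18.0) \
         + np.exp(-((xx - 90) ** 2 + (yy - 70) ** 2) / 18.0)

binary = prob_map > 0.5                         # threshold the probability map at a chosen value

# Laplacian-of-Gaussian blob detection on the thresholded map returns (row, col, sigma) per blob;
# the (row, col) values are taken as the atomic-center coordinates.
blobs = blob_log(binary.astype(float), min_sigma=2, max_sigma=6, threshold=0.1)
atom_centers = blobs[:, :2]
print(atom_centers)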

Later, Madsen et al. [122] proposed a CNN-based method for recognizing local atomic structures in TEM images. This network demonstrated the capability to identify local atomic structures in TEM images and was likewise trained using simulated images. It also provides reliable predictions for experimental images. They conducted validation tests on atomic-resolution TEM images of single-layer defective graphene and metallic nanoparticles. The results demonstrated that the network can accurately identify atoms under diverse microscope conditions and is robust to variations in microscope parameters. Furthermore, this work suggests the potential of neural networks for classifying atomic column heights in TEM images. However, due to the large amount of high-quality experimental data required for validation, the feasibility of this approach has not yet been verified on experimental images. On the other hand, Maksov et al. [123] developed a deep learning framework based on dynamic STEM imaging. The aim was to identify and track the evolving changes in defect structures and the phase transition in Mo-doped layered WS2. The network is trained using only the first frame from the sequence of images obtained from STEM. The dataset preparation process is illustrated in Fig.2(c). A fast Fourier transform is initially applied to the original experimental image. Subsequently, a high-pass filter is used to eliminate nonperiodic components of the lattice. The resulting image then undergoes an inverse fast Fourier transform to obtain the periodic image. Following this processing, the original image is subtracted from the periodic image, revealing deviations from the ideal periodic lattice. Finally, the difference image is thresholded to identify the locations of individual defects, which serve as the ground truth image. The trained framework can extract thousands of lattice defects from the original STEM image sequence within seconds and can be extended to the remaining frames. The extracted defects were further grouped into five clusters by a Gaussian mixture model, as shown in Fig.2(d). The authors further analyzed the distribution and spatiotemporal trajectories of these defects and categorized the dynamic changes of defects into three types: weakly mobile trajectories, strong diffusion, and unrelated events. The results suggest that two defect types associated with Mo doping [Classes 1 and 3 shown in Fig.2(d)] can switch reversibly along their movement trajectories. These two defects were further classified into four subclasses, comprising (MoW + VS) complexes and MoW defects not coupled with S vacancies. The Markov matrices of these four subclasses indicate a possible coupling between MoW defects and S vacancies. Maksov et al. [123] proposed that this coupling may be attributed to two factors: the low diffusion barrier of S vacancies and the higher likelihood of S sublattice atoms being ejected during electron beam irradiation. This suggests the capture of S vacancies near the Mo dopant, resulting in a transition between MoW and (MoW + VS) defect types. Additionally, this work also investigated the diffusion characteristics of S vacancy defects [Class 5 shown in Fig.2(d)], revealing diffusion coefficients in the range of 3 × 10⁻⁴ nm²/s to 6 × 10⁻⁴ nm²/s.

It is evident that deep learning not only enables the identification of point defects in 2D materials and the analysis of defect diffusion coefficients but also provides insights into the transformation pathways and probabilities of defect complexes. In 2023, Yang et al. [124] introduced a dual-mode CNN platform named 2DIP-Net to classify defects in monolayer 2H-MoTe2. In the initial stage, they utilized the faster RCNN model to detect hexagonal cells and cropped the detected cells into unit cells. Subsequently, further segregating the unit cells into Te2/Mo column parts significantly enhanced the accuracy of defect classification. The proposed model achieved an accuracy of over 97.87% for the classification of various defects.
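The ground-truth preparation pipeline of Maksov et al. described above (FFT, high-pass filtering, inverse FFT, subtraction, and thresholding) can be sketched with NumPy as follows. The cutoff radius, the threshold, and the sign convention of the subtraction are illustrative assumptions; the actual filter parameters of Ref. [123] are not reproduced here.

import numpy as np

def defect_ground_truth(image, cutoff_frac=0.05, thresh=3.0):
    """Sketch of the FFT-based labeling pipeline described above (parameter values are illustrative):
    keep only the lattice-dominated frequencies, reconstruct a 'periodic' image, subtract it from the
    raw frame, and threshold the difference to mark candidate defect sites."""
    F = np.fft.fftshift(np.fft.fft2(image))                          # 1) fast Fourier transform
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    F_filtered = np.where(r > cutoff_frac * min(ny, nx), F, 0)       # 2) high-pass filter removes nonperiodic components
    periodic = np.real(np.fft.ifft2(np.fft.ifftshift(F_filtered)))   # 3) inverse FFT gives the periodic image
    diff = image - periodic                                          # 4) deviations from the periodic lattice
    return np.abs(diff) > thresh * diff.std()                        # 5) threshold to obtain the defect mask (ground truth)

mask = defect_ground_truth(np.random.rand(256, 256))                 # stand-in for the first experimental STEM frame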

In the quantitative study of doping and defects in 2D transition metal dichalcogenides (TMDs), STEM plays a pivotal role. Specifically, the annular dark-field (ADF) imaging mode of STEM can offer more detailed information, enabling quantitative analysis of minute structures. To achieve more efficient quantitative analysis of ADF images, Yang et al. [125] proposed an automated method based on U-net. This method enables reliable quantitative analysis of dopants and defects in TMDs with single-atom accuracy. The approach exhibits a measurement accuracy of up to 98% and a detection limit of 1 × 10¹² cm⁻². Regarding efficiency, the proposed model requires only 3 seconds for the quantitative analysis of a 1024 × 1024 pixel STEM image, whereas a skilled researcher needs approximately 1 hour for the same task. The model's efficiency is thus about 1200 times greater than that of current human analysis. Additionally, the automated method shows excellent performance in reducing noise in STEM images and efficiently processing a large number of images. Through this method, they further investigated the dynamic evolution of the structure of TMDs under electron beam irradiation. This work also reveals the spatial distribution, temporal variations, and electron beam irradiation tolerance of point defects and dopants in WSe2, MoS2, V-doped WSe2, and V-doped MoS2 under electron beam irradiation. Fig.2(e) presents the defect evolution of V-doped monolayer WSe2, revealing the dynamic response of vacancies under electron beam stimulus. It is worth noting that, in order to reduce the impact of high statistical noise on quantification accuracy in high-speed imaging, a CNN was employed to preprocess the images for denoising. This network reduces the statistical noise and enhances the atomic contrast. Compared to the raw images, the signal-to-noise ratio is improved by ~14.6 times. Thus, the denoising process significantly enhances the accuracy of the subsequent quantitative analysis for classifying and labeling atomic sites.

While the spatial resolution of STEM has reached the atomic level, the precision of individual atomic structure analysis is limited to the picometer scale. The local strains induced by point-defect substitution and the associated long-range strain fields operate at the sub-picometer level, which lies below the detection limit of STEM. Thus, conducting sub-picometer-level defect detection through high-resolution experimental characterization remains challenging. In 2020, Lee et al. [126] developed a deep learning model based on the FCN architecture to process STEM images for the localization and classification of single-atom defects in WSe2−2xTe2x. The atomic sites were classified into five categories: the chalcogen columns may hold two Se atoms (without defects), one or two Te substitutions, or one or two Se vacancies. Based on the different classes of defective images output by the model, they generated high signal-to-noise ratio class-averaged images of each defect type. Experimental results have shown that class-averaged images allow for sub-picometer precision measurements of atomic spacings and local strains, achieving a precision of up to 0.2 pm, which cannot be attained from individual images alone.
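The class-averaging idea, stacking many network-classified, aligned patches of the same defect type so that random noise averages out, can be illustrated with a few lines of NumPy. The synthetic patches and noise level below are placeholders rather than experimental data from Ref. [126]; the point is simply that averaging N aligned patches reduces the noise by roughly a factor of sqrt(N).

import numpy as np

# crops: N aligned image patches of the same defect class, as classified by the network
# (here random noise plus a fixed underlying pattern stands in for experimental data).
rng = np.random.default_rng(0)
pattern = rng.normal(size=(32, 32))
crops = pattern + rng.normal(scale=2.0, size=(500, 32, 32))

class_average = crops.mean(axis=0)            # averaging N patches suppresses the random noise

snr_single = np.abs(pattern).mean() / 2.0
snr_average = np.abs(pattern).mean() / (2.0 / np.sqrt(len(crops)))
print(f"SNR gain from class averaging: ~{snr_average / snr_single:.1f}x")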

These advancements in deep learning-based electron microscopy have facilitated automated analysis. However, models trained on simulated images exhibit poor performance when applied to low-quality experimental images, which may involve background noise, aberrations, contamination, poor contrast, and so on. To narrow the gap between simulation-based training and experimental testing, Chu et al. [127] thoroughly considered various interference factors, such as different noise levels, carbon contamination, and high-order aberrations, when constructing simulated STEM images. They trained two U2-net models using low-quality simulated and experimental Ti3C2Tx STEM images, effectively reducing the models' reliance on input image resolution. The trained models exhibited excellent performance in identifying experimental STEM images, achieving an overall accuracy of 96.8% in identifying vacancy defects and 99.4% in identifying single atoms. In addition to high recognition accuracy, the models also exhibit impressive identification efficiency, processing approximately 45 images per second. To summarize, deep learning models can perform efficient and precise automatic analysis of extensive datasets. Compared to traditional manual analysis, deep learning methods can reduce evaluation biases, information loss in feature extraction, and confidence bias in labeling. These methods significantly enhance the efficiency and accuracy of data analysis, consequently driving scientific research and applications in the field of 2D materials.

3.2 Material identification and thickness characterization

2D materials are typically prepared through methods such as chemical vapor deposition (CVD) or mechanical exfoliation. In the preparation process, 2D sheets of varying thickness are randomly deposited onto a substrate. However, 2D materials with different atomic layer thicknesses exhibit significant differences in optical, physical, chemical, thermal, and electrical properties. Therefore, accurately and efficiently identifying and characterizing the thickness of 2D materials is crucial for both scientific research and industrial applications. Various spectroscopic and microscopic techniques are commonly employed to characterize the atomic layer thickness of 2D materials, including Raman spectroscopy, photoluminescence spectroscopy, reflectance/transmittance spectroscopy, scanning tunneling microscopy, and OM. However, these characterization techniques have inherent limitations, and the lack of high-performance, large-scale characterization methods has consistently been a primary obstacle to the application of 2D materials. Based on the optical contrast between atomic layers and the substrate, OM provides a simple and cost-effective method for measuring the thickness of 2D materials. However, this technique, requiring manual operation and processing, is sensitive to substrate and illumination conditions and relies on calibrated illumination methods. The application of deep learning in image and visual recognition has become highly mature. Applying deep learning to OM enables the automatic extraction of detailed information from microscopy images, facilitating large-scale characterization of 2D materials. Furthermore, it is cost-effective and highly adaptable, requires no expensive experimental equipment, and demonstrates exceptional scalability.

In 2019, Wu et al. [128] improved the SegNet network for recognizing 2D materials, achieving an average segmentation accuracy of 97.17%. This result represents a significant improvement over the original SegNet network, whose average segmentation accuracy was 92.04%. Yu et al. [129] reported a neural network based on the U-net to identify the thickness of mechanically exfoliated MoS2 and graphene on SiO2/Si substrates; the processing pipeline is depicted in Fig.3(a). Under bright-field microscopy at 100× magnification, a set of 24 images of MoS2 and 30 images of graphene was acquired to serve as the training set. The MoS2 training set was expanded to 960 images by employing augmentation methods, including random cropping, flipping, rotating, and color alteration of the original images. Although the model was trained on a limited amount of real data, the U-net with a weighted loss achieved a cross-validation score and accuracy of around 70%−80%. The results demonstrated that the proposed model could successfully distinguish between monolayers and bilayers, making it suitable for an initial screening process. On the other hand, Han et al. [130] introduced an optical identification neural network (2DMOINet) for 2D materials based on an encoder−decoder architecture in 2020, as illustrated in Fig.3(c). They applied this network to 13 typical 2D materials, including mechanically exfoliated graphene, h-BN, 2H-phase semiconducting TMDs, 2H-phase metallic TMDs, 1T-phase TMDs, black phosphorus, metal trihalides, and quasi-one-dimensional crystals, as shown in Fig.3(b). The network can identify materials in OM images in real time and is robust to changes in brightness, contrast, white balance, and light-field inhomogeneity. Compared to traditional identification methods constrained by specific crystal types and imaging conditions, the deep learning-based 2DMOINet displays greater generality. Furthermore, the authors classified the thickness of materials into single-layer, 2‒6 layers, and greater than 6 layers, as depicted in Fig.3(d). The model achieved classification accuracies mostly above 70% for materials and thicknesses on the test dataset, with an average classification accuracy of 79.78%. The authors also compared two training methods for different identification tasks: using a pre-trained 2DMOINet as the initialization for a new task and starting from a random initialization. With the pre-training approach, they found that only 60 images of CVD-grown graphene were required to attain a global accuracy of 67%. In contrast, the random initialization method required 240 images to reach a comparable level. This result demonstrates that the pre-trained 2DMOINet can adapt to new optical identification problems with minimal additional training through transfer learning.
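Augmentation of the kind used to expand the small MoS2 training set above (random cropping, flipping, rotation, and color alteration) can be assembled from standard torchvision transforms, as in the sketch below. The crop size, jitter strengths, synthetic stand-in image, and the expansion factor of 40 are illustrative assumptions, not the settings of Ref. [129].

import numpy as np
import torchvision.transforms as T
from PIL import Image

# Augmentation pipeline mirroring the operations mentioned above (random cropping, flipping,
# rotation, and color alteration); the parameter values are illustrative.
augment = T.Compose([
    T.RandomCrop(512),
    T.RandomHorizontalFlip(),
    T.RandomVerticalFlip(),
    T.RandomRotation(degrees=90),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.02),
    T.ToTensor(),
])

# Stand-in for one bright-field OM image of an exfoliated flake (1024 x 1024 RGB).
image = Image.fromarray((np.random.rand(1024, 1024, 3) * 255).astype("uint8"))
augmented_batch = [augment(image) for _ in range(40)]   # e.g., 24 originals x 40 variants gives ~960 images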

Meanwhile, Masubuchi et al. [131] reported a neural network based on the instance segmentation model Mask-RCNN to identify the thickness of four 2D materials: graphene, h-BN, WTe2, and MoS2. They implemented the network on an automated optical microscope to search for 2D materials on SiO2/Si substrates and identify thicknesses in real time. They established three thickness classes: “monolayer” (1 layer), “few layers” (2−10 layers), and “thick layers” (10−40 layers). The output of the network comprises detection bounding boxes, class labels, confidence scores, and segmentation masks, as shown in Fig.3(i) and (j). In the training process, the weights of the network heads are initialized using pre-trained weights obtained on a large-scale object segmentation dataset (the MS-COCO dataset [132]), while the rest of the network weights are initialized randomly. To improve the generalization ability of the model, it is first pre-trained on a mixed dataset containing the four types of 2D materials. Then, the weights obtained from pre-training are used as the starting point for transfer learning on each material subset to accomplish thickness classification. The experimental results further showed that, compared to models pre-trained exclusively on MS-COCO, the model pre-trained on the mixed 2D-material training set exhibited a swifter reduction in test loss and attained a lower minimum loss value. In 2023, Mahjoubi et al. [133] proposed a deep learning method based on hierarchical deep convolutional neural networks to automatically identify and classify mechanically exfoliated graphene flakes on Si/SiO2 substrates. They employed AFM and Raman spectroscopy to characterize the thickness of the flakes in the captured optical images and generated pixel-wise thickness maps as the ground truth. The experimental results indicate the model's robustness to the background color, brightness, and resolution of microscopic images, which is attributed to the optimized adaptive gamma correction method used to enhance image quality before training. The model achieves a pixel classification accuracy of over 99%, with a minimum intersection over union (IoU) value exceeding 56% and a mean IoU (MIoU) of 59%.

In 2022, Zhang and colleagues [134] analyzed 16 semantic segmentation models that had performed well on public datasets and applied them to layer identification and segmentation for graphene and MoS2. The assessment of these models included their complexity, size, classification accuracy, and segmentation performance, evaluated with a range of metrics including giga floating-point operations (GFLOPs), the number of parameters (Params), accuracy, the Kappa coefficient, the Dice coefficient, and the mean intersection over union (MIoU). Based on the open-source dataset provided by Saito et al. [129], they randomly divided the labeled images into training and testing datasets in a ratio of 8:2. Inspired by PSPNet [135], they improved U2-net [136] by adding a pyramid pooling module to the encoder output of the first nested residual U-block, a modification aimed at fusing multi-scale and contextual information. The improved model is called 2DU2-net. Comparing the model outputs with the labeled input images shows that 2DU2-net gives the best segmentation results in distinguishing the detailed, dispersed, and edge regions of the 2D materials, exhibiting finer contour lines than the other models. 2DU2-net demonstrates an accuracy of 99.03%, a Kappa coefficient of 95.72%, a Dice coefficient of 96.97%, and an MIoU of 94.18%. Compared to U2-net, most metrics exhibit a significant improvement. Although U2-net exhibits some defects and errors in edge segmentation, it has the fewest parameters, and in terms of computation, compatibility, and training deployment it presents more advantages than the other models. Furthermore, the experimental results indicated that models with backbone networks tended to have larger GFLOPs and parameter counts. In contrast, lightweight networks based on encoder-decoder structures, such as 2DU2-net and U2-net, demonstrate superior performance and achieve a lightweight computational level in terms of computation, parameters, and inference speed.
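For reference, the pixel-wise metrics quoted above can be computed directly from a predicted mask and its ground truth. The sketch below implements the binary (two-class) case with NumPy; for multi-class segmentation the Dice and IoU values would be averaged over classes (giving the MIoU). The random masks are placeholders for real predictions and labels.

import numpy as np

def segmentation_metrics(pred, truth):
    """Binary pixel-wise metrics of the kind used above (accuracy, Dice, IoU, Cohen's kappa)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth); tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth); fn = np.sum(~pred & truth)
    n = tp + tn + fp + fn
    accuracy = (tp + tn) / n
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement for the kappa coefficient
    kappa = (accuracy - p_e) / (1 - p_e)
    return accuracy, dice, iou, kappa

pred = np.random.rand(256, 256) > 0.5          # stand-in for a predicted flake mask
truth = np.random.rand(256, 256) > 0.5         # stand-in for the labeled ground truth
print(segmentation_metrics(pred, truth))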

To solve the problem of image degradation caused by defocus, Dong et al. [137] improved the loss function of the GAN and proposed a microscopy image deblurring model. This model can restore the structure and color information of out-of-focus, low-quality microscopic images of CVD-grown MoS2. Furthermore, they utilized an optimized U-net for segmentation and layer identification on the restored images. As depicted in Fig.3(e), the experimental results indicate significantly improved segmentation accuracy on the restored images compared to the experimentally out-of-focus ones. In the same year, Zhu et al. [138] reported a pixel-based supervised artificial neural network (ANN) model, which utilized six color channels from OM images as input, namely red, green, and blue (RGB) as well as hue, saturation, and value (HSV), to distinguish and characterize various 2D materials. The model identified 8 types of monolayer and bilayer 2D materials across different imaging conditions with more than 90% accuracy, compared to an average accuracy of 49% for identification by trained researchers. In addition, as shown in Fig.3(f)‒(h), the model can identify the chemical compositions and interface distributions of CVD-grown MoS2/WS2 van der Waals heterostructures. Compared with Raman mapping, which takes several hours, this method dramatically improves the characterization efficiency. Combined with a ternary regression model, the model was also able to identify sulfur vacancy defect concentrations in CVD-grown MoS2. Compared to traditional optical microscopy based on RGB color information, multispectral imaging microscopy can acquire more abundant spectral information. In 2023, Dong et al. [139] developed a multispectral microscopy framework to characterize the thickness of ultra-thin atomic crystals. Similar to the previous study [137], the framework includes image acquisition, the restoration of out-of-focus images, and the segmentation of the restored images. The Dice coefficients of the well-trained model surpass 80% for all categories.
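The pixel-wise color-channel idea can be sketched as follows: each pixel of an OM image is converted to a six-component feature vector (RGB plus HSV) and passed through a small fully connected network. The layer sizes and the nine output classes are hypothetical; this is a schematic of the input representation described for Ref. [138], not the authors' model.

import numpy as np
import torch
import torch.nn as nn
from skimage.color import rgb2hsv

def pixel_features(rgb_image):
    """Stack RGB and HSV values into a 6-channel per-pixel feature vector."""
    hsv = rgb2hsv(rgb_image)
    feats = np.concatenate([rgb_image, hsv], axis=-1)      # shape (H, W, 6)
    return torch.tensor(feats.reshape(-1, 6), dtype=torch.float32)

# Small fully connected classifier mapping the six color channels to material/thickness classes
# (the layer sizes and number of classes here are illustrative).
classifier = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 9))

rgb = np.random.rand(64, 64, 3)                            # stand-in for a normalized OM image
per_pixel_logits = classifier(pixel_features(rgb))         # shape: (64*64, 9)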

4 Deep learning in the prediction of 2D materials

In contrast to three-dimensional bulk materials, 2D materials are composed of only one or a few layers of atoms. Their physical and chemical properties can be precisely tuned by compositional adjustments, defect engineering, surface doping, phase transition processes, thickness modulation, and chemical modification. Thus, 2D materials offer a vast space for exploration. Employing deep learning methods to predict the properties of 2D materials can significantly accelerate research progress and reduce research costs. Below, we will introduce the research progress in combining deep learning with computational simulation methods to predict the electronic, mechanical, and thermodynamic properties of 2D materials.

4.1 Electronic properties

The applications of 2D materials are closely related to their electronic properties, with the band gap playing a central role. For instance, graphene has excellent electrical conductivity but a zero bandgap, leading to poor switching behavior for circuit control. This shortcoming hinders the application of graphene in digital circuits, semiconductor devices, and optoelectronic devices. In contrast, MoS2, with a direct bandgap of about 1.89 eV, is an ideal 2D semiconductor. With a bandgap of about 5.9 eV, h-BN is a typical wide-bandgap semiconductor and is thus widely used as an insulating layer in semiconductor devices. Therefore, fast and accurate prediction of the bandgap of 2D materials holds great importance for their application across various fields. Combining deep learning with DFT computational methods can realize low-cost and high-precision bandgap prediction.

In 2019, Nemnes et al. [140] combined DFT calculations and ANNs to predict the bandgap of hybrid h-BN graphene. They generated 900 non-equivalent h-BN graphene structures, each containing 200 atoms of C, N, B, and H, with a fixed number of 34 hydrogen atoms in each system, as illustrated in Fig.4(a). The first ANN model was constructed with 166 input neurons, corresponding to the C, N, and B atoms in the structure. The second model considered the chemical neighborhoods of specific atom types, reducing the number of input neurons from 200 to 20: four neurons represent the proportions of C, B, N, and H atoms, while the remaining 16 neurons represent the normalized counts of quadruplet atom combinations (Xi, Y1, Y2, Y3), where Xi = C, B, or N and Y1, Y2, and Y3 are the three closest neighboring atoms of Xi. This design of the input layer makes the input independent of the structure size, and thus allows structures of different dimensions to be processed. Both trained ANN models demonstrated excellent performance in predicting bandgaps. The determination coefficients were 99.7% and 97.5% on the training set and 95% and 88.8% on the test set for the two models, respectively, as depicted in Fig.4(b) and (c). In 2020, Dong et al. [141] proposed three neural networks for predicting the bandgaps of hybrid h-BN graphene with randomly configured supercells. Different configurations generated by geometric methods, together with the corresponding bandgaps calculated by DFT, were used as the training dataset. Initially, a VGG16 convolutional network was constructed, incorporating 12 convolutional layers, a global-flatten layer, three fully connected layers, and an output layer. However, this network exhibited rapid saturation and subsequently degraded accuracy. They therefore introduced residual convolutional networks (RCN) and concatenate convolutional networks (CCN). The structure of RCN is similar to ResNet50, and concatenation blocks are likewise introduced into CCN. The DFT calculation results serve as true values for evaluating the bandgaps of the 4 × 4 supercell system predicted by the different models. Experimental results demonstrate that the three proposed models can predict the bandgap with a relative error of less than 10% in over 90% of cases. In contrast, the predictions of the support vector machine (SVM) [142] deviated significantly from the true values, exhibiting errors exceeding 20% in over half of the cases, as shown in Fig.4(d). Meanwhile, the predicted values of the three networks show a strong direct linear correlation with the true values, while the correlation between the SVM predictions and the true values is very weak, as shown in Fig.4(e).
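A size-independent input of the kind used in the second ANN model above (composition fractions plus normalized counts of quadruplets formed by an atom and its three nearest neighbors) could be assembled as in the following sketch. The neighbor lists, the particular enumeration of quadruplet types, and the tiny example structure are assumptions for illustration; Ref. [140] defines the 16 quadruplet combinations explicitly.

import numpy as np
from collections import Counter

def structure_descriptor(species, neighbor_lists, quadruplet_keys):
    """Size-independent descriptor in the spirit of the second ANN model described above:
    four composition fractions plus normalized counts of (atom, three-nearest-neighbor) quadruplets.
    The enumeration of quadruplet types (quadruplet_keys) is an assumption for illustration."""
    elements = ["C", "B", "N", "H"]
    fractions = [species.count(e) / len(species) for e in elements]

    quads = Counter()
    for atom, neighbors in zip(species, neighbor_lists):
        if atom in ("C", "B", "N"):                       # quadruplets are centered on C, B, or N
            quads[(atom, tuple(sorted(neighbors)))] += 1
    total = sum(quads.values()) or 1
    quad_counts = [quads[k] / total for k in quadruplet_keys]   # normalized quadruplet counts

    return np.array(fractions + quad_counts)              # 4 + len(quadruplet_keys) inputs to the ANN

# Tiny illustrative structure: species of each atom and its three nearest neighbors.
species = ["C", "C", "B", "N", "H", "H"]
neighbors = [["C", "B", "H"], ["C", "N", "H"], ["C", "N", "N"], ["B", "C", "C"], ["C"], ["C"]]
keys = [("C", ("B", "C", "H")), ("C", ("C", "H", "N")), ("B", ("C", "N", "N")), ("N", ("B", "C", "C"))]
print(structure_descriptor(species, neighbors, keys))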

In 2022, Ma et al. [143] employed a CNN to predict the formation energy of defective graphene. To precisely depict the structures and distributions of various defects, they proposed a descriptor in the form of a three-dimensional matrix constructed by a voxelization method, taking the chemical bonds between atoms as the description unit. The chemical construction parameter matrix involves a bond position matrix, a bond length matrix, and a bond angle matrix. The dataset covers a range of defect types, including single vacancies (SV), Stone–Wales defects (SW), and double vacancies (DV). During dataset preparation, as shown in Fig.4(f), to avoid interactions between different defects affecting the formation energy, it is necessary to ensure that the distance between defects in distinct cells is greater than 15 Å. Then, diverse types of defects are randomly selected and spliced to generate different structures. The corresponding formation energy can be approximated as the sum of the formation energies of the individual defects, where the formation energy of a single defect calculated by DFT serves as the true value. The prepared dataset is augmented by translation and rotation of the structure description matrix along a lattice axis. The trained model performs well in predicting the formation energy of defective graphene, with a coefficient of determination of 0.998 and a mean absolute error of 0.51 eV. Furthermore, the sensitivity of the model prediction accuracy to the defect type and interaction distance has also been investigated. Among the defect types, the double vacancy shows the largest prediction error, differing by 0.12 eV from the true value obtained by DFT calculations. As shown in Fig.4(g), for diverse defect distances, the model can accurately predict the total energy of different defect combinations, with a prediction error of less than 0.3 eV. Finally, they investigated the generalization ability of the model to unknown defects and extended the proposed three-layer descriptor to MoS2. The mean absolute error of the model predictions is 53 meV per 1000 atoms, a performance close to that for graphene, proving that the proposed multi-layer descriptor and CNN model have good generalization ability.

4.2 Mechanical properties

Unlike three-dimensional bulk materials, 2D materials demonstrate exceptional flexibility due to their atomic-level thickness. They find extensive applications in flexible electronic devices, where properties such as Young's modulus and fracture behavior hold significant importance, directly influencing the breadth of applications and the durability of 2D materials. In addition, 2D materials can withstand large mechanical strains, and strain engineering has been demonstrated theoretically and experimentally to modify the band structure, as well as the electronic and photoelectric properties, of 2D materials. Consequently, it is crucial to evaluate the mechanical properties of 2D materials. Experimentally, in situ microscopies such as AFM, scanning electron microscopy, and TEM are commonly employed to characterize the mechanical properties of 2D materials. The AFM nanoindentation test is the most widely used method. During the characterization, the AFM tip is pressed onto the 2D material surface, and a force-displacement curve is obtained by recording the displacement of the tip and the applied pressure. From the force-displacement curve, mechanical properties such as Young's modulus, tensile strength, fracture strength, fracture strain, and friction coefficient of 2D materials can be determined. However, the experimental process is cumbersome and repetitive. In addition, currently available nanoindentation devices face challenges in maintaining minimal penetration depth and may introduce unavoidable errors, and are thus unable to reveal the intrinsic mechanical properties. Beyond experimental methods, the mechanical properties can also be characterized by indentation and tensile calculations using DFT calculations and MD simulations, where the main obstacles are expensive computational resources and high time consumption. Employing deep learning to assist in predicting the mechanical properties of 2D materials can not only overcome the limitations of both direct experimental measurement and computational simulation but also effectively, economically, and accurately predict the mechanical properties of a large number of 2D materials.

In 2020, Dewapriya et al. [144] employed shallow and deep neural networks to predict the fracture stress of defective graphene under various conditions. The training data for the shallow network is partially obtained based on the analytical solutions of the Bailey durability criterion and the Arrhenius equation. In contrast, the dataset for training the deep CNN is obtained from MD simulations. According to the experimental results, the shallow network performs well when the number of training samples is limited. It can accurately predict the fracture stress of single-vacancy randomly distributed defective graphene. In comparison, deep networks require larger training samples, but can effectively solve more complex problems such as the effect of defect distribution on fracture stress in graphene. On the other hand, understanding crack propagation behavior is of great importance in science and industry, which is essential for the lifespan extension of industrial products. For this purpose, Hsu et al. [145] introduced a model based on convolutional long short-term memory (ConvLSTM) networks to predict the crack propagation path in crystalline Lennard−Jones materials. Based on this work, Lew et al. [146] applied the ConvLSTM model to predict the fracture behavior of graphene. They investigated the parameter calibration process for obtaining meaningful fracture predictions, aiming to attain fracture predictions consistent with MD simulations. In this work, when generating the dataset using MD simulations, the influence of graphene orientation on its fracture behavior was taken into consideration. Thus, the loading direction is set in the simulation from the armchair direction to the zigzag direction in increments of 10°. Each fracture path image with 160 × 120 pixels obtained from MD simulations was segmented into sequential input and output matrix pairs. In total, 5544 matrix pairs were generated as the training dataset. After the convolutional layers learned the geometric features, the long short-term memory layer focused on the sequential relationships along crack propagation. Finally, a dense layer was employed for classification, as shown in Fig.5(a). The effects of input and output widths on the prediction results are analyzed. Experimental results indicated that when the pixel width of the input matrix is increased to 32, the model has a sufficient amount of fracture history. Hence, the prediction results are most consistent with the MD simulation results. According to the analysis results and the length of the graphene armchair unit cell, the input and output matrices with widths of 32 pixels and 2 pixels, respectively, were employed in the experiment to calibrate the proposed model. The calibrated model demonstrates a prediction accuracy of up to 95% for the graphene crack path. The comparison between the predictions and the MD simulation results is illustrated in Fig.5(b). To prove the generalization ability of the model, the calibrated model was also applied to predict the crack paths of bicrystal graphene and arbitrarily oriented graphene. The predictions are closely aligned with the MD simulation results, proving the model has good generalization performance. In addition, the effect of point defects on the fracture path of graphene was also studied, and the comparison showed that when the defect size increases to 3.2 nm, there is a significant deviation in the crack path. This result is consistent with the defect tolerance threshold of nanocrystalline graphene in Ref. [147].
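The sequential formulation used in these fracture studies, in which convolutional layers summarize the local geometry of each strip of the image and a recurrent layer carries the crack history forward, can be approximated by the simplified CNN + LSTM sketch below in PyTorch. It is an analogue for illustration only: the published models use ConvLSTM layers, and the strip sizes, channel counts, and sequence length here are assumptions rather than the calibrated 32-pixel/2-pixel settings described above.

import torch
import torch.nn as nn

class CrackPathPredictor(nn.Module):
    """Simplified CNN + LSTM analogue of the ConvLSTM pipeline described above (not the authors' model):
    a convolutional encoder summarizes each strip of the fracture image, an LSTM carries the crack
    history along the propagation direction, and a dense head predicts the next strip."""
    def __init__(self, out_width=2 * 120):
        super().__init__()
        self.encoder = nn.Sequential(                 # per-strip geometric features
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten()
        )
        self.lstm = nn.LSTM(input_size=8 * 8 * 8, hidden_size=128, batch_first=True)
        self.head = nn.Linear(128, out_width)         # per-pixel crack/no-crack logits for the next strip

    def forward(self, strips):                        # strips: (batch, sequence, 1, height, width)
        b, s, c, h, w = strips.shape
        feats = self.encoder(strips.reshape(b * s, c, h, w)).reshape(b, s, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                  # predict the strip following the input sequence

# A batch of 4 samples, each a sequence of 5 input strips 120 pixels tall and 32 pixels wide.
pred = CrackPathPredictor()(torch.randn(4, 5, 1, 120, 32))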

In 2022, Yu et al. [148] improved the ConvLSTM model by introducing 2D convolutional long short-term memory layers to extract more spatial features. This improved model was employed to predict the fracture paths of graphene containing various defects. The work demonstrates that the improved model achieved a prediction accuracy of 99% for graphene with diverse orientations and 98% for graphene with different defects, showcasing outstanding generalizability and transferability. In the same year, Elapolu et al. [149] introduced a deep learning model based on a CNN and bidirectional recurrent neural networks (Bi-RNNs) for predicting crack propagation paths in polycrystalline graphene under tensile loading. The CNN is utilized to automatically extract crucial features such as grain orientation and grain boundary positions, while the Bi-RNNs propagate sequential information about crack positions and microstructure details, which is employed to forecast the crack propagation path. They employed algorithms proposed in previous studies [150-152] to generate polycrystalline graphene sheets with a size of 20 nm × 40 nm. The graphene sheets feature effective grain sizes spanning 3‒9 nm and grain orientation angles ranging from 0° to 60°, with grain boundaries composed of pentagon-heptagon defects. Subsequently, MD simulations were conducted to stretch 700 distinct sheets along the y-direction to prepare a dataset of polycrystalline graphene images, as shown in Fig.5(c). Fig.5(d) presents the comparison between the output of the model and the results of MD simulations. The fully grown crack length is used to evaluate the quality of the model predictions. Note that the crack lengths predicted by the model were basically consistent with the MD simulations. As the grain size increased, the difference in crack lengths between the MD simulations and the model output decreased, as illustrated in Fig.5(e). This improvement can be attributed to the sparser distribution of grain boundaries within the domain, resulting in reduced kinking along the crack propagation path. In addition, for grain orientations close to 30°, there are two possible zigzag directions of crack growth and the growth path is not unique, so the prediction accuracy of the model is relatively low.

Later, Shishir et al. [153] proposed a deep CNN model to extract the average grain size of polycrystalline graphene sheets and predict Young's modulus and fracture stress. The centroidal Voronoi tessellation method was used to generate initial structures of polycrystalline graphene sheets close to realistic materials. In total, 50 polycrystalline graphene sheets with varying average grain sizes were prepared. To account for the randomness in atomic structure and grain boundary orientation, each grain size was realized with 10 differently oriented atomic structures, resulting in 500 polycrystalline graphene sheets. Through data augmentation, the dataset contains a total of 2000 input images. Using MD simulations, uniaxial tensile simulations were conducted on the polycrystalline graphene sheets to obtain stress-strain curves, from which the average fracture stress and Young's modulus of the polycrystalline graphene were calculated. Fig.5(f) shows that the trained model accurately extracts the average grain size from the polycrystalline graphene images. The coefficient of determination on both the training and testing sets exceeds 0.98. The root mean square errors of Young's modulus and fracture stress are 21.573 GPa and 2.8101 GPa on the testing set, respectively, while the standard deviations of the two values range from 13.319 to 36.064 GPa and from 1.779 to 6.536 GPa, respectively, as depicted in Fig.5(g). It is evident that the errors predicted by the model fall within the standard deviation range of the MD simulations. These results prove that the model can accurately predict Young's modulus and fracture stress of polycrystalline graphene, thereby avoiding the time and economic cost required by MD simulations. In 2023, Shen et al. [154] utilized a deep convolutional neural network to predict Young's modulus and tensile strength of defective h-BN containing coexisting types of defects. The trained model demonstrates an R2 value of 0.986 for predicting Young's modulus and 0.894 for predicting tensile strength. The proposed model demonstrates high predictive accuracy, making it promising for assisting in the defect engineering design of h-BN. Additionally, the model performs better in predicting Young's modulus, which may be attributed to coupling mechanisms between defects and tensile strength that differ from those for Young's modulus. It may be possible to further improve the model's performance in predicting tensile strength by adjusting the network structure.

4.3 Thermodynamic properties

The thermodynamic properties of 2D materials are fundamental intrinsic characteristics for their applications. Thermal conductivity is a critical parameter in energy engineering and thermal management. In electronic and energy storage systems, materials with high thermal conductivity are required to enhance heat dissipation and curb overheating problems, aiming to reduce the demand for complex and expensive thermal management systems. Conversely, in thermal insulation and thermoelectric materials, materials with low thermal conductivity are more suitable for reducing energy loss and improving the efficiency of thermoelectric conversion. Hence, a precise evaluation of the thermal conductivity of 2D materials is a fundamental issue for matching target applications.

In 2018, Yang et al. [155] trained four supervised machine learning models (linear regression, polynomial regression, decision tree, and random forest) as well as four ANN models to predict thermal properties. Two of the ANN models possess one hidden layer containing 10 (ANN-10) or 20 neurons (ANN-20), while the remaining two feature two hidden layers with 10 (DNN-10-10) or 20 neurons (DNN-20-20) per layer. The trained models can predict the interfacial thermal resistance (R) between graphene and h-BN, given the system temperature, coupling strength, and tensile strain as input parameters. The prediction results of each model are shown in Fig.6(a)‒(h). The results indicate that the ANN models outperform most machine learning models, and all of them perform significantly better than the linear regression and polynomial regression models. Meanwhile, the prediction accuracy of ANN-10 and ANN-20 is comparable to that of the decision tree and random forest models. Among all the models, the two-layer deep neural networks demonstrate the best performance, with mean square errors of 0.055 and 0.045 × 10⁻⁷ K·m²·W⁻¹ for 10 and 20 neurons per layer, respectively. The minimum root mean square error among the machine learning models was 0.059 × 10⁻⁷ K·m²·W⁻¹.
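A regression network of the two-hidden-layer kind compared above (for example, DNN-20-20) maps the three inputs, temperature, coupling strength, and tensile strain, to a single interfacial thermal resistance value. The PyTorch sketch below shows the overall setup; the activation function, learning rate, epoch count, and the random placeholder data are assumptions, not the settings or MD data of Ref. [155].

import torch
import torch.nn as nn

# Two-hidden-layer network of the "DNN-20-20" kind described above, mapping
# (temperature, coupling strength, tensile strain) to interfacial thermal resistance R.
model = nn.Sequential(nn.Linear(3, 20), nn.ReLU(), nn.Linear(20, 20), nn.ReLU(), nn.Linear(20, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.rand(200, 3)           # normalized (T, coupling strength, strain) samples (placeholders)
y = torch.rand(200, 1)           # normalized interfacial thermal resistance values (placeholders)

for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)  # mean square error, the metric reported above
    loss.backward()
    optimizer.step()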

The high thermal conductivity of graphene limits its application in semiconductor devices. Previous works have shown that the thermal conductivity of porous graphene is much lower than that of pristine graphene, which suggests that the spatial distribution and density of holes play a crucial role in reducing thermal conductivity. However, as the number of pores increases, the design complexity grows dramatically, making it difficult to determine the hole distribution that maximizes or minimizes the thermal conductivity. In 2020, Wan et al. [156] predicted the thermal conductivity of porous graphene using a CNN, and subsequently applied this approach to efficiently search for the porous graphene structure with the lowest thermal conductivity in reverse design. The investigation employed a porous graphene structure with dimensions of 160 Å × 45 Å, as depicted in Fig.6(i). The central region of the structure is the porous area, with holes of about 8.8 Å in size, and the clusters of atoms in blue represent the candidate sites for hole formation. The dataset consists of gray-scale images of porous graphene with dimensions of 54 × 50 pixels, with the corresponding thermal conductivities obtained from MD simulations. The coefficient of determination of the trained model on the test set is 0.96, and the root mean square error is 1.09 W/(mK), close to that of the MD simulations themselves [0.74 W/(mK)], indicating that the model can accurately predict the thermal conductivity of porous graphene.
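
The surrogate in Ref. [156] is an image-to-scalar regressor: a 54 × 50 gray-scale image of the porous region in, a thermal conductivity out. A minimal, hypothetical PyTorch sketch of such a regressor is shown below; the layer sizes and pooling choices are illustrative and do not reproduce the architecture of the original work.

```python
import torch
import torch.nn as nn

class PorousGrapheneCNN(nn.Module):
    """Image-to-scalar regressor: 54 x 50 gray-scale fingerprint -> kappa."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 54 x 50 -> 27 x 25
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((6, 6)),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 6 * 6, 64), nn.ReLU(),
            nn.Linear(64, 1),                     # predicted kappa in W/(m K)
        )

    def forward(self, x):
        return self.head(self.features(x))

model = PorousGrapheneCNN()
dummy = torch.rand(8, 1, 54, 50)                  # a batch of fingerprints
print(model(dummy).shape)                         # torch.Size([8, 1])
```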

In 2021, Liu et al. [157] proposed a deep neural network capable of quickly predicting the thermal conductivity of piled graphene with various geometric parameters and sizes under external mechanical loading. The characterization of piled graphene is illustrated in Fig.6(j) and (k). First, the piled graphene is projected onto the xy plane and uniformly discretized into subregions, which are described by a 2D matrix. The total area of the stacked graphene (β) is then divided by the number of graphene sheets (gij) contained within each discrete subregion to obtain the physical-information pixel value (vij). Finally, the physical information of each pixel is represented as a gray-scale image, referred to as a fingerprint. These fingerprints, along with the corresponding thermal conductivity values obtained from MD simulations, serve as the training dataset. The final results demonstrate that the predictions of the model are consistent with the MD simulations: the coefficients of determination on the training, validation, and test sets are 0.9787, 0.935, and 0.925, and the root mean square errors are 0.3220, 0.6319, and 0.6024 W/(mK), respectively. These results show that the deep neural network, trained without overfitting, can predict the thermal conductivity of piled graphene with high accuracy. Furthermore, the authors constructed a comprehensive databank of piled graphene structures and the corresponding thermal conductivities obtained from MD simulations and deep neural network predictions. The databank stores key geometric characteristics of piled graphene, such as the design domain size (l × w), the number of graphene sheets (ns), and the total area of graphene (Ag), as shown in Fig.6(l), and can be searched for piled graphene structures with specific thermal conductivities, guiding their design.
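
The fingerprint construction described above can be pictured as projecting each graphene sheet onto the xy plane and recording, for every cell of a uniform grid, how many sheets cover it. The NumPy sketch below is a hypothetical simplification: sheets are idealized as rectangles, and the per-cell count gij is simply normalized to a gray-scale value, whereas Ref. [157] combines gij with the total graphene area β to define the pixel value vij.

```python
import numpy as np

def fingerprint(sheets, domain=(100.0, 50.0), grid=(50, 25)):
    """Discretize piled graphene into a gray-scale fingerprint.

    sheets : list of (xmin, xmax, ymin, ymax) rectangles, i.e. the xy-plane
             projection of each sheet (an idealization of the real geometry).
    Returns a grid whose entries count the sheets covering each subregion
    (g_ij), normalized to [0, 1] as an illustrative pixel value.
    """
    nx, ny = grid
    lx, ly = domain
    xs = (np.arange(nx) + 0.5) * lx / nx      # cell-center x coordinates
    ys = (np.arange(ny) + 0.5) * ly / ny      # cell-center y coordinates
    g = np.zeros((nx, ny))
    for (x0, x1, y0, y1) in sheets:
        cover_x = (xs >= x0) & (xs <= x1)
        cover_y = (ys >= y0) & (ys <= y1)
        g += np.outer(cover_x, cover_y)       # add 1 to every covered cell
    return g / max(g.max(), 1.0)

# Two overlapping sheets on a 100 x 50 design domain
img = fingerprint([(10, 60, 5, 45), (40, 90, 10, 40)])
print(img.shape, img.max())                   # (50, 25) 1.0
```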

5 Deep learning in the design of 2D materials

Materials can be defined by the type and number of constituent atoms, the stoichiometric or non-stoichiometric ratios of elements, and structural characteristics such as crystallography, nanostructure, and microstructure. Owing to their varying atomic compositions, stoichiometries, and structures, diverse materials exhibit distinct optical, mechanical, and electronic properties. Traditional material design is a forward mapping process from material parameters to target properties. The forward design of new materials typically involves a series of steps, including molecular design, performance prediction, chemical synthesis, and experimental evaluation. These steps require a large number of experiments and simulations, resulting in huge time and resource consumption as well as high trial-and-error costs. High-throughput computational screening [158-162] alleviates these challenges by employing first-principles calculations [163]. It operates within a virtual chemical library constructed through combinatorial enumeration, facilitating the efficient screening of potential candidates for subsequent chemical synthesis, and thus greatly enhances the efficiency of material design and discovery. However, when tackling large-scale problems, the immense size of the chemical search space leads to exponential growth in time and computational cost. Moreover, constructing a virtual chemical library [164-167] relies heavily on the experience and intuition of materials scientists. Conversely, reverse design starts from a material possessing specific desired functionalities and works backward to deduce its chemical composition and structure. This process, which determines the optimal material design parameters from target properties, is a reverse mapping from target performance to design parameters. In recent years, the increasing amount of experimental and simulation data has made data-driven deep learning the fourth paradigm of materials science, as shown in Fig.7(a) [168]. Notably, deep generative models have found widespread application in reverse design. They acquire material design knowledge and principles hidden within high-dimensional data and generate new materials with specific functionalities, a process that no longer depends on the experience or intuition of researchers. Among these models, variational autoencoders (VAE), GAN, and reinforcement learning have achieved noteworthy advances in molecular design, facilitating the design of material structures based on the desired material performance. Below, we present the latest advances in utilizing deep learning to design functional 2D materials.

In 2020, Dong et al. [169] introduced a reverse design framework utilizing a regressional and conditional generative adversarial network (RCGAN) to generate hybrid 2D structures of graphene and h-BN with specific bandgap values, as depicted in Fig.7(b). The conventional GAN faces challenges in generating data conditioned on continuous, quantitative labels; the proposed RCGAN overcomes this limitation by incorporating a supervised regression network. This work employed DFT calculations to obtain bandgap values for various graphene and h-BN composite structures and established the training dataset from the calculated results. Taking a 4 × 4 supercell structure as an example, as shown in Fig.7(c) and (d), the findings indicate a favorable linear correlation between the desired bandgap and the bandgap of the structures generated by the trained model, with a correlation factor of 0.87. Of the generated structures, 64% exhibit a relative bandgap error within 10%, and the mean absolute error is 9.45%. These outcomes underscore the high prediction precision of the model. Additionally, this work also evaluated the diversity of the generated structures. For 4 × 4 and 5 × 5 supercells, structures generated by the model that were equivalent to real structures in the training set accounted for only 12% and 0.5% of all generated structures, respectively, and among the generated 6 × 6 supercell structures there were no instances of equivalence with real structures. These findings suggest that RCGAN was successfully trained without common training issues such as mode collapse.
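
Conceptually, an RCGAN-style setup conditions both the generator and the discriminator on a continuous bandgap label, so that a structure can be requested for a specific gap. The PyTorch sketch below shows only this conditioning mechanism over a flattened 4 × 4 occupancy lattice; the layer sizes, the encoding of the structure, and the omitted regression network and training loop are all assumptions, not the implementation of Ref. [169].

```python
import torch
import torch.nn as nn

LATTICE = 4 * 4   # 4 x 4 supercell encoded as a flat site-occupancy vector
NOISE = 16        # dimension of the random input to the generator

class Generator(nn.Module):
    """Maps (noise, target bandgap) to a candidate occupancy pattern in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE + 1, 64), nn.ReLU(),
            nn.Linear(64, LATTICE), nn.Sigmoid(),
        )

    def forward(self, z, bandgap):
        return self.net(torch.cat([z, bandgap], dim=1))

class Discriminator(nn.Module):
    """Scores (structure, bandgap label) pairs as real or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATTICE + 1, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, lattice, bandgap):
        return self.net(torch.cat([lattice, bandgap], dim=1))

g, d = Generator(), Discriminator()
z = torch.randn(8, NOISE)
target_gap = torch.full((8, 1), 1.5)      # request structures with a 1.5 eV gap
candidates = g(z, target_gap)
print(d(candidates, target_gap).shape)    # torch.Size([8, 1])
```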

While VAE and GAN have made significant progress in reverse design, they are susceptible to mode collapse during training, which may lead to failures. The intrinsic reversibility of invertible neural networks (INN) confers potential advantages in stability and performance, partly alleviating the mode collapse problem encountered when training VAE and GAN. In 2021, Fung et al. [170] introduced an INN-based reverse design framework named Materials Design with Invertible Neural Networks (MatDesINNe). This framework enables thorough and efficient sampling of the entire design space, facilitating both forward and reverse mappings between material parameters and target properties, and thereby generates material candidates with the desired property. They applied the framework to the bandgap engineering of monolayer MoS2. Within a design space of applied strains and external electric fields, the framework generated new material candidates with high fidelity, accuracy, and diversity. The applied strain was characterized by variations in the equilibrium lattice constants (a, b, c, α, β, γ), and an additional dimension, the electric field perpendicular to the monolayer, was added to the design space. The lattice parameters were sampled within ±20% of their equilibrium values, and the applied electric field was sampled from ‒1 V/Å to 1 V/Å. The entire design parameter space is depicted in Fig.7(e). Sampling across the whole design space generated a total of 11 000 samples for the dataset. For target bandgaps of 0 eV, 0.5 eV, and 1 eV, with the DFT-computed bandgap values as the ground truth, the study conducted tests on 10 000 samples and compared the proposed model with a mixture density network (MDN) and a conditional variational autoencoder (cVAE). For the target bandgap of 0 eV, all models performed well, since the majority of samples have zero bandgap. For the non-zero target bandgaps of 0.5 eV and 1 eV, the mean absolute errors of MDN and cVAE increased significantly, reaching 0.421 eV and 0.461 eV for the 0.5 eV target and 0.840 eV and 0.973 eV for the 1 eV target, respectively, signifying a marked decline in performance. The mean absolute errors of a plain conditional invertible neural network were 0.219 eV and 0.193 eV, indicating reasonably good performance but still falling short of the precision requirements. The best-performing model was the conditional invertible neural network within the MatDesINNe framework (MatDesINNe-cINN), with mean absolute errors of 0.013 eV and 0.015 eV. Subsequently, the authors validated the performance of the MatDesINNe-cINN model on 200 samples using DFT calculations; the results for the target bandgap of 0.5 eV are depicted in Fig.7(f) and (g). For the three target bandgap values, the model exhibited a mean absolute error of approximately 0.1 eV, indicating that it can generate samples with specific bandgaps with high accuracy.
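
The appeal of INN-based design is that a single set of parameters provides both the forward map (design parameters to property or latent space) and its exact inverse. A common building block for such bijections is the affine coupling layer, sketched below in a heavily simplified, hypothetical form (a RealNVP-style block rather than the full MatDesINNe-cINN of Ref. [170]); the dimension of 8 is an illustrative stand-in for the strain-plus-field design space.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Invertible block: split x into (x1, x2) and transform x2 given x1.

    Forward:  y1 = x1,  y2 = x2 * exp(s(x1)) + t(x1)
    Inverse:  x1 = y1,  x2 = (y2 - t(y1)) * exp(-s(y1))
    The same parameters therefore serve both the forward (design -> latent)
    and the inverse (latent -> design) passes.
    """
    def __init__(self, dim, hidden=32):
        super().__init__()
        self.d = dim // 2
        self.s = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim - self.d))
        self.t = nn.Sequential(nn.Linear(self.d, hidden), nn.Tanh(),
                               nn.Linear(hidden, dim - self.d))

    def forward(self, x):
        x1, x2 = x[:, :self.d], x[:, self.d:]
        y2 = x2 * torch.exp(self.s(x1)) + self.t(x1)
        return torch.cat([x1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.d], y[:, self.d:]
        x2 = (y2 - self.t(y1)) * torch.exp(-self.s(y1))
        return torch.cat([y1, x2], dim=1)

# dim = 8 stands in for the strain-plus-field design space (illustrative)
block = AffineCoupling(dim=8)
x = torch.randn(4, 8)
print(torch.allclose(block.inverse(block(x)), x, atol=1e-5))   # True
```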

On the other hand, Wan et al. [156] introduced a reverse design scheme based on a CNN that efficiently searches for porous graphene structures with minimal thermal conductivity, and revealed how the distribution and density of holes correlate with the reduction of thermal conductivity in porous graphene. This approach requires only 1000 MD simulations to select the optimal solution from a design space of about one million structures; compared with brute-force MD simulation of the entire design space, the efficiency is greatly improved. The reverse design process with the CNN proceeds as follows. Firstly, 100 structures are randomly chosen and simulated to train the first-generation CNN. This network is then employed to predict the thermal conductivity of the remaining structures within the design space and to screen out those with the lowest predicted thermal conductivity. These new structures, together with their thermal conductivity values from MD calculations, are added to the training set, which augments the training data and enables the training of the next-generation CNN. By iterating this process, the graphene structures with the lowest thermal conductivity can be found. A comparison between the proposed scheme and a random search shows that the random search converges slowly, whereas the CNN-based reverse design converges much more swiftly, identifying the 100 structures with the lowest thermal conductivity by the 7th generation. With 24 potential sites and a porosity of 0.5, they further verified the performance of the model in a vast design space containing 2 704 156 possible structures. The results demonstrate that in the 8th generation’s training set, the mean thermal conductivity of the top 100 structures is 14.99 W·m−1·K−1. These results validate the model’s capacity to quickly screen porous graphene structures with low thermal conductivity.
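
The iterative scheme above is essentially an active-learning loop around a surrogate model: train on a small labeled subset, rank the rest of the design space by predicted conductivity, label the most promising candidates with MD, and retrain. The sketch below captures that loop with deliberately trivial stand-ins for the CNN surrogate and the MD oracle (a least-squares fit and a fake conductivity function), so it illustrates the workflow rather than the implementation of Ref. [156].

```python
import numpy as np

rng = np.random.default_rng(0)

def md_thermal_conductivity(layout):
    """Stand-in for an MD simulation; returns a fake conductivity score."""
    return float(layout @ np.arange(layout.size))

def train_surrogate(X, y):
    """Stand-in for CNN training; here a trivial linear least-squares fit."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda S: S @ w

# Candidate design space: hole layouts over 24 potential sites at porosity 0.5
candidates = np.array(
    [rng.permutation([1] * 12 + [0] * 12) for _ in range(5000)], dtype=float)

labeled = list(rng.choice(len(candidates), 100, replace=False))
kappa = [md_thermal_conductivity(candidates[i]) for i in labeled]

for generation in range(7):
    surrogate = train_surrogate(candidates[labeled], np.array(kappa))
    preds = surrogate(candidates)
    seen = set(labeled)
    # label the unlabeled layouts predicted to have the lowest conductivity
    new = [i for i in np.argsort(preds) if i not in seen][:100]
    labeled += new
    kappa += [md_thermal_conductivity(candidates[i]) for i in new]

print("lowest surrogate-guided (fake) conductivity found:", min(kappa))
```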

2D materials have shown high activity in catalytic reactions, and numerous experimental and high-throughput computational studies have reported their use as hydrogen evolution reaction catalysts. However, owing to long experimental cycles and the massive cost of high-throughput calculations of adsorption energies, the rapid discovery of high-performance 2D hydrogen evolution reaction catalysts remains a significant challenge. In 2023, Wu et al. [171] utilized crystal graph convolutional neural networks (CGCNN) to screen high-performance 2D hydrogen evolution reaction catalysts from a 2D materials database. The trained model can identify high-performance 2D material catalysts from 3401 composite structures with different active sites in just a few hours, achieving a remarkable prediction accuracy of 95.2%. In contrast, the average time for calculating the adsorption energy relevant to the hydrogen evolution reaction using DFT is 94 528 seconds, represented by the dotted line in Fig.7(h). The calculation time required for all 3401 active sites using DFT would therefore be 94 528 × 3401 seconds ≈ 10.19 years. Clearly, the efficiency of the model is markedly enhanced compared with the decade-long duration required for DFT calculations, demonstrating the capability of CGCNN to efficiently discover high-performance new structures across a large 2D materials space.
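
A CGCNN operates on a crystal graph in which nodes are atoms and edges connect atoms closer than a cutoff distance, with periodic images included. The NumPy sketch below shows only this graph-construction step for a toy two-atom hexagonal cell; the lattice, cutoff, and brute-force neighbor search are illustrative assumptions and do not reproduce the screening pipeline of Ref. [171].

```python
import numpy as np

def crystal_graph(frac_coords, lattice, cutoff=3.0):
    """Return edges (i, j, distance) between atoms closer than `cutoff`
    angstrom, including one shell of periodic images in each direction."""
    cart = frac_coords @ lattice
    shifts = [np.array([a, b, c]) @ lattice
              for a in (-1, 0, 1) for b in (-1, 0, 1) for c in (-1, 0, 1)]
    edges = []
    for i, ri in enumerate(cart):
        for j, rj in enumerate(cart):
            for shift in shifts:
                d = np.linalg.norm(rj + shift - ri)
                if 1e-8 < d <= cutoff:
                    edges.append((i, j, d))
    return edges

# Toy two-atom hexagonal cell with a large vacuum spacing along c
lattice = np.array([[3.19, 0.0, 0.0],
                    [-1.595, 2.763, 0.0],
                    [0.0, 0.0, 20.0]])
frac = np.array([[1/3, 2/3, 0.5],
                 [2/3, 1/3, 0.5]])
print(len(crystal_graph(frac, lattice)))   # 6 edges: 3 nearest neighbors per atom
```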

6 Summary and outlook

Over the past decade, research in the field of 2D materials has grown significantly, with an increasing abundance of new 2D materials and heterostructures. This development has far outpaced the processing capacities of conventional experimental and computational approaches. In recent years, the emergence of deep learning has brought unprecedented opportunities to the investigation of 2D materials. Training datasets can be collected from material databases, experimental results, and simulation computations [172]. The datasets are then used to train various deep learning models, establishing a mapping between input features and target outputs. The trained models can perform structure characterization, property prediction, and reverse design for 2D materials. To a certain extent, this overcomes the limitations of traditional experimental and computational methods and greatly improves the efficiency of 2D materials research.

This review discusses recent advancements in the application of deep learning for characterizing structure (defect identification, materials identification, and thickness characterization), predicting properties (electronic, mechanical, and thermodynamic properties), and inverse design in 2D materials. A summarized overview is shown in Tab.1. Deep learning models can accurately identify and quantitatively analyze doping and defects in 2D materials with single-atom precision. Integrating deep learning with material characterization techniques will expedite high-precision characterization of 2D materials on large scales. Combining deep learning with theoretical calculations not only enables high-precision property predictions at a fraction of the computational cost but also aids in exploring the performance of 2D materials under complex conditions. Thus, integrating deep learning with experimental results becomes important for practical property predictions in future research. Furthermore, deep learning is essential for reverse design, as it allows the optimal material design parameters to be determined from the properties of the desired material. The design efficiency of 2D materials is improved by freeing researchers from the constraints of traditional trial-and-error approaches, and several studies have demonstrated the feasibility of rapidly designing 2D materials with target properties. It is foreseeable that in future research, deep learning methods will continue to unleash their substantial potential, offering more support and assistance for the study and application of 2D materials. Nevertheless, it is also essential to be mindful of the challenges and limitations inherent to deep learning methods.

Firstly, preparing 2D materials in experiments requires a high level of expertise and complex instruments, while computational simulations require expensive computing resources and a substantial time investment. Consequently, 2D material datasets are often insufficient and fail to comprehensively and objectively represent the characteristics of 2D materials [173]. This shortcoming may prevent the model from learning crucial features, resulting in poor performance and issues such as underfitting, overfitting, and reduced generalization capability [174]. Transfer learning offers a solution to the issue of limited data: knowledge learned from related tasks is transferred to new tasks. By fine-tuning the parameters of a model pre-trained on large-scale datasets, the model can adapt to new datasets and tasks. Transfer learning can thus improve the generalization capacity of the model and diminish the risk of overfitting.

Secondly, the quality of 2D materials data is another pivotal factor affecting model performance. On one hand, diversity and representativeness should be considered when creating datasets, and repeated experiments or simulations should be conducted to ensure the reliability and precision of the data. On the other hand, establishing sizable, high-quality, openly accessible datasets can offer crucial support for deep learning research in the field of 2D materials; aggregating and sharing a substantial volume of 2D material datasets can facilitate the training of more precise and efficient models. Furthermore, in the works discussed in this review, the majority of training datasets are generated via simulation calculations, which deviate somewhat from real physical experiments. It is therefore crucial to pay special attention to the reliability of the models when applied to actual experimental data.

Finally, the interpretability of deep learning methods still needs to be addressed. Deep learning methods are widely regarded as black-box algorithms: their prediction results lack interpretability, and their internal mechanisms and logic are inscrutable. To some extent, this barrier limits the reliability and credibility of deep learning methods in practical applications. When designing deep learning models and loss functions, it is imperative to follow the fundamental principles of physics and chemistry. Incorporating more prior knowledge and constraints into the model can improve its reliability and interpretability, enabling the model to more accurately reflect the patterns and features of the real world and thereby attain more theoretically reasonable predictions.

In the future, on the one hand, there is potential to integrate deep learning and robotics to construct automated systems for the efficient preparation of various types of 2D materials and complex heterogeneous structures, enabling intelligent synthesis of 2D materials and device design [175]. In the field of 2D materials, current studies are mostly still semi-autonomous, and a significant technical challenge is establishing a closed-loop autonomous materials experimentation process. The emergence of autonomous robotic scientists will substantially change the existing mode of human-machine collaboration. On the other hand, there is still huge room to explore in applying deep learning to the study of 2D materials. In the realm of reverse design of 2D materials, beyond the bandgap-targeted examples discussed above, it is worth further investigating the generation of materials with specific thermal conductivities and mechanical properties. Regarding property prediction, aside from electronic, mechanical, and thermal properties, other properties of 2D materials, such as optical properties, superconductivity, and toxicity, need further study [174]. The integration of deep learning into the study of 2D materials advances the field of 2D materials science by overcoming the inherent limitations of traditional experimental and computational methods. While considerable progress has been made, numerous challenges remain for the future.

References

[1]

K. S. Novoselov , A. K. Geim , S. V. Morozov , D. Jiang , Y. Zhang , S. V. Dubonos , I. V. Grigorieva , A. A. Firsov . Electric field effect in atomically thin carbon films. Science, 2004, 306(5696): 666

[2]

Q. Ma , G. Ren , K. Xu , J. Z. Ou . Tunable optical properties of 2D materials and their applications. Adv. Opt. Mater., 2021, 9(2): 2001313

[3]

X. L. Li , W. P. Han , J. B. Wu , X. F. Qiao , J. Zhang , P. H. Tan . Layer-number dependent optical properties of 2D materials and their application for thickness determination. Adv. Funct. Mater., 2017, 27(19): 1604468

[4]

T. Low , A. Chaves , J. D. Caldwell , A. Kumar , N. X. Fang , P. Avouris , T. F. Heinz , F. Guinea , L. Martin-Moreno , F. Koppens . Polaritons in layered two-dimensional materials. Nat. Mater., 2017, 16(2): 182

[5]

Y. Qin , M. Sayyad , A. R. P. Montblanch , M. S. G. Feuer , D. Dey , M. Blei , R. Sailus , D. M. Kara , Y. Shen , S. Yang , A. S. Botana , M. Atature , S. Tongay . Reaching the excitonic limit in 2D Janus monolayers by in situ deterministic growth. Adv. Mater., 2022, 34(6): 2106222

[6]

T. LaMountain , E. J. Lenferink , Y. J. Chen , T. K. Stanev , N. P. Stern . Environmental engineering of transition metal dichalcogenide optoelectronics. Front. Phys., 2018, 13(4): 138114

[7]

Y. Liu , C. Xiao , Z. Li , Y. Xie . Vacancy engineering for tuning electron and phonon structures of two-dimensional materials. Adv. Energy Mater., 2016, 6(23): 1600436

[8]

A. Kuc , T. Heine , A. Kis . Electronic properties of transition-metal dichalcogenides. MRS Bull., 2015, 40(7): 577

[9]

Q. H. Wang , K. Kalantar-Zadeh , A. Kis , J. N. Coleman , M. S. Strano . Electronics and optoelectronics of two-dimensional transition metal dichalcogenides. Nat. Nanotechnol., 2012, 7(11): 699

[10]

Y. Q. Fang , F. K. Wang , R. Q. Wang , T. Zhai , F. Huang . 2D NbOI2: A chiral semiconductor with highly in-plane anisotropic electrical and optical properties. Adv. Mater., 2021, 33(29): 2101505

[11]

R. Yang , J. Fan , M. Sun . Transition metal dichalcogenides (TMDCs) heterostructures: Optoelectric properties. Front. Phys., 2022, 17(4): 43202

[12]

H. Song , J. Liu , B. Liu , J. Wu , H. M. Cheng , F. Kang . Two-dimensional materials for thermal management applications. Joule, 2018, 2(3): 442

[13]

Y. Wang , N. Xu , D. Li , J. Zhu . Thermal properties of two dimensional layered materials. Adv. Funct. Mater., 2017, 27(19): 1604134

[14]

L. Thiel , Z. Wang , M. A. Tschudin , D. Rohner , I. Gutiérrez-Lezama , N. Ubrig , M. Gibertini , E. Giannini , A. F. Morpurgo , P. Maletinsky . Probing magnetism in 2D materials at the nanoscale with single-spin microscopy. Science, 2019, 364(6444): 973

[15]

Y. Li , B. Yang , S. Xu , B. Huang , W. Duan . Emergent phenomena in magnetic two-dimensional materials and van der Waals heterostructures. ACS Appl. Electron. Mater., 2022, 4(7): 3278

[16]

M. Gibertini , M. Koperski , A. F. Morpurgo , K. S. Novoselov . Magnetic 2D materials and heterostructures. Nat. Nanotechnol., 2019, 14(5): 408

[17]

X. Li , M. Sun , C. Shan , Q. Chen , X. Wei . Mechanical properties of 2D materials studied by in situ microscopy techniques. Adv. Mater. Interfaces, 2018, 5(5): 1701246

[18]

H. Jiang , L. Zheng , Z. Liu , X. Wang . Two-dimensional materials: From mechanical properties to flexible mechanical sensors. InfoMat, 2020, 2(6): 1077

[19]

C. Fang , H. Wang , Z. Shen , H. Shen , S. Wang , J. Ma , J. Wang , H. Luo , D. Li . High-performance photodetectors based on lead-free 2D Ruddlesden–Popper perovskite/MoS2 heterostructures. ACS Appl. Mater. Interfaces, 2019, 11(8): 8419

[20]

H. Liu , X. Zhu , X. Sun , C. Zhu , W. Huang , X. Zhang , B. Zheng , Z. Zou , Z. Luo , X. Wang , D. Li , A. Pan . Self-powered broad-band photodetectors based on vertically stacked WSe2/Bi2Te3 p–n heterojunctions. ACS Nano, 2019, 13(11): 13573

[21]

M. Long , A. Gao , P. Wang , H. Xia , C. Ott , C. Pan , Y. Fu , E. Liu , X. Chen , W. Lu , T. Nilges , J. Xu , X. Wang , W. Hu , F. Miao . Room temperature high-detectivity mid-infrared photodetectors based on black arsenic phosphorus. Sci. Adv., 2017, 3(6): e1700589

[22]

S. Das , D. Pandey , J. Thomas , T. Roy . The role of graphene and other 2D materials in solar photovoltaics. Adv. Mater., 2019, 31(1): 1802722

[23]

A. Abnavi , R. Ahmadi , H. Ghanbari , M. Fawzy , A. Hasani , T. De Silva , A. M. Askar , M. R. Mohammadzadeh , F. Kabir , M. Whitwick , M. Beaudoin , S. K. O’Leary , M. M. Adachi . Flexible high-performance photovoltaic devices based on 2D MoS2 diodes with geometrically asymmetric contact areas. Adv. Funct. Mater., 2023, 33(7): 2210619

[24]

J. Sung , D. Shin , H. Cho , S. W. Lee , S. Park , Y. D. Kim , J. S. Moon , J. H. Kim , S. H. Gong . Room-temperature continuous-wave indirect-bandgap transition lasing in an ultra-thin WS2 disk. Nat. Photonics, 2022, 16(11): 792

[25]

C. Li , L. Zhao , Q. Shang , R. Wang , P. Bai , J. Zhang , Y. Gao , Q. Cao , Z. Wei , Q. Zhang . Room-temperature near-infrared excitonic lasing from mechanically exfoliated InSe microflake. ACS Nano, 2022, 16(1): 1477

[26]

J. Gu , B. Chakraborty , M. Khatoniar , V. M. Menon . A room-temperature polariton light-emitting diode based on monolayer WS2. Nat. Nanotechnol., 2019, 14(11): 1024

[27]

L. Zhao , Y. Jiang , C. Li , Y. Liang , Z. Wei , X. Wei , Q. Zhang . Probing anisotropic deformation and near-infrared emission tuning in thin-layered InSe crystal under high pressure. Nano Lett., 2023, 23(8): 3493

[28]

J. Wang , Y. J. Zhou , D. Xiang , S. J. Ng , K. Watanabe , T. Taniguchi , G. Eda . Polarized light-emitting diodes based on anisotropic excitons in few-layer ReS2. Adv. Mater., 2020, 32(32): 2001890

[29]

D. Jariwala , V. K. Sangwan , L. J. Lauhon , T. J. Marks , M. C. Hersam . Emerging device applications for semiconducting two-dimensional transition metal dichalcogenides. ACS Nano, 2014, 8(2): 1102

[30]

G. Fiori , F. Bonaccorso , G. Iannaccone , T. Palacios , D. Neumaier , A. Seabaugh , S. K. Banerjee , L. Colombo . Electronics based on two-dimensional materials. Nat. Nanotechnol., 2014, 9(10): 768

[31]

P. Kaushal , G. Khanna . The role of two-dimensional materials for electronic devices. Mater. Sci. Semicond. Process., 2022, 143: 106546

[32]

R. Cheng , S. Jiang , Y. Chen , Y. Liu , N. Weiss , H. C. Cheng , H. Wu , Y. Huang , X. Duan . Few-layer molybdenum disulfide transistors and circuits for high-speed flexible electronics. Nat. Commun., 2014, 5(1): 5143

[33]

M. Choi , S. R. Bae , L. Hu , A. T. Hoang , S. Y. Kim , J. H. Ahn . Full-color active-matrix organic light-emitting diode display on human skin based on a large-area MoS2 backplane. Sci. Adv., 2020, 6(28): eabb5898

[34]

B. Mukherjee , R. Hayakawa , K. Watanabe , T. Taniguchi , S. Nakaharai , Y. Wakayama . ReS2/h-BN/graphene heterostructure based multifunctional devices:Tunneling diodes, FETs, logic gates, and memory. Adv. Electron. Mater., 2021, 7(1): 2000925

[35]

M. Cheng , J. B. Yang , X. H. Li , H. Li , R. F. Du , J. P. Shi , J. He . Improving the device performances of two-dimensional semiconducting transition metal dichalcogenides: Three strategies. Front. Phys., 2022, 17(6): 63601

[36]

X. Hu , G. Wang , J. Li , J. Huang , Y. Liu , G. Zhong , J. Yuan , H. Zhan , Z. Wen . Significant contribution of single atomic Mn implanted in carbon nanosheets to high-performance sodium–ion hybrid capacitors. Energy Environ. Sci., 2021, 14(8): 4564

[37]

Z. Huang , H. Hou , Y. Zhang , C. Wang , X. Qiu , X. Ji . Layer-tunable phosphorene modulated by the cation insertion rate as a sodium-storage anode. Adv. Mater., 2017, 29(34): 1702372

[38]

X. Lu , Y. Shi , D. Tang , X. Lu , Z. Wang , N. Sakai , Y. Ebina , T. Taniguchi , R. Ma , T. Sasaki , C. Yan . Accelerated ionic and charge transfer through atomic interfacial electric fields for superior sodium storage. ACS Nano, 2022, 16(3): 4775

[39]

X. Li , M. Li , Z. Huang , G. Liang , Z. Chen , Q. Yang , Q. Huang , C. Zhi . Activating the I0/I+ redox couple in an aqueous I2–Zn battery to achieve a high voltage plateau. Energy Environ. Sci., 2021, 14(1): 407

[40]

Y. Zhang , J. Cao , Z. Yuan , L. Zhao , L. Wang , W. Han . Assembling Co3O4 nanoparticles into MXene with enhanced electrochemical performance for advanced asymmetric supercapacitors. J. Colloid Interface Sci., 2021, 599: 109

[41]

Y. K. Kim , K. Y. Shin . Functionalized phosphorene/polypyrrole hybrid nanomaterial by covalent bonding and its supercapacitor application. J. Ind. Eng. Chem., 2021, 94: 122

[42]

Q. Fu , Y. Meng , Z. Fang , Q. Hu , L. Xu , W. Gao , X. Huang , Q. Xue , Y. P. Sun , F. Lu . Boron nitride nanosheet-anchored Pd–Fe core–shell nanoparticles as highly efficient catalysts for suzuki–miyaura coupling reactions. ACS Appl. Mater. Interfaces, 2017, 9(3): 2469

[43]

H. H. Shin , E. Kang , H. Park , T. Han , C. H. Lee , D. K. Lim . Pd-nanodot decorated MoS2 nanosheets as a highly efficient photocatalyst for the visible-light-induced Suzuki–Miyaura coupling reaction. J. Mater. Chem. A, 2017, 5(47): 24965

[44]

C. Yao , N. Guo , S. Xi , C. Q. Xu , W. Liu , X. Zhao , J. Li , H. Fang , J. Su , Z. Chen , H. Yan , Z. Qiu , P. Lyu , C. Chen , H. Xu , X. Peng , X. Li , B. Liu , C. Su , S. J. Pennycook , C. J. Sun , J. Li , C. Zhang , Y. Du , J. Lu . Atomically-precise dopant-controlled single cluster catalysis for electrochemical nitrogen reduction. Nat. Commun., 2020, 11(1): 4389

[45]

Z. Luo , H. Zhang , Y. Yang , X. Wang , Y. Li , Z. Jin , Z. Jiang , C. Liu , W. Xing , J. Ge . Reactant friendly hydrogen evolution interface based on di-anionic MoS2 surface. Nat. Commun., 2020, 11(1): 1116

[46]

H. J. Li , K. Xi , W. Wang , S. Liu , G. R. Li , X. P. Gao . Quantitatively regulating defects of 2D tungsten selenide to enhance catalytic ability for polysulfide conversion in a lithium sulfur battery. Energy Storage Mater., 2022, 45: 1229

[47]

G. Zhang , G. Li , J. Wang , H. Tong , J. Wang , Y. Du , S. Sun , F. Dang . 2D SnSe cathode catalyst featuring an efficient facet-dependent selective Li2O2 growth/decomposition for Li-oxygen batteries. Adv. Energy Mater., 2022, 12(21): 2103910

[48]

J. Hou , H. Wang , Z. Ge , T. Zuo , Q. Chen , X. Liu , S. Mou , C. Fan , Y. Xie , L. Wang . Treating acute kidney injury with antioxidative black phosphorus nanosheets. Nano Lett., 2020, 20(2): 1447

[49]

W. Chen , J. Ouyang , X. Yi , Y. Xu , C. Niu , W. Zhang , L. Wang , J. Sheng , L. Deng , Y. N. Liu , S. Guo . Black phosphorus nanosheets as a neuroprotective nanomedicine for neurodegenerative disorder therapy. Adv. Mater., 2018, 30(3): 1703458

[50]

D. Yim , D. E. Lee , Y. So , C. Choi , W. Son , K. Jang , C. S. Yang , J. H. Kim . Sustainable nanosheet antioxidants for sepsis therapy via scavenging intracellular reactive oxygen and nitrogen species. ACS Nano, 2020, 14(8): 10324

[51]

W. Feng , X. Han , H. Hu , M. Chang , L. Ding , H. Xiang , Y. Chen , Y. Li . 2D vanadium carbide MXenzyme to alleviate ROS-mediated inflammatory and neurodegenerative diseases. Nat. Commun., 2021, 12(1): 2203

[52]

M. Li , X. Peng , Y. Han , L. Fan , Z. Liu , Y. Guo . Ti3C2 MXenes with intrinsic peroxidase-like activity for label-free and colorimetric sensing of proteins. Microchem. J., 2021, 166: 106238

[53]

K. Rasool , M. Helal , A. Ali , C. E. Ren , Y. Gogotsi , K. A. Mahmoud . Antibacterial activity of Ti3C2Tx MXene. ACS Nano, 2016, 10(3): 3674

[54]

A. Arabi Shamsabadi , M. Sharifian Gh , B. Anasori , M. Soroush . Antimicrobial mode-of-action of colloidal Ti3C2Tx MXene nanosheets. ACS Sustain. Chem. & Eng., 2018, 6(12): 16586

[55]

R. Sha , T. K. Bhattacharyya . MoS2-based nanosensors in biomedical and environmental monitoring applications. Electrochim. Acta, 2020, 349: 136370

[56]

H. K. Choi , J. Park , O. H. Gwon , J. Y. Kim , S. J. Kang , H. R. Byun , B. K. Shin , S. G. Jang , H. S. Kim , Y. J. Yu . Gate-tuned gas molecule sensitivity of a two-dimensional semiconductor. ACS Appl. Mater. Interfaces, 2022, 14(20): 23617

[57]

S. P. Figerez , K. K. Tadi , K. R. Sahoo , R. Sharma , R. K. Biroju , A. Gigi , K. A. Anand , G. Kalita , T. N. Narayanan . Molybdenum disulfide–graphene van der Waals heterostructures as stable and sensitive electrochemical sensing platforms. Tungsten, 2020, 2(4): 411

[58]

R. Madhuvilakku , S. Alagar , R. Mariappan , S. Piraman . Glassy carbon electrodes modified with reduced graphene oxide-MoS2-poly (3, 4-ethylene dioxythiophene) nanocomposites for the non-enzymatic detection of nitrite in water and milk. Anal. Chim. Acta, 2020, 1093: 93

[59]

L. Wu , Q. Wang , B. Ruan , J. Zhu , Q. You , X. Dai , Y. Xiang . High-performance lossy-mode resonance sensor based on few-layer black phosphorus. J. Phys. Chem. C, 2018, 122(13): 7368

[60]

C. H. Huang , T. T. Huang , C. H. Chiang , W. T. Huang , Y. T. Lin . A chemiresistive biosensor based on a layered graphene oxide/graphene composite for the sensitive and selective detection of circulating miRNA-21. Biosens. Bioelectron., 2020, 164: 112320

[61]

S. Cui , H. Pu , S. A. Wells , Z. Wen , S. Mao , J. Chang , M. C. Hersam , J. Chen . Ultrahigh sensitivity and layer-dependent sensing performance of phosphorene-based gas sensors. Nat. Commun., 2015, 6(1): 8632

[62]

Q. Liang , Q. Wang , Q. Zhang , J. Wei , S. X. Lim , R. Zhu , J. Hu , W. Wei , C. Lee , C. H. Sow , W. Zhang , A. T. S. Wee . High-performance, room temperature, ultra-broadband photodetectors based on air-stable PdSe2. Adv. Mater., 2019, 31(24): 1807609

[63]

Y. Wang , L. Li , W. Yao , S. Song , J. T. Sun , J. Pan , X. Ren , C. Li , E. Okunishi , Y. Q. Wang , E. Wang , Y. Shao , Y. Y. Zhang , H. Yang , E. F. Schwier , H. Iwasawa , K. Shimada , M. Taniguchi , Z. Cheng , S. Zhou , S. Du , S. J. Pennycook , S. T. Pantelides , H. J. Gao . Monolayer PtSe2, a new semiconducting transition-metal-dichalcogenide, epitaxially grown by direct selenization of Pt. Nano Lett., 2015, 15(6): 4013

[64]

X. Yu , P. Yu , D. Wu , B. Singh , Q. Zeng , H. Lin , W. Zhou , J. Lin , K. Suenaga , Z. Liu , Q. J. Wang . Atomically thin noble metal dichalcogenide: A broadband mid-infrared semiconductor. Nat. Commun., 2018, 9(1): 1545

[65]

A. D. Oyedele , S. Yang , L. Liang , A. A. Puretzky , K. Wang , J. Zhang , P. Yu , P. R. Pudasaini , A. W. Ghosh , Z. Liu , C. M. Rouleau , B. G. Sumpter , M. F. Chisholm , W. Zhou , P. D. Rack , D. B. Geohegan , K. Xiao . PdSe2: Pentagonal two-dimensional layers with high air stability for electronics. J. Am. Chem. Soc., 2017, 139(40): 14090

[66]

Y. Gong , Z. Lin , Y. X. Chen , Q. Khan , C. Wang , B. Zhang , G. Nie , N. Xie , D. Li . Two-dimensional platinum diselenide: Synthesis, emerging applications, and future challenges. Nano-Micro Lett., 2020, 12(1): 174

[67]

Y. Wang , Y. Li , Z. Chen . Not your familiar two dimensional transition metal disulfide: structural and electronic properties of the PdS2 monolayer. J. Mater. Chem. C, 2015, 3(37): 9603

[68]

M. Ghorbani-Asl , A. Kuc , P. Miro , T. Heine . A single-material logical junction based on 2D crystal PdS2. Adv. Mater., 2016, 28(5): 853

[69]

Y. Zhao , J. Qiao , P. Yu , Z. Hu , Z. Lin , S. P. Lau , Z. Liu , W. Ji , Y. Chai . Extraordinarily strong interlayer interaction in 2D layered PtS2. Adv. Mater., 2016, 28(12): 2399

[70]

X. Chia , A. Adriano , P. Lazar , Z. Sofer , J. Luxa , M. Pumera . Layered platinum dichalcogenides (PtS2, PtSe2, and PtTe2) electrocatalysis: Monotonic dependence on the chalcogen size. Adv. Funct. Mater., 2016, 26(24): 4306

[71]

Y. Wang , L. Zhou , M. Zhong , Y. Liu , S. Xiao , J. He . Two-dimensional noble transition-metal dichalcogenides for nanophotonics and optoelectronics: Status and prospects. Nano Res., 2022, 15(4): 3675

[72]

L. Pi , L. Li , K. Liu , Q. Zhang , H. Li , T. Zhai . Recent progress on 2D noble-transition-metal dichalcogenides. Adv. Funct. Mater., 2019, 29(51): 1904932

[73]

H. Zeng , Y. Wen , L. Yin , R. Q. Cheng , H. Wang , C. S. Liu , J. He . Recent developments in CVD growth and applications of 2D transition metal dichalcogenides. Front. Phys., 2023, 18(5): 53603

[74]

W. Wu , G. Qiu , Y. Wang , R. Wang , P. Ye . Tellurene: Its physical properties, scalable nanomanufacturing, and device applications. Chem. Soc. Rev., 2018, 47(19): 7203

[75]

Y. Wang , G. Qiu , R. Wang , S. Huang , Q. Wang , Y. Liu , Y. Du , W. A. III Goddard , M. J. Kim , X. Xu , P. D. Ye , W. Wu . Field-effect transistors made from solution-grown two-dimensional tellurene. Nat. Electron., 2018, 1(4): 228

[76]

Z. Xie , C. Xing , W. Huang , T. Fan , Z. Li , J. Zhao , Y. Xiang , Z. Guo , J. Li , Z. Yang , B. Dong , J. Qu , D. Fan , H. Zhang . Ultrathin 2D nonlayered tellurium nanosheets: Facile liquid-phase exfoliation, characterization, and photoresponse with high performance and enhanced stability. Adv. Funct. Mater., 2018, 28(16): 1705833

[77]

W. Gao , Z. Zheng , P. Wen , N. Huo , J. Li . Novel two-dimensional monoelemental and ternary materials: Growth, physics and application. Nanophotonics, 2020, 9(8): 2147

[78]

L. Xian , A. Pérez Paz , E. Bianco , P. M. Ajayan , A. Rubio . Square selenene and tellurene: Novel group VI elemental 2D materials with nontrivial topological properties. 2D Mater., 2017, 4(4): 041003

[79]

D. Ji , S. Cai , T. R. Paudel , H. Sun , C. Zhang , L. Han , Y. Wei , Y. Zang , M. Gu , Y. Zhang , W. Gao , H. Huyan , W. Guo , D. Wu , Z. Gu , E. Y. Tsymbal , P. Wang , Y. Nie , X. Pan . Freestanding crystalline oxide perovskites down to the monolayer limit. Nature, 2019, 570(7759): 87

[80]

Y. Zhang , H. H. Ma , X. Gan , Y. Hui , Y. Zhang , J. Su , M. Yang , Z. Hu , J. Xiao , X. Lu , J. Zhang , Y. Hao . Emergent midgap excitons in large-size freestanding 2D strongly correlated perovskite oxide films. Adv. Opt. Mater., 2021, 9(10): 2100025

[81]

Y. Lu , H. Zhang , Y. Wang , X. Zhu , W. Xiao , H. Xu , G. Li , Y. Li , D. Fan , H. Zeng , Z. Chen , X. Yang . Solar-driven interfacial evaporation accelerated electrocatalytic water splitting on 2D perovskite oxide/MXene heterostructure. Adv. Funct. Mater., 2023, 33(21): 2215061

[82]

K. Burke . Perspective on density functional theory. J. Chem. Phys., 2012, 136(15): 150901

[83]

N. Mounet , M. Gibertini , P. Schwaller , D. Campi , A. Merkys , A. Marrazzo , T. Sohier , I. E. Castelli , A. Cepellotti , G. Pizzi , N. Marzari . Two-dimensional materials from high-throughput computational exfoliation of experimentally known compounds. Nat. Nanotechnol., 2018, 13(3): 246

[84]

A. K. Geim , I. V. Grigorieva . Van der Waals heterostructures. Nature, 2013, 499(7459): 419

[85]

Y. Liu , N. O. Weiss , X. Duan , H. C. Cheng , Y. Huang , X. Duan . Van der Waals heterostructures and devices. Nat. Rev. Mater., 2016, 1(9): 16042

[86]

K. Novoselov , A. Mishchenko , A. Carvalho , A. H. Castro Neto . 2D materials and van der Waals heterostructures. Science, 2016, 353(6298): aac9439

[87]

A. Castellanos-Gomez , X. Duan , Z. Fei , H. R. Gutierrez , Y. Huang , X. Huang , J. Quereda , Q. Qian , E. Sutter , P. Sutter . Van der Waals heterostructures. Nat. Rev. Methods Primers, 2022, 2(1): 58

[88]

X. L. Fan , R. F. Xin , L. Li , B. Zhang , C. Li , X. L. Zhou , H. Z. Chen , H. Y. Zhang , F. P. Ouyang , Y. Zhou . Progress in the preparation and physical properties of two-dimensional Cr-based chalcogenide materials and heterojunctions. Front. Phys., 2023, 19(2): 23401

[89]

L. Deng , D. Yu . Deep learning: Methods and applications. Foundations and Trends in Signal Processing, 2014, 7(3-4): 197

[90]

E. Moen , D. Bannon , T. Kudo , W. Graf , M. Covert , D. Van Valen . Deep learning for cellular image analysis. Nat. Methods, 2019, 16(12): 1233

[91]

Y. LeCun , Y. Bengio , G. Hinton . Deep learning. Nature, 2015, 521(7553): 436

[92]

L. Deng , G. Hinton , B. Kingsbury . New types of deep neural network learning for speech recognition and related applications: An overview. In: Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, May 26‒31, 2013

[93]

D. W. Otter , J. R. Medina , J. K. Kalita . A survey of the usages of deep learning for natural language processing. IEEE Trans. Neural Netw. Learn. Syst., 2021, 32(2): 604

[94]

M. Z. Alom , T. M. Taha , C. Yakopcic , S. Westberg , P. Sidike , M. S. Nasrin , M. Hasan , B. C. Van Essen , A. A. S. Awwal , V. K. Asari . A state-of-the-art survey on deep learning theory and architectures. Electronics (Basel), 2019, 8(3): 292

[95]

G. E. Hinton , R. R. Salakhutdinov . Reducing the dimensionality of data with neural networks. Science, 2006, 313(5786): 504

[96]

M. I. Jordan , T. M. Mitchell . Machine learning: Trends, perspectives, and prospects. Science, 2015, 349(6245): 255

[97]

W. S. McCulloch , W. Pitts . A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys., 1943, 5(4): 115

[98]

F. Rosenblatt . The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev., 1958, 65(6): 386

[99]

D. E. Rumelhart , G. E. Hinton , R. J. Williams . Learning representations by back-propagating errors. Nature, 1986, 323(6088): 533

[100]

K. Fukushima . Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Netw., 1988, 1(2): 119

[101]

Y. Lecun , L. Bottou , Y. Bengio , P. Haffner . Gradient-based learning applied to document recognition. Proc. IEEE, 1998, 86(11): 2278

[102]

I. Goodfellow , J. Pouget-Abadie , M. Mirza , B. Xu , D. Warde-Farley , S. Ozair , A. Courville , Y. Bengio . Generative adversarial networks. Commun. ACM, 2020, 63(11): 139

[103]

J. Cheng , Y. Yang , X. Tang , et al. Generative adversarial networks: A literature review. Trans. Internet Inf. Syst. (Seoul), 2020, 14(12)

[104]

O. Ronneberger , P. Fischer , T. Brox . U-net: Convolutional networks for biomedical image segmentation. In: Proceedings of Medical Image Computing and Computer-Assisted Intervention-MICCAI 2015: 18th International Conference, Munich, Germany, October 5‒9, 2015, Proceedings, Part III. Springer, 2015

[105]

H. Li , J. Wu , X. Huang , G. Lu , J. Yang , X. Lu , Q. Xiong , H. Zhang . Rapid and reliable thickness identification of two-dimensional nanosheets using optical microscopy. ACS Nano, 2013, 7(11): 10344

[106]

H. C. Wang , S. W. Huang , J. M. Yang , G. H. Wu , Y. P. Hsieh , S. W. Feng , M. K. Lee , C. T. Kuo . Large-area few-layered graphene film determination by multispectral imaging microscopy. Nanoscale, 2015, 7(19): 9033

[107]

Y. Li , N. Dong , S. Zhang , K. Wang , L. Zhang , J. Wang . Optical identification of layered MoS2 via the characteristic matrix method. Nanoscale, 2016, 8(2): 1210

[108]

J. Zhang , Y. Yu , P. Wang , C. Luo , X. Wu , Z. Sun , J. Wang , W. D. Hu , G. Shen . Characterization of atomic defects on the photoluminescence in two-dimensional materials using transmission electron microscope. InfoMat, 2019, 1(1): 85

[109]

W. Zhao , B. Xia , L. Lin , X. Xiao , P. Liu , X. Lin , H. Peng , Y. Zhu , R. Yu , P. Lei , J. Wang , L. Zhang , Y. Xu , M. Zhao , L. Peng , Q. Li , W. Duan , Z. Liu , S. Fan , K. Jiang . Low-energy transmission electron diffraction and imaging of large-area graphene. Sci. Adv., 2017, 3(9): e1603231

[110]

S. Yang . Scanning transmission electron microscopy (STEM) study on novel two-dimensional materials. Microsc. Microanal., 2020, 26(S2): 2372

[111]

S. de Graaf , B. J. Kooi . Radiation damage and defect dynamics in 2D WS2: A low-voltage scanning transmission electron microscopy study. 2D Mater., 2021, 9(1): 015009

[112]

S. Kim , D. Moon , B. R. Jeon , J. Yeon , X. Li , S. Kim . Accurate atomic-scale imaging of two-dimensional lattices using atomic force microscopy in ambient conditions. Nanomaterials (Basel), 2022, 12(9): 1542

[113]

D. S. Wastl , A. J. Weymouth , F. J. Giessibl . Atomically resolved graphitic surfaces in air by atomic force microscopy. ACS Nano, 2014, 8(5): 5233

[114]

Q. Tu , B. Lange , Z. Parlak , J. M. J. Lopes , V. Blum , S. Zauscher . Quantitative subsurface atomic structure fingerprint for 2D materials and heterostructures by first-principles-calibrated contact-resonance atomic force microscopy. ACS Nano, 2016, 10(7): 6491

[115]

C. Lee , H. Yan , L. E. Brus , T. F. Heinz , J. Hone , S. Ryu . Anomalous lattice vibrations of single- and few-layer MoS2. ACS Nano, 2010, 4(5): 2695

[116]

D. L. Silva , J. L. E. Campos , T. F. Fernandes , J. N. Rocha , L. R. P. Machado , E. M. Soares , D. R. Miquita , H. Miranda , C. Rabelo , O. P. Vilela Neto , A. Jorio , L. G. Cançado . Raman spectroscopy analysis of number of layers in mass-produced graphene flakes. Carbon, 2020, 161: 181

[117]

I. Stenger , L. Schué , M. Boukhicha , B. Berini , B. Plaçais , A. Loiseau , J. Barjon . Low frequency Raman spectroscopy of few-atomic-layer thick hBN crystals. 2D Mater., 2017, 4(3): 031003

[118]

Z. H. Ni , H. M. Wang , J. Kasim , H. M. Fan , T. Yu , Y. H. Wu , Y. P. Feng , Z. X. Shen . Graphene thickness determination using reflection and contrast spectroscopy. Nano Lett., 2007, 7(9): 2758

[119]

R. Frisenda , Y. Niu , P. Gant , A. J. Molina-Mendoza , R. Schmidt , R. Bratschitsch , J. Liu , L. Fu , D. Dumcenco , A. Kis , D. P. De Lara , A. Castellanos-Gomez . Micro-reflectance and transmittance spectroscopy: a versatile and powerful tool to characterize 2D materials. J. Phys. D Appl. Phys., 2017, 50(7): 074002

[120]

S. Y. Zeng , F. Li , C. Tan , L. Yang , Z. G. Wang . Defect repairing in two-dimensional transition metal dichalcogenides. Front. Phys., 2023, 18(5): 53604

[121]

M. Ziatdinov , O. Dyck , A. Maksov , X. Li , X. Sang , K. Xiao , R. R. Unocic , R. Vasudevan , S. Jesse , S. V. Kalinin . Deep learning of atomically resolved scanning transmission electron microscopy images: Chemical identification and tracking local transformations. ACS Nano, 2017, 11(12): 12742

[122]

J. Madsen , P. Liu , J. Kling , J. B. Wagner , T. W. Hansen , O. Winther , J. Schiøtz . A deep learning approach to identify local structures in atomic-resolution transmission electron microscopy images. Adv. Theory Simul., 2018, 1(8): 1800037

[123]

A. Maksov , O. Dyck , K. Wang , K. Xiao , D. B. Geohegan , B. G. Sumpter , R. K. Vasudevan , S. Jesse , S. V. Kalinin , M. Ziatdinov . Deep learning analysis of defect and phase evolution during electron beam-induced transformations in WS2. npj Comput. Mater., 2019, 5(1): 12

[124]

D. H. Yang , Y. S. Chu , O. F. N. Okello , S. Y. Seo , G. Moon , K. H. Kim , M. H. Jo , D. Shin , T. Mizoguchi , S. Yang , S. Y. Choi . Full automation of point defect detection in transition metal dichalcogenides through a dual mode deep learning algorithm. Mater. Horiz., 2024, 11(3): 747

[125]

S. H. Yang , W. Choi , B. W. Cho , F. O. T. Agyapong-Fordjour , S. Park , S. J. Yun , H. J. Kim , Y. K. Han , Y. H. Lee , K. K. Kim , Y. M. Kim . Deep learning-assisted quantification of atomic dopants and defects in 2D materials. Adv. Sci. (Weinh.), 2021, 8(16): 2101099

[126]

C. H. Lee , A. Khan , D. Luo , T. P. Santos , C. Shi , B. E. Janicek , S. Kang , W. Zhu , N. A. Sobh , A. Schleife , B. K. Clark , P. Y. Huang . Deep learning enabled strain mapping of single-atom defects in two-dimensional transition metal dichalcogenides with sub-picometer precision. Nano Lett., 2020, 20(5): 3369

[127]

T. Chu , L. Zhou , B. Zhang , F. Z. Xuan . Accurate atomic scanning transmission electron microscopy analysis enabled by deep learning. Nano Res., 2023, doi: 10.1007/s12274-023-6104-1

[128]

B. Wu , L. Wang , Z. Gao . A two-dimensional material recognition image algorithm based on deep learning. In: Proceedings of the 2019 International Conference on Information Technology and Computer Application (ITCA), IEEE, 2019

[129]

Y. Saito , K. Shin , K. Terayama , S. Desai , M. Onga , Y. Nakagawa , Y. M. Itahashi , Y. Iwasa , M. Yamada , K. Tsuda . Deep-learning-based quality filtering of mechanically exfoliated 2D crystals. npj Comput. Mater., 2019, 5(1): 124

[130]

B. Han , Y. Lin , Y. Yang , N. Mao , W. Li , H. Wang , K. Yasuda , X. Wang , V. Fatemi , L. Zhou , J. I. J. Wang , Q. Ma , Y. Cao , D. Rodan-Legrain , Y. Q. Bie , E. Navarro-Moratalla , D. Klein , D. MacNeill , S. Wu , H. Kitadai , X. Ling , P. Jarillo-Herrero , J. Kong , J. Yin , T. Palacios . Deep-learning-enabled fast optical identification and characterization of 2D materials. Adv. Mater., 2020, 32(29): 2000953

[131]

S. Masubuchi , E. Watanabe , Y. Seo , S. Okazaki , T. Sasagawa , K. Watanabe , T. Taniguchi , T. Machida . Deep-learning-based image segmentation integrated with optical microscopy for automatically searching for two-dimensional materials. npj 2D Mater. Appl., 2020, 4(1): 3

[132]

T. Y. Lin , M. Maire , S. Belongie , et al. Microsoft COCO: Common objects in context. In: Proceedings of Computer Vision-ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6‒12, 2014, Proceedings, Part V. Springer, 2014

[133]

S. Mahjoubi , F. Ye , Y. Bao , W. Meng , X. Zhang . Identification and classification of exfoliated graphene flakes from microscopy images using a hierarchical deep convolutional neural network. Eng. Appl. Artif. Intell., 2023, 119: 105743

[134]

Y. Zhang , H. Zhang , S. Zhou , G. Liu , J. Zhu . Deep learning-based layer identification of 2D nanomaterials. Coatings, 2022, 12(10): 1551

[135]

H. Zhao , J. Shi , X. Qi , et al. Pyramid scene parsing network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017

[136]

X. Qin , Z. Zhang , C. Huang , M. Dehghan , O. R. Zaiane , M. Jagersand . U2-Net: Going deeper with nested U-structure for salient object detection. Pattern Recognit., 2020, 106: 107404

[137]

X. Dong , Y. Zhang , H. Li , Y. Yan , J. Li , J. Song , K. Wang , M. Jakobi , A. K. Yetisen , A. W. Koch . Microscopic image deblurring by a generative adversarial network for 2D nanomaterials: Implications for wafer-scale semiconductor characterization. ACS Appl. Nano Mater., 2022, 5(9): 12855

[138]

L. Zhu , J. Tang , B. Li , T. Hou , Y. Zhu , J. Zhou , Z. Wang , X. Zhu , Z. Yao , X. Cui , K. Watanabe , T. Taniguchi , Y. Li , Z. V. Han , W. Zhou , Y. Huang , Z. Liu , J. C. Hone , Y. Hao . Artificial neuron networks enabled identification and characterizations of 2D materials and van der Waals heterostructures. ACS Nano, 2022, 16(2): 2721

[139]

X. Dong , H. Li , K. Wang , B. Menze , M. Jakobi , A. K. Yetisen , A. W. Koch . Multispectral microscopic multiplexed (3M) imaging of atomically-thin crystals using deep learning. Adv. Opt. Mater., 2024, 12(2): 2300860

[140]

G. A. Nemnes , T. L. Mitran , A. Manolescu . Gap prediction in hybrid graphene-hexagonal boron nitride nanoflakes using artificial neural networks. J. Nanomater., 2019, 2019: 6960787

[141]

Y. Dong , C. Wu , C. Zhang , Y. Liu , J. Cheng , J. Lin . Bandgap prediction by deep learning in configurationally hybridized graphene and boron nitride. npj Comput. Mater., 2019, 5(1): 26

[142]

C. Cortes , V. Vapnik . Support-vector networks. Mach. Learn., 1995, 20(3): 273

[143]

Y. Ma , S. Lu , Y. Zhang , T. Zhang , Q. Zhou , J. Wang . Accurate energy prediction of large-scale defective two-dimensional materials via deep learning. Appl. Phys. Lett., 2022, 120(21): 213103

[144]

M. Dewapriya , R. Rajapakse , W. Dias . Characterizing fracture stress of defective graphene samples using shallow and deep artificial neural networks. Carbon, 2020, 163: 425

[145]

Y. C. Hsu , C. H. Yu , M. J. Buehler . Using deep learning to predict fracture patterns in crystalline solids. Matter, 2020, 3(1): 197

[146]

A. J. Lew , C. H. Yu , Y. C. Hsu , M. J. Buehler . Deep learning model to predict fracture mechanisms of graphene. npj 2D Mater. Appl., 2021, 5(1): 48

[147]

T. Zhang , X. Li , S. Kadkhodaei , H. Gao . Flaw insensitive fracture in nanocrystalline graphene. Nano Lett., 2012, 12(9): 4605

[148]

C. H. Yu , C. Y. Wu , M. J. Buehler . Deep learning based design of porous graphene for enhanced mechanical resilience. Comput. Mater. Sci., 2022, 206: 111270

[149]

M. S. Elapolu , M. I. R. Shishir , A. Tabarraei . A novel approach for studying crack propagation in polycrystalline graphene using machine learning algorithms. Comput. Mater. Sci., 2022, 201: 110878

[150]

M. S. Elapolu , A. Tabarraei . Mechanical and fracture properties of polycrystalline graphene with hydrogenated grain boundaries. J. Phys. Chem. C, 2021, 125(20): 11147

[151]

A. Shekhawat , R. O. Ritchie . Toughness and strength of nanocrystalline graphene. Nat. Commun., 2016, 7(1): 10546

[152]

M. I. R. Shishir , A. Tabarraei . Traction–separation laws of graphene grain boundaries. Phys. Chem. Chem. Phys., 2021, 23(26): 14284

[153]

M. I. R. Shishir , M. S. R. Elapolu , A. Tabarraei . A deep learning model for predicting mechanical properties of polycrystalline graphene. Comput. Mater. Sci., 2023, 218: 111924

[154]

Y. Shen , S. Zhu . Machine learning mechanical properties of defect-engineered hexagonal boron nitride. Comput. Mater. Sci., 2023, 220: 112030

[155]

H. Yang , Z. Zhang , J. Zhang , X. C. Zeng . Machine learning and artificial neural network prediction of interfacial thermal resistance between graphene and hexagonal boron nitride. Nanoscale, 2018, 10(40): 19092

[156]

J. Wan , J. W. Jiang , H. S. Park . Machine learning-based design of porous graphene with low thermal conductivity. Carbon, 2020, 157: 262

[157]

Q. Liu , Y. Gao , B. Xu . Transferable, deep-learning-driven fast prediction and design of thermal transport in mechanically stretched graphene flakes. ACS Nano, 2021, 15(10): 16597

[158]

X. Zhang , A. Chen , Z. Zhou . High-throughput computational screening of layered and two-dimensional materials. Wiley Interdiscip. Rev. Comput. Mol. Sci., 2019, 9(1): e1385

[159]

V. Wang , G. Tang , Y. C. Liu , R. T. Wang , H. Mizuseki , Y. Kawazoe , J. Nara , W. T. Geng . High-throughput computational screening of two-dimensional semiconductors. J. Phys. Chem. Lett., 2022, 13(50): 11581

[160]

S. Sarikurt , T. Kocabaş , C. Sevik . High-throughput computational screening of 2D materials for thermoelectrics. J. Mater. Chem. A, 2020, 8(37): 19674

[161]

E. O. Pyzer-Knapp , C. Suh , R. Gómez-Bombarelli , J. Aguilera-Iparraguirre , A. Aspuru-Guzik . What is high-throughput virtual screening? A perspective from organic materials discovery. Annu. Rev. Mater. Res., 2015, 45(1): 195

[162]

X. Y. Ma , J. P. Lewis , Q. B. Yan , G. Su . Accelerated discovery of two-dimensional optoelectronic octahedral oxyhalides via high-throughput ab initio calculations and machine learning. J. Phys. Chem. Lett., 2019, 10(21): 6734

[163]

C. G. Van de Walle , J. Neugebauer . First-principles calculations for defects and impurities: Applications to III-nitrides. J. Appl. Phys., 2004, 95(8): 3851

[164]

B. K. Shoichet . Virtual screening of chemical libraries. Nature, 2004, 432(7019): 862

[165]

S. Ghosh , A. Nie , J. An , Z. Huang . Structure-based virtual screening of chemical libraries for drug discovery. Curr. Opin. Chem. Biol., 2006, 10(3): 194

[166]

M. Foscato , G. Occhipinti , V. Venkatraman , B. K. Alsberg , V. R. Jensen . Automated design of realistic organometallic molecules from fragments. J. Chem. Inf. Model., 2014, 54(3): 767

[167]

H. Mauser , M. Stahl . Chemical fragment spaces for de novo design. J. Chem. Inf. Model., 2007, 47(2): 318

[168]

G. R. Schleder , A. C. Padilha , C. M. Acosta , M. Costa , A. Fazzio . From DFT to machine learning: recent approaches to materials science – A review. J. Phys.: Mater., 2019, 2(3): 032001

[169]

Y. Dong , D. Li , C. Zhang , C. Wu , H. Wang , M. Xin , J. Cheng , J. Lin . Inverse design of two-dimensional graphene/h-BN hybrids by a regressional and conditional GAN. Carbon, 2020, 169: 9

[170]

V. Fung , J. Zhang , G. Hu , P. Ganesh , B. G. Sumpter . Inverse design of two-dimensional materials with invertible neural networks. npj Comput. Mater., 2021, 7(1): 200

[171]

S. Wu , Z. Wang , H. Zhang , J. Cai , J. Li . Deep learning accelerates the discovery of two-dimensional catalysts for hydrogen evolution reaction. Energy & Environm. Mater., 2023, 6(1): e12259

[172]

S. S. Chong , Y. S. Ng , H. Q. Wang , J. C. Zheng . Advances of machine learning in materials science: Ideas and techniques. Front. Phys., 2024, 19(1): 13501

[173]

B. Ryu , L. Wang , H. Pu , M. K. Y. Chan , J. Chen . Understanding, discovery, and synthesis of 2D materials enabled by machine learning. Chem. Soc. Rev., 2022, 51(6): 1899

[174]

H. Yin , Z. Sun , Z. Wang , D. Tang , C. H. Pang , X. Yu , A. S. Barnard , H. Zhao , Z. Yin . The data-intensive scientific revolution occurring where two-dimensional materials meet machine learning. Cell Rep. Phys. Sci., 2021, 2(7): 100482

[175]

Z. Si , D. Zhou , J. Yang , X. Lin . 2D material property characterizations by machine-learning-assisted microscopies. Appl. Phys. A, 2023, 129(4): 248
