Graph attention network for global search of atomic clusters: A case study of Agn (n = 14−26) clusters

Linwei Sai, Li Fu, Qiuying Du, Jijun Zhao

Front. Phys. ›› 2023, Vol. 18 ›› Issue (1): 13306. DOI: 10.1007/s11467-022-1219-5

RESEARCH ARTICLE

Abstract

Due to the coexistence of a huge number of structural isomers, the global search for ground-state structures of atomic clusters is a challenging task. The difficulty also originates from the computational cost of the ab initio methods used to describe the potential energy surface. Recently, machine learning techniques have been widely utilized to accelerate materials discovery and molecular simulation. Compared to the commonly used artificial neural networks, graph networks are naturally suitable for clusters, where each atom has a flexible geometric environment. Herein, we develop a cluster graph attention network (CGANet) that aggregates the information of neighboring vertices and edges using an attention mechanism, and can precisely predict the binding energy and forces of silver clusters with a root mean square error of 5.4 meV/atom and a mean absolute error of 42.3 meV/Å, respectively. As a proof of concept, we have performed global optimization of medium-sized Agn clusters (n = 14−26) by combining CGANet with a genetic algorithm. The reported ground-state structures for n = 14−21 have been successfully reproduced, while entirely new lowest-energy structures are obtained for n = 22−26. Beyond describing the potential energy surface, CGANet is also applied to predict electronic properties of clusters, such as the HOMO energy and the HOMO−LUMO gap. With accuracy comparable to ab initio methods and acceleration by at least two orders of magnitude, CGANet holds great promise for the global search of lowest-energy structures of large clusters and the inverse design of functional clusters.


Keywords

deep learning / graph attention network / potential surface fitting / Ag clusters / global search

Cite this article

Linwei Sai, Li Fu, Qiuying Du, Jijun Zhao. Graph attention network for global search of atomic clusters: A case study of Agn (n = 14−26) clusters. Front. Phys., 2023, 18(1): 13306 https://doi.org/10.1007/s11467-022-1219-5

1 Introduction

Computer simulation has changed the traditional paradigm of painstaking “trial and error” experiments and greatly accelerates the development of molecules and materials; it usually relies, however, on high-throughput ab initio calculations with considerable computational cost. Machine learning techniques, especially deep learning (DL), have been introduced to further improve the efficiency and applicability of computational chemistry and computational materials science [1-3]. Inspired by the remarkable progress in computer vision and natural language processing, DL has been successfully used for transition state searching [4], chemical reactivity prediction [5, 6], thermal property prediction [7], catalyst design [8], spectroscopic prediction [9], solution of the many-electron Schrödinger equation [10], magnetic field estimation [11], molecular odor prediction [12], and so on.
Among various DL networks, graph neural networks [13], which have been widely used in social networks, the internet of things, knowledge graphs, and recommender systems, are particularly suitable for clusters and molecules. Compared to the commonly used artificial neural networks (ANNs) [14-21] that treat each atom independently, a graph network is a natural choice for describing the geometric characteristics of a cluster or molecule, since it contains the information of interconnections between atoms [22-32]. Moreover, graph networks are less sensitive to the choice of atomic descriptors. To date, several graph neural networks, namely, SchNet [23], CGCNN [26], MEGNet [28], MGCN [29], DimNet [30], DGANN [31] and DeepMoleNet [32], have been developed for molecules and crystals. For instance, SchNet [23] introduces continuous-filter convolutions (cfconv) and utilizes element types and atomic coordinates as input. DimNet [30] uses spherical Bessel functions and spherical harmonics to replace the Gaussian radial basis representations. CGCNN [26] allows multiple edges in the crystal graph due to periodicity and can predict eight different properties of crystals. DGANN [31] does not require a complete graph and directly uses the Cartesian coordinates instead of the interatomic distances as input.
Compared to molecules with well-defined geometries, the structure of an atomic cluster evolves with size (i.e., the number of atoms), and searching for the ground-state structure requires global exploration of the potential energy surface (PES). Due to the coexistence of numerous isomers on the PES, DL techniques, including graph networks, have rarely been applied in cluster science so far. For a cluster, not only can the tiny differences between similar isomers confuse the neural network, but the flexible bonding configurations also complicate the construction of the network.
Different from many other fields that need node information only, the application of graph networks to molecules, clusters, or crystals must consider the information of both atoms and bonds. The attention mechanism was proposed for machine translation by Bengio, one of the three leading authorities in deep learning, and his coworkers [33]. The effect of neighboring bonds on a center atom is well suited to being modeled by an attention mechanism. Qian et al. [31] devised a directed graph attention neural network (DGANN) that encodes the local chemical environment by a graph attention mechanism. The DeepMoleNet [32] developed by Ma’s group adopts multilevel attention and can precisely predict 12 molecular properties integrated in one net. However, graph networks with attention mechanisms have not yet been adopted for the structural optimization of clusters.
In this paper, we develop a cluster graph attention network (CGANet) to reproduce the binding energies, forces, and molecular orbital energies of silver clusters from density functional theory (DFT) calculations. Combined with a home-made genetic algorithm code, we have performed a global search for the ground-state structures of Agn clusters with n = 14−26 and unveiled several unprecedented lowest-energy structures. This graph attention network, which is at least two orders of magnitude faster than DFT calculations, is a universal approach for the structural optimization and property prediction of atomic clusters.

2 Network structure

2.1 Feature extraction

For an atomic cluster or molecule, the element types, bond lengths and bond angles are the essential information that determines the physical and chemical properties of the system. We first extract this information from the element types and the distance matrix as input. The element information should be transformed into a vector by one-hot encoding or an embedding layer. For simplicity, herein we consider elemental clusters of silver atoms; thus, we do not have to distinguish the element type, and the initial feature of each atom is a one-dimensional vector of constant 1. The Cartesian coordinates of the atoms are not directly utilized as network input. Instead, we extract the structural information from the Cartesian coordinates and feed it into the network. The key operations of CGANet are schematically shown in Fig.1. We start from the distance matrix $D$ because it is invariant under translation and rotation. For each pair of atoms $i$ and $j$, the distance $d_{ij}$ in the $D$ matrix is transformed into a 36-dimensional vector $e_{ij}^{0}$ by a Gaussian function [14]:
Fig.1 Feature extraction and graph convolution operations.


$$e_{ij,k}^{0} = \exp\left[-\xi_k \left(d_{ij} - \beta_k\right)^{2}\right], \quad k = 1, 2, \ldots, 36.$$

Here $\xi_k$ and $\beta_k$ are parameters to be trained. The bond angle $\theta_{ijk}$ can be simply calculated from $d_{ij}$, $d_{ik}$ and $d_{jk}$ using the cosine law. Then we introduce a 24-dimensional angle vector $a_{ijk}^{0}$ in a similar way:

$$a_{ijk,l}^{0} = \exp\left[-\eta_l \left(\theta_{ijk} - \alpha_l\right)^{2}\right], \quad l = 1, 2, \ldots, 24,$$

where $\eta_l$ and $\alpha_l$ are parameters to be trained.
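To make this featurization concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' released code). The 36 radial and 24 angular Gaussians follow the text; the initial values of $\xi_k$, $\beta_k$, $\eta_l$, $\alpha_l$ and the 0−6 Å span of the distance centers are assumptions.

```python
import math
import torch
import torch.nn as nn

class GaussianExpansion(nn.Module):
    """Expand a scalar (distance or angle) into trainable Gaussian features
    g_k(x) = exp(-xi_k * (x - beta_k)**2), as in the two equations above."""
    def __init__(self, n_basis, x_max):
        super().__init__()
        self.xi = nn.Parameter(torch.ones(n_basis))                    # widths (init assumed)
        self.beta = nn.Parameter(torch.linspace(0.0, x_max, n_basis))  # centers (init assumed)

    def forward(self, x):
        return torch.exp(-self.xi * (x.unsqueeze(-1) - self.beta) ** 2)

def initial_features(coords, radial, angular, eps=1e-9):
    """coords: (N, 3) Cartesian coordinates of one cluster.
    Returns pair features e_ij^0, (N, N, 36), and angle features a_ijk^0, (N, N, N, 24)."""
    d = torch.cdist(coords, coords)   # distance matrix D, invariant to rotation/translation
    # Bond angle theta_ijk at atom i via the cosine law:
    # cos(theta) = (d_ij^2 + d_ik^2 - d_jk^2) / (2 d_ij d_ik)
    dij, dik, djk = d.unsqueeze(2), d.unsqueeze(1), d.unsqueeze(0)
    cos_t = (dij ** 2 + dik ** 2 - djk ** 2) / (2 * dij * dik + eps)
    theta = torch.acos(cos_t.clamp(-1.0, 1.0))
    return radial(d), angular(theta)

radial = GaussianExpansion(36, x_max=6.0)       # 36 distance Gaussians; 6 A span assumed
angular = GaussianExpansion(24, x_max=math.pi)  # 24 angle Gaussians over [0, pi]
e0, a0 = initial_features(torch.rand(20, 3) * 5.0, radial, angular)
```

Because only the distance matrix enters, the features inherit translation and rotation invariance automatically.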

2.2 Vertex convolutional layer

The classical graph convolutional operation is expressed as follows [34]:
$$X^{l+1} = \sigma\left(A X^{l} W\right),$$
where $X^{l}$ is the output of layer $l$, $A$ is the adjacency matrix of the graph, and $W$ is the weight matrix of this layer with size $n_l \times n_{l+1}$. The product $X^{l} W$ transforms the features from dimension $n_l$ to dimension $n_{l+1}$. Left-multiplying by $A$ sums the features of all neighbors onto the center atom. Finally, an activation function $\sigma$ yields the output of layer $l+1$. However, this graph convolutional operation has two deficiencies. First, different neighboring atoms have different effects on the center atom; therefore, simply summing them is unreasonable. The second issue is how to properly aggregate the feature of the center atom itself. Hence, we introduce an improved graph attention layer. Let $V^{l}$ be the matrix of atomic features at layer $l$, and let $h = V^{l} W$ be a new feature matrix of another dimension. Then, we aggregate the neighbor features with different weights, which are calculated from the center feature $h_i$, the neighbor feature $h_j$ and the vector $d_{ij}$ [35]:
$$\alpha_{ij} = \mathrm{Softmax}\left(\mathrm{LeakyReLU}\left(\boldsymbol{\alpha}^{\mathrm{T}}\left[h_i^{l+1} \,\|\, h_j^{l+1} \,\|\, d_{ij}\right]\right)\right).$$
Here Softmax and LeakyReLU are activation functions, $\alpha_{ij}$ is the attention coefficient, and $\boldsymbol{\alpha}$ is a vector to be trained. The Softmax function normalizes the attention coefficients. For each center atom $i$, the features of the neighboring atoms $j$ are aggregated by the following formula:
$$v_i^{l+1} = \mathrm{Elu}\left(h_i + \beta \sum_{j \in N(i)} \alpha_{ij} h_j\right).$$
The aggregation includes two parts: the feature of atom $i$ itself, and the features of all neighboring atoms $j$ scaled by the attention coefficients $\alpha_{ij}$ and weighted by the coefficient $\beta$. Finally, the aggregation passes through an Elu activation function to yield the output feature $V^{l+1}$. The attention mechanism is permutation invariant because the attention coefficient $\alpha_{ij}$ permutes together with atom $j$.
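A dense-graph PyTorch sketch of this attention aggregation is given below. Letting every atom attend to all others (standing in for the neighbor set $N(i)$), masking the diagonal so an atom does not attend to itself, and treating $\beta$ as a trainable scalar are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VertexAttentionLayer(nn.Module):
    """Vertex convolution with attention, following the two equations above.
    Dense-graph version: every atom attends to every other atom."""
    def __init__(self, n_in, n_out, n_pair):
        super().__init__()
        self.W = nn.Linear(n_in, n_out, bias=False)            # h = V^l W
        self.a = nn.Linear(2 * n_out + n_pair, 1, bias=False)  # attention vector alpha^T
        self.beta = nn.Parameter(torch.tensor(1.0))            # scalar beta (assumed trainable)

    def forward(self, v, d_feat):
        # v: (N, n_in) vertex features; d_feat: (N, N, n_pair) pair vectors d_ij
        n = v.size(0)
        h = self.W(v)                                          # (N, n_out)
        hi = h.unsqueeze(1).expand(n, n, -1)                   # h_i at pair (i, j)
        hj = h.unsqueeze(0).expand(n, n, -1)                   # h_j at pair (i, j)
        logits = F.leaky_relu(self.a(torch.cat([hi, hj, d_feat], dim=-1))).squeeze(-1)
        logits = logits.masked_fill(torch.eye(n, dtype=torch.bool, device=v.device), float('-inf'))
        alpha = F.softmax(logits, dim=1)                       # normalize over neighbors j
        return F.elu(h + self.beta * alpha @ h)                # v_i^{l+1}
```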

2.3 Edge convolutional layer

The edge feature of layer $l+1$ is obtained from the vertex features at its two ends and the edge feature itself in the previous layer $l$:
$$e_{ij}^{l+1} = \mathrm{Elu}\left(\left[e_{ij}^{l} \,\|\, v_i^{l} \,\|\, v_j^{l}\right] w\right).$$
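In code, this edge update is a single linear map over the concatenated features; a minimal sketch in the same style as above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeConvLayer(nn.Module):
    """Edge update: e_ij^{l+1} = Elu([e_ij^l || v_i^l || v_j^l] w)."""
    def __init__(self, n_edge_in, n_vert, n_edge_out):
        super().__init__()
        self.w = nn.Linear(n_edge_in + 2 * n_vert, n_edge_out, bias=False)

    def forward(self, e, v):
        # e: (N, N, n_edge_in) edge features; v: (N, n_vert) vertex features
        n = v.size(0)
        vi = v.unsqueeze(1).expand(n, n, -1)   # v_i at pair (i, j)
        vj = v.unsqueeze(0).expand(n, n, -1)   # v_j at pair (i, j)
        return F.elu(self.w(torch.cat([e, vi, vj], dim=-1)))
```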

2.4 Angle convolutional layer

Similar to the vertex convolutional layer, the new feature of edge $e_{ij}$ is aggregated from the edge $e_{ij}$ itself and all angle features $a_{ijk}$ using the attention mechanism:
$$h = e^{l} w,$$
$$\alpha_{ijk} = \mathrm{Softmax}\left(\mathrm{LeakyReLU}\left(\boldsymbol{\alpha}^{\mathrm{T}}\left[h_{ij} \,\|\, h_{ik} \,\|\, a_{ijk}^{l}\right]\right)\right),$$
$$e_{ij}^{l+1} = \mathrm{Elu}\left(h_{ij} + \beta \sum_{k \in N(i)} \alpha_{ijk} h_{ik}\right).$$
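A sketch of this angle-level attention follows the same pattern as the vertex layer, now with edge $e_{ij}$ attending to sibling edges $e_{ik}$ that share atom $i$; the dense (N, N, N) angle tensor is affordable for clusters of a few dozen atoms. The trainable scalar $\beta$ is again our assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngleAttentionLayer(nn.Module):
    """Angle convolution: edge e_ij attends to sibling edges e_ik sharing
    atom i, weighted through the angle features a_ijk."""
    def __init__(self, n_edge_in, n_edge_out, n_angle):
        super().__init__()
        self.w = nn.Linear(n_edge_in, n_edge_out, bias=False)        # h = e^l w
        self.a = nn.Linear(2 * n_edge_out + n_angle, 1, bias=False)  # alpha^T
        self.beta = nn.Parameter(torch.tensor(1.0))                  # assumed trainable scalar

    def forward(self, e, ang):
        # e: (N, N, n_edge_in) edge features; ang: (N, N, N, n_angle) angle features a_ijk
        n = e.size(0)
        h = self.w(e)                                  # (N, N, n_edge_out)
        hij = h.unsqueeze(2).expand(n, n, n, -1)       # h_ij at triple (i, j, k)
        hik = h.unsqueeze(1).expand(n, n, n, -1)       # h_ik at triple (i, j, k)
        logits = F.leaky_relu(self.a(torch.cat([hij, hik, ang], dim=-1))).squeeze(-1)
        alpha = F.softmax(logits, dim=2)               # normalize over k
        agg = torch.einsum('ijk,ikf->ijf', alpha, h)   # sum_k alpha_ijk h_ik
        return F.elu(h + self.beta * agg)              # e_ij^{l+1}
```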

2.5 Force convolutional layer

We combine the feature vectors $v_i$, $v_j$ and $e_{ij}$, and map them to a single value, the combination coefficient $\alpha_{ij}$. The force on atom $i$ is then expressed as a linear combination of the vectors $r_{ij}$ pointing from atom $i$ to atom $j$:
$$\alpha_{ij} = \boldsymbol{\alpha}^{\mathrm{T}}\left[h_i \,\|\, h_j \,\|\, e_{ij}\right], \qquad f_i = \sum_j \alpha_{ij}\, r'_{ij}.$$
Here $r'_{ij}$ is the unit vector along $r_{ij}$.
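A minimal sketch of this force output layer is given below. Writing the force as coefficients times unit vectors makes the prediction rotationally equivariant, since $\alpha_{ij}$ is invariant while $r'_{ij}$ rotates with the cluster; the small epsilon guarding the zero-length diagonal vectors is an implementation assumption.

```python
import torch
import torch.nn as nn

class ForceLayer(nn.Module):
    """Force output: alpha_ij = alpha^T [h_i || h_j || e_ij], f_i = sum_j alpha_ij r'_ij."""
    def __init__(self, n_vert, n_edge):
        super().__init__()
        self.a = nn.Linear(2 * n_vert + n_edge, 1, bias=False)

    def forward(self, v, e, coords):
        # v: (N, n_vert), e: (N, N, n_edge), coords: (N, 3)
        n = v.size(0)
        hi = v.unsqueeze(1).expand(n, n, -1)
        hj = v.unsqueeze(0).expand(n, n, -1)
        alpha = self.a(torch.cat([hi, hj, e], dim=-1)).squeeze(-1)  # (N, N)
        rij = coords.unsqueeze(0) - coords.unsqueeze(1)             # r_ij = r_j - r_i
        unit = rij / (rij.norm(dim=-1, keepdim=True) + 1e-9)        # r'_ij; eps guards i = j
        return torch.einsum('ij,ijd->id', alpha, unit)              # forces f_i, (N, 3)
```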

2.6 Energy prediction network

As shown in Fig.2(a), the energy prediction network takes the distance matrix $D$ as input. We use an edge convolutional layer to integrate the element information (embedding layer) with the distance features extracted from the distance matrix $D$. Then we integrate the edge information and angle information in an angle convolutional layer. Next, the output of the angle convolutional layer goes through batch normalization and combines with the element information to enter a vertex convolutional layer. The resulting vertex information flows through a batch normalization layer and combines with the previous edge information in a new edge convolutional layer. In the dense layers, the obtained vertex and edge features are mapped to single values $E_v$ and $E_e$, respectively. In the final global average pooling layer, acting as the readout function, the average binding energy per atom is predicted by
Fig.2 (a) Energy prediction network and (b) force prediction network.


$$E = \left(1 + \frac{c_1}{N} + \frac{c_2}{N^2}\right)\bar{E}_v + \left(1 + \frac{c_3}{N} + \frac{c_4}{N^2}\right)\bar{E}_e.$$
In fact, the binding energy of a cluster usually increases and gradually approaches the bulk limit as the size $N$ increases, a trend similar to the curve $1 + c_1/N + c_2/N^2$. Here we introduce four empirical parameters $c_1$, $c_2$, $c_3$ and $c_4$ to reflect the overall size dependence of the binding energy, where $N$ is the cluster size. By definition, a positive binding energy means that formation of a cluster from individual atoms is exothermic. The trained parameters $c_1$−$c_4$ are all negative, which successfully reflects the ascending tendency of the binding energy as the cluster size increases. Our test calculations show that including these size-dependence parameters is beneficial for mixed-size training.
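A sketch of this readout, with the two dense maps and the size-dependent prefactors as trainable parameters, might look as follows (the zero initialization of $c_1$−$c_4$ is an assumption; the text only states that the trained values come out negative):

```python
import torch
import torch.nn as nn

class EnergyReadout(nn.Module):
    """Readout: per-atom and per-edge energies, globally averaged and scaled
    by the size-dependent prefactors (1 + c1/N + c2/N^2) and (1 + c3/N + c4/N^2)."""
    def __init__(self, n_vert, n_edge):
        super().__init__()
        self.dense_v = nn.Linear(n_vert, 1)    # maps a vertex feature to E_v
        self.dense_e = nn.Linear(n_edge, 1)    # maps an edge feature to E_e
        self.c = nn.Parameter(torch.zeros(4))  # c1..c4; zero init is an assumption

    def forward(self, v, e):
        n = v.size(0)                          # cluster size N
        ev = self.dense_v(v).mean()            # global average pool over atoms
        ee = self.dense_e(e).mean()            # global average pool over edges
        c1, c2, c3, c4 = self.c
        return (1 + c1 / n + c2 / n ** 2) * ev + (1 + c3 / n + c4 / n ** 2) * ee
```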

2.7 Force prediction network

In a previous study, an energy-conserving force model was obtained by differentiating the energy model with respect to the atomic positions [23]. Because force is the derivative of energy, such a model has to be at least twice differentiable to allow gradient descent on the force loss; the resulting force model is twice as deep and hence requires about twice the computational time. Therefore, the energy and force networks are trained separately in this work, which avoids the cost of the differential operation. The force prediction network is depicted in Fig.2(b). It generally resembles the energy prediction network, with slight differences: both networks share the same architecture up to the vertex convolutional layer, after which the last edge convolutional layer is replaced by a second vertex convolutional layer. After a batch normalization layer, the force convolutional layer is connected as the output. Moreover, the good performance of these similar CGANet architectures for energy, force and property prediction demonstrates the robustness of CGANet.
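For context, the energy-conserving alternative mentioned above obtains forces by differentiating an energy model; a minimal autograd sketch (with energy_model as a generic placeholder, not the authors' code) shows where the extra cost comes from:

```python
import torch

def forces_from_energy(energy_model, coords):
    """Energy-conserving forces F = -dE/dr via automatic differentiation.
    energy_model: any differentiable map from (N, 3) coordinates to a scalar energy."""
    coords = coords.clone().requires_grad_(True)
    energy = energy_model(coords)
    # create_graph=True retains the graph so that a loss on the forces can
    # itself be backpropagated; this second-order graph is the extra cost
    # avoided here by training a separate force network.
    grad, = torch.autograd.grad(energy, coords, create_graph=True)
    return -grad
```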

3 Train and test

3.1 Datasets

To demonstrate how to implement the CGANet, we consider medium-sized Agn clusters (n = 15−27) to construct the datasets. First, a huge number of structures for Ag16, Ag20 and Ag24 clusters are obtained from an unbiased search on the cluster PES using a home-developed genetic algorithm program (namely CGA [36]). For each cluster, we conduct five independent CGA searches of up to 1000 iterations with symmetry constraints of the C1, C2, C3, Cs and Ci point groups, respectively. All structures in the CGA search are fully optimized with DFT, as implemented in the DMol3 program [37], which utilizes the double numerical basis including d-polarization functions (DND) with a global orbital cutoff of 6.0 Å (see Fig. S1 of the Supporting Information), the Perdew−Burke−Ernzerhof (PBE) functional for the exchange-correlation interaction [38], and an all-electron relativistic potential to treat the core. All intermediate structures during geometry optimization are collected to constitute the training set, unless the binding energy of the structure is negative (meaning that the structure is too unstable). Finally, the total dataset sizes for Ag16, Ag20 and Ag24 are 166 689, 136 165 and 145 014 structures, respectively. We have also collected the isomeric structures of Agn clusters (n = 15, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27) reported in the literature [21, 39-45] and re-optimized them using the same PBE-DND scheme. Again using the intermediate structures during geometry optimization, the resulting dataset contains 1825 cluster structures and serves as the test dataset for mixed-size training.

3.2 Train

First, we train the CGANet on the datasets of Ag16, Ag20 and Ag24 individually. For each cluster size, the data are split into training and test sets at a ratio of 9:1. The loss function of the force prediction net is the mean absolute error (MAE) of the force, while the loss function of the energy prediction net is the root mean square error (RMSE) of the energy, since the RMSE is larger than the MAE by definition and more sensitive to deviations between the neural network predictions and the DFT data. For comparison, we have trained different networks for energy prediction on the Ag20 dataset. An ANN with two fully connected layers after the bond and angle feature extraction layers has an RMSE larger than 20 meV, and a graph convolutional network (GCN) also leads to a test error larger than 20 meV. When we replace the graph convolutional layer by a graph attention layer, the RMSE immediately decreases to 13 meV. We then improve the graph attention network through a series of tests. We introduce the edge convolutional layer and angle convolutional layer to handle the original bond and angle information. Batch normalization layers are used to enhance the generalization ability and accelerate the training process. After testing various activation functions, the attention values are obtained with the LeakyReLU function, the outputs of the attention layers go through the Elu function, and the activation function after the fully connected layer is chosen to be LeakyReLU. The numbers of initial bond and angle features are 36 and 24, respectively. The numbers of features of the first bond convolutional layer, the angle convolutional layer, the vertex convolutional layer and the second bond convolutional layer are 18, 26, 30 and 32, respectively. It should be noted that larger values of these hyperparameters are not necessarily better. Finally, the test error of the energy prediction net decreases to 6.8 meV and the architecture of CGANet is fixed. Using the same CGANet framework, the test losses of energy for Ag16 and Ag24 also approach about 6 meV/atom [Fig.3(a)] after training for 100−150 epochs. Similarly, the force prediction network is constructed, and the test losses of force are 42.3, 50.1 and 45.2 meV/Å for n = 16, 20 and 24, respectively [Fig.3(b)].
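The two loss functions are straightforward; a minimal sketch (the split utility and batching details are assumptions, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def energy_loss(pred, target):
    """RMSE, the loss used for the energy prediction net."""
    return torch.sqrt(F.mse_loss(pred, target))

def force_loss(pred, target):
    """MAE, the loss used for the force prediction net."""
    return F.l1_loss(pred, target)

# A 9:1 split of a dataset, as described above (assumed utility):
# train_set, test_set = torch.utils.data.random_split(dataset, [0.9, 0.1])
```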
Fig.3 Root mean square error of energy (left) and mean absolute error of force (right) for Ag16, Ag20 and Ag24.


It is crucial for a net to be transferable across sizes. Therefore, we combine the datasets of Ag16, Ag20 and Ag24 as the training set and use the datasets of the other sizes (n = 15−27, excluding 16, 20, 24) as the test set for mixed-size training. In this case, the test error of energy is 11.7 meV/atom (Fig.4), which is larger than those from the single-size trainings but still lower than those of ANI (26.0 meV) [16] and SchNet (23.8 meV) [23]. In contrast, the test loss of force in the mixed-size training does not increase compared to the single-size trainings, but slightly decreases to 41.7 meV/Å (Fig.4). This value is comparable to the reported RMSE values of 39 meV/Å for Cu−Zn binary clusters by HDNNP [46] and 43.4 meV/Å for organic molecules by GDML [47]. Overall, the test errors of energy and force from mixed-size training are satisfactory, and the trained CGANet should be able to describe the PES of Agn clusters of different sizes.
Fig.4 Root mean square error (RMSE) of energy (upper) and mean absolute error (MAE) of force (lower) for mixed-size training on Ag16, Ag20 and Ag24.


3.3 Structural prediction

Using the CGANet from mixed-size training, we have performed global optimization of Agn clusters in the size range n = 14−26 within the framework of a genetic algorithm, as implemented in our CGA code. Briefly, 128 configurations are generated from scratch as the initial population for each cluster size. Then, any two individuals in this population are chosen as parents to produce a child cluster via a “cut and splice” crossover operation, followed by an optional mutation operation with 35% probability. Each child cluster from a CGA iteration is locally relaxed by a combined technique of a quick molecular dynamics simulation of 20 steps at 300 K and numerical minimization with the BFGS algorithm, as sketched below. Our test calculations demonstrate that this technique is rather efficient at avoiding entrapment in shallow local minima on the PES. For each cluster size, the independent CGA search based on CGANet lasts for about 1000 iterations.
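A sketch of this local relaxation step is given below, with energy_fn and force_fn as placeholders wrapping the trained CGANet; the time step, unit-mass integration and velocity initialization are our assumptions, chosen only to illustrate the MD-then-BFGS combination.

```python
import numpy as np
from scipy.optimize import minimize

def local_relax(coords, energy_fn, force_fn, md_steps=20, dt=1.0):
    """Local relaxation used in each CGA iteration (a sketch): a short MD run
    to hop out of shallow minima, then BFGS on the model PES.
    energy_fn: (N, 3) -> float; force_fn: (N, 3) -> (N, 3)."""
    x = coords.copy()
    v = np.random.normal(0.0, 0.05, x.shape)   # crude ~300 K velocity init (assumption)
    for _ in range(md_steps):                  # velocity-Verlet integration, unit mass
        f = force_fn(x)
        v += 0.5 * dt * f
        x += dt * v
        v += 0.5 * dt * force_fn(x)
    res = minimize(lambda r: energy_fn(r.reshape(-1, 3)), x.ravel(),
                   method='BFGS',
                   jac=lambda r: -force_fn(r.reshape(-1, 3)).ravel())
    return res.x.reshape(-1, 3), res.fun       # relaxed structure and its energy
```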
The global minimum structures of Agn (n = 14−26) are given in Fig.5. To double-check our findings, we have re-performed structural relaxation using the Gaussian16 suite [48], with the meta-GGA TPSS exchange-correlation functional [49] and the SDD basis set. As demonstrated by our test calculations on spin multiplicity (Supporting Information, Fig. S2), we choose the lowest possible spin states for all Agn clusters, i.e., singlet for even-sized clusters and doublet for odd-sized ones. As presented in Table S1 of the Supporting Information, our test calculations demonstrate that the TPSS/SDD scheme describes the binding energy, bond length, ionization potential, and vibrational frequency of the Ag2 dimer well. First of all, we have reproduced the reported ground-state structures of silver clusters at n = 14−21 from previous DFT studies [39-45]. The ground-state structure of Ag14 is prolate with C2 symmetry, in agreement with previous works [39, 41, 43]. The ground-state structure of Ag15 is similar to Ag14 but with D3 symmetry, and it can be viewed as the same structure reported by Tian et al. [39]. We find that the lowest-energy configuration of Ag16 also takes a prolate shape with D2 symmetry. This structure is the same as Baishya’s result [41] but different from the other reports [39, 40, 42-44]. The lowest-energy configuration of Ag17 is identical to that reported by Baishya et al. [41], which can be obtained by adding 4 atoms on the waist of the Ag13 icosahedron. The present lowest-energy structure of Ag18 takes a distorted star shape with Cs symmetry, and it has been reported by Baishya and McKee [44]. For Ag19, we find a C3 structure that coincides with Yin and Du’s structure for the Ag19 anion [45]. It is 0.13 eV and 0.11 eV lower than the structures reported by Chen [42] and Baishya [41], respectively, in DMol3 calculations. However, at the TPSS/SDD level, the structure reported by Chen [42] is more energetically favorable. Similar to Ag19, our lowest-energy structure of Ag20 is a twisted tetrahedron with C3 symmetry, which has been reported by both Du [45] and McKee [44]. The ground-state structure of Ag21 found by us also takes a twisted tetrahedral configuration and can be considered the same as that in previous studies [39, 45].
Fig.5 Lowest-energy structures of Agn clusters (14 ≤ n ≤ 26). The point group symmetry is given in parentheses. Red color highlights the newly found lowest-energy structures, while black color denotes those that are the same as in the literature [21, 39-45].


More excitingly, unprecedented lowest-energy structures of Agn clusters at n = 22−26 are obtained from our CGA search with CGANet. For n = 22−26, the lowest-energy structures from the present work and those reported in recent studies [21, 39-45] are compared in Fig. S3 of the Supporting Information. The newly found structures are energetically favored over those in the literature by 43−960 meV and 70−702 meV at the PBE/DND and TPSS/SDD levels of theory, respectively. The ground-state structure of Ag22 found here has C3 symmetry, which is different from all previously reported structures but similar to the Ag23 structure reported by Chen [42]. It is also 0.35 eV lower than McKee’s structure [44] at the TPSS/SDD level. All the lowest-energy structures of Ag23−25 found by us are three-layer frustums with a bottom face of 14 Ag atoms. We compare them with McKee’s structures [44] using TPSS/SDD calculations, and all the present structures are 0.2−1.2 eV lower in energy. Our lowest-energy structure of Ag26 adopts the configuration of a triangular bipyramid with 4 atoms removed from the two vertices and the waist. This structure is 0.43 eV lower than the structure reported by Chen [42]. Based on the lowest-energy structures of Agn (n = 14−26), the vertical ionization potentials are calculated and compared with experimental data [50] in Fig. S4 of the Supporting Information. The satisfactory agreement demonstrates the validity of our theoretical scheme.
Compared to conventional DFT calculations using DMol3, CGANet is faster by at least two orders of magnitude with comparable accuracy (11.7 meV/atom RMSE in energy). Combined with a genetic algorithm or other global minimization approaches, it provides a robust description of the PES and allows a more efficient search for ground-state structures. Our further investigation of larger Agn clusters up to n = 60 using CGANet is underway, and the results will be reported elsewhere.

3.4 Properties prediction

In addition to energy and force, it is desirable to predict the physical or chemical properties of a cluster using the developed graph neural network. As key electronic properties, here we consider the gap between the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO), as well as the HOMO energy (whose negative value approximates the vertical ionization energy according to Koopmans’ theorem [51]). For either the HOMO−LUMO gap or the HOMO energy, the CGANet is the same as that for energy prediction described above, with the objective function changed from energy to the desired property. Taking Ag20 as a representative, we train for the HOMO energy and the HOMO−LUMO gap individually, and the results are presented in Fig.6. Most intermediate structures during geometry optimization are kept in the dataset, while about 18% of them with unreasonable binding energies are discarded. After 90 epochs of training, the MAE of the HOMO−LUMO gap for the test set reaches 45.8 meV (Fig.6), while the HOMO−LUMO gap values range from 0.002 to 1.703 eV. For comparison, the MAE for the HOMO−LUMO gap on the QM9 dataset is 32.1 meV for DeepMoleNet and 34.8 meV for DimNet, while those of other models range from 56 to 69 meV. On the other hand, the MAE of the HOMO energy for the test set is 50.0 meV, which is only about 1% of the absolute HOMO value (−4.217 eV on average). It is worth mentioning that the present CGANet is not limited to orbital energies, but can be extended to describe other properties such as polarizability, spectral characteristics, NMR chemical shifts, magnetic moments, magnetic anisotropy energies, adsorption energies and chemical reactivity. The capability of directly predicting chemical or physical properties from the atomic structure is essential for the rapid screening of functional clusters or molecules.
Fig.6 Mean absolute error of HOMO energy (lower) and HOMO−LUMO gap (upper) trained for Ag20.


4 Conclusion

To summarize, a graph network for atomic clusters, namely CGANet, is built from aggregation layers for vertex, edge, bond angle and force information using the attention mechanism. It takes only element types and coordinates as input and is able to predict binding energies and forces with satisfactory precision. Combined with a genetic algorithm code for global search, CGANet can reproduce the DFT potential energy surface of Agn clusters with hundredfold acceleration. Although our training set only includes data at the selected sizes n = 16, 20 and 24, the network is successful over the extended size range n = 14−26. In particular, a number of unprecedented lowest-energy structures have been obtained for Agn clusters at n = 22−26. Furthermore, CGANet can predict the HOMO energy and the HOMO−LUMO gap with MAEs of about 50 meV, showing its capability of describing other physical and chemical properties. Although the specific construction of the vertex, edge and bond angle aggregation layers and the generation of the dataset may vary with the system, the idea of the present graph attention network is universal and extendable to any atomic clusters and perhaps other materials. We anticipate that the developed CGANet can be widely used for rapidly searching the ground-state structures of large clusters and the inverse design of functional clusters with desired properties.

References

[1] W. Gong, Q. Yan. Graph-based deep learning frameworks for molecules and solid-state materials. Comput. Mater. Sci., 2021, 195: 110332
[2] P. Friederich, F. Hase, J. Proppe, A. Aspuru-Guzik. Machine-learned potentials for next-generation matter simulations. Nat. Mater., 2021, 20(6): 750
[3] A. C. Mater, M. L. Coote. Deep learning in chemistry. J. Chem. Inf. Model., 2019, 59(6): 2545
[4] L. Pattanaik, J. B. Ingraham, C. A. Grambow, W. H. Green. Generating transition states of isomerization reactions with deep learning. Phys. Chem. Chem. Phys., 2020, 22(41): 23618
[5] C. Coley, W. Jin, L. Rogers, T. Jamison, T. Jaakkola, W. Green, R. Barzilay, K. Jensen. A graph-convolutional neural network model for the prediction of chemical reactivity. Chem. Sci., 2019, 10(2): 370
[6] F. Nikitin, O. Isayev, V. Strijov. DRACON: Disconnected graph neural network for atom mapping in chemical reactions. Phys. Chem. Chem. Phys., 2020, 22(45): 26478
[7] Y. Ouyang, C. Yu, G. Yan, J. Chen. Machine learning approach for the prediction and optimization of thermal transport properties. Front. Phys., 2021, 16(4): 43200
[8] J. R. Kitchin. Machine learning in catalysis. Nat. Catal., 2018, 1(4): 230
[9] C. McGill, M. Forsuelo, Y. Guan, W. H. Green. Predicting infrared spectra with message passing neural networks. J. Chem. Inf. Model., 2021, 61(6): 2594
[10] D. Pfau, J. S. Spencer, A. G. D. G. Matthews, W. M. C. Foulkes. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Phys. Rev. Res., 2020, 2(3): 033429
[11] A. Khan, V. Ghorbanian, D. Lowther. Deep learning for magnetic field estimation. IEEE Trans. Magn., 2019, 55(6): 1
[12] B. Sanchez-Lengeling, J. N. Wei, B. K. Lee, R. C. Gerkin, A. Aspuru-Guzik, A. B. Wiltschko. Machine learning for scent: Learning generalizable perceptual representations of small molecules. arXiv: 1910.10685 (2019)
[13] J. Zhou, G. Cui, S. Hu, Z. Zhang, C. Yang, Z. Liu, L. Wang, C. Li, M. Sun. Graph neural networks: A review of methods and applications. AI Open, 2020, 1: 57
[14] J. Behler, M. Parrinello. Generalized neural-network representation of high-dimensional potential-energy surfaces. Phys. Rev. Lett., 2007, 98(14): 146401
[15] J. Behler. Atom-centered symmetry functions for constructing high-dimensional neural network potentials. J. Chem. Phys., 2011, 134(7): 074106
[16] J. S. Smith, O. Isayev, A. E. Roitberg. ANI-1: An extensible neural network potential with DFT accuracy at force field computational cost. Chem. Sci., 2017, 8(4): 3192
[17] X. Gao, F. Ramezanghorbani, O. Isayev, J. S. Smith, A. E. Roitberg. TorchANI: A free and open source PyTorch-based deep learning implementation of the ANI neural network potentials. J. Chem. Inf. Model., 2020, 60(7): 3408
[18] Z. L. Glick, D. P. Metcalf, A. Koutsoukas, S. A. Spronk, D. L. Cheney, C. D. Sherrill. AP-Net: An atomic-pairwise neural network for smooth and transferable interaction potentials. J. Chem. Phys., 2020, 153(4): 044112
[19] R. Lot, F. Pellegrini, Y. Shaidu, E. Küçükbenli. PANNA: Properties from artificial neural network architectures. Comput. Phys. Commun., 2020, 256: 107402
[20] R. Modee, S. Laghuvarapu, U. D. Priyakumar. Benchmark study on deep neural network potentials for small organic molecules. J. Comput. Chem., 2022, 43(5): 308
[21] L. Cao, P. Wang, L. Sai, J. Fu, X. Duan. Artificial neural network potential for gold clusters. Chin. Phys. B, 2020, 29(11): 117304
[22] K. T. Schütt, F. Arbabzadah, S. Chmiela, K. R. Müller, A. Tkatchenko. Quantum-chemical insights from deep tensor neural networks. Nat. Commun., 2017, 8(1): 13890
[23] K. T. Schütt, H. E. Sauceda, P. J. Kindermans, A. Tkatchenko, K. R. Müller. SchNet − A deep learning architecture for molecules and materials. J. Chem. Phys., 2018, 148(24): 241722
[24] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, G. E. Dahl. Neural message passing for quantum chemistry. In: International Conference on Machine Learning, 2017
[25] N. Lubbers, J. S. Smith, K. Barros. Hierarchical modeling of molecular energies using a deep neural network. J. Chem. Phys., 2018, 148(24): 241715
[26] T. Xie, J. C. Grossman. Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. Phys. Rev. Lett., 2018, 120(14): 145301
[27] O. T. Unke, M. Meuwly. PhysNet: A neural network for predicting energies, forces, dipole moments, and partial charges. J. Chem. Theory Comput., 2019, 15(6): 3678
[28] C. Chen, W. Ye, Y. Zuo, C. Zheng, S. P. Ong. Graph networks as a universal machine learning framework for molecules and crystals. Chem. Mater., 2019, 31(9): 3564
[29] C. Lu, Q. Liu, C. Wang, Z. Huang, P. Lin, L. He. Molecular property prediction: A multilevel quantum interactions modeling perspective. In: Association for the Advancement of Artificial Intelligence, 2019
[30] J. Klicpera, J. Groß, S. Günnemann. Directional message passing for molecular graphs. In: International Conference on Learning Representations, 2020
[31] C. Qian, Y. Xiong, X. Chen. Directed graph attention neural network utilizing 3D coordinates for molecular property prediction. Comput. Mater. Sci., 2021, 200: 110761
[32] Z. Liu, L. Lin, Q. Jia, Z. Cheng, Y. Jiang, Y. Guo, J. Ma. Transferable multilevel attention neural network for accurate prediction of quantum chemistry properties via multitask learning. J. Chem. Inf. Model., 2021, 61(3): 1066
[33] D. Bahdanau, K. Cho, Y. Bengio. Neural machine translation by jointly learning to align and translate. In: International Conference on Learning Representations, 2014
[34] T. N. Kipf, M. Welling. Semi-supervised classification with graph convolutional networks. arXiv: 1609.02907 (2016)
[35] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio. Graph attention networks. arXiv: 1710.10903 (2017)
[36] J. Zhao, R. Shi, L. Sai, X. Huang, Y. Su. Comprehensive genetic algorithm for ab initio global optimisation of clusters. Mol. Simul., 2016, 42(10): 809
[37] B. Delley. From molecules to solids with the DMol3 approach. J. Chem. Phys., 2000, 113(18): 7756
[38] J. P. Perdew, K. Burke, M. Ernzerhof. Generalized gradient approximation made simple. Phys. Rev. Lett., 1996, 77(18): 3865
[39] D. Tian, H. Zhang, J. Zhao. Structure and structural evolution of Agn (n = 3−22) clusters using a genetic algorithm and density functional theory method. Solid State Commun., 2007, 144(3−4): 174
[40] M. Harb, F. Rabilloud, D. Simon, A. Rydlo, S. Lecoultre, F. Conus, V. Rodrigues, C. Felix. Optical absorption of small silver clusters: Agn (n = 4−22). J. Chem. Phys., 2008, 129(19): 194108
[41] K. Baishya, J. C. Idrobo, S. Öğüt, M. Yang, K. Jackson, J. Jellinek. Optical absorption spectra of intermediate-size silver clusters from first principles. Phys. Rev. B, 2008, 78(7): 075439
[42] M. Chen, J. E. Dyer, K. Li, D. A. Dixon. Prediction of structures and atomization energies of small silver clusters, (Ag)n, n < 100. J. Phys. Chem. A, 2013, 117(34): 8298
[43] M. Liao, J. D. Watts, M. Huang. Theoretical comparative study of oxygen adsorption on neutral and anionic Agn and Aun clusters (n = 2−25). J. Phys. Chem. C, 2014, 118(38): 21911
[44] M. L. McKee, A. Samokhvalov. Density functional study of neutral and charged silver clusters Agn with n = 2−22: Evolution of properties and structure. J. Phys. Chem. A, 2017, 121(26): 5018
[45] B. Yin, Q. Du, L. Geng, H. Zhang, Z. Luo, S. Zhou, J. Zhao. Superatomic signature and reactivity of silver clusters with oxygen: Double magic Ag17 with geometric and electronic shell closure. CCS Chemistry, 2021, 3(12): 219
[46] J. Weinreich, A. Römer, M. L. Paleico, J. Behler. Properties of α-brass nanoparticles (1): Neural network potential energy surface. J. Phys. Chem. C, 2020, 124(23): 12682
[47] S. Chmiela, A. Tkatchenko, H. E. Sauceda, I. Poltavsky, K. T. Schütt, K. R. Müller. Machine learning of accurate energy-conserving molecular force fields. Sci. Adv., 2017, 3(5): e1603015
[48] M. W. Schmidt, K. K. Baldridge, J. A. Boatz, S. T. Elbert, M. S. Gordon, J. H. Jensen, S. Koseki, N. Matsunaga, K. A. Nguyen, S. Su, T. L. Windus, M. Dupuis, J. A. Montgomery. General atomic and molecular electronic structure system. J. Comput. Chem., 1993, 14(11): 1347
[49] J. Tao, J. P. Perdew, V. N. Staroverov, G. E. Scuseria. Climbing the density functional ladder: Nonempirical meta-generalized gradient approximation designed for molecules and solids. Phys. Rev. Lett., 2003, 91(14): 146401
[50] G. Alameddin, J. Hunter, D. Cameron, M. M. Kappes. Electronic and geometric structure in silver clusters. Chem. Phys. Lett., 1992, 192(1): 122
[51] T. Koopmans. Ordering of wave functions and eigenenergies to the individual electrons of an atom. Physica, 1933, 1: 104

Electronic supplementary material

Supplementary materials are available in the online version of this article at https://doi.org/10.1007/s11467-022-1219-5 and https://journal.hep.com.cn/fop/EN/10.1007/s11467-022-1219-5 and are accessible to authorized users. They provide details of the newly discovered lowest-energy structures and metastable structures of Agn clusters, including structural information and energy differences relative to the lowest-energy structures reported previously.

Notes

The authors declare no competing financial interests.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 11804076 and 91961204), the Fundamental Research Funds for the Central Universities of China (No. B210202151), and the Changzhou Science and Technology Plan (No. CZ520012712).

RIGHTS & PERMISSIONS

2023 Higher Education Press