1. National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China
2. College of Artificial Intelligence, Nanjing Tech University, Nanjing 211800, China
3. Purple Mountain Lab, Nanjing 211111, China
4. Frontiers Science Center for Mobile Information Communication and Security, Southeast University, Nanjing 210096, China
5. State Key Laboratory of Millimeter Waves, Southeast University, Nanjing 210096, China
zczhang@seu.edu.cn
yuxutao@seu.edu.cn
History: Received 2025-06-26; Accepted 2025-10-26; Published 2025-11-19
Abstract
The output prediction of quantum circuits is a formidably challenging task imperative in developing quantum devices. Motivated by the natural graph representation of quantum circuits, this paper proposes a Graph Neural Networks (GNNs)-based framework to predict the output expectation values of quantum circuits under noisy and noiseless conditions and compare the performance of different parameterized quantum circuits (PQCs). We construct datasets under noisy and noiseless conditions using a non-parameterized quantum gate set to predict circuit expectation values. The node feature vectors for GNNs are specifically designed to include noise information. In our simulations, we compare the prediction performance of GNNs in both noisy and noiseless conditions against Convolutional Neural Networks (CNNs) on the same dataset and their qubit scalability. GNNs demonstrate superior prediction accuracy across diverse conditions. Subsequently, we utilize the parameterized quantum gate set to construct noisy PQCs and compute the ground state energy of hydrogen molecules using the Variational Quantum Eigensolver (VQE). We propose two schemes: the Indirect Comparison scheme, which involves directly predicting the ground state energy and subsequently comparing circuit performances, and the Direct Comparison scheme, which directly predicts the relative performance of the two circuits. Simulation results indicate that the Direct Comparison scheme significantly outperforms the Indirect Comparison scheme by an average of 36.2% on the same dataset, providing a new and effective perspective for using GNNs to predict the overall properties of PQCs, specifically by focusing on their performance differences.
Quantum computing, as a new computational paradigm, has the potential to address problems beyond the capabilities of classical computing with greater efficiency and higher speed. It has demonstrated exponential or polynomial advantages in various domains, including combinatorial optimization [1-3], large-scale communication systems [4-7], molecular dynamics [8, 9], quantum chemistry [10-12], and machine learning [13-19]. Driven by breakthroughs in physical implementation technologies, quantum hardware has undergone rapid development over the past two decades, with numerous quantum computing systems now available [20-23]. Despite its promising prospects, quantum computing must navigate an extended era of Noisy Intermediate-Scale Quantum (NISQ) devices before entering the fault-tolerant quantum computing era [24-26]. During this stage, the limitations imposed by coherence times and quantum gate errors pose significant bottlenecks to achieving quantum advantage. Quantum devices currently encounter limitations in resource accessibility, such as runtime and qubit count, which do not match the convenience provided by classical computers. Additionally, classical simulation methods often suffer from inefficiencies. Consequently, some advancements must proceed without adequate benchmark data, significantly impeding the engineering of quantum devices. Against this backdrop, accurately predicting the output of quantum circuits is not only a theoretical challenge but also a crucial step toward optimizing quantum algorithms and enhancing hardware performance. However, due to the complex noise characteristics of quantum circuits and the computational challenges that grow with circuit size, existing classical simulation methods often struggle to provide efficient solutions.
Notably, as it can provide low-cost predictions of specific aspects of quantum circuit outputs once the model is trained, machine learning has recently offered an attractive alternative to direct classical simulation. For instance, in Ref. [27], a meticulously designed qubit-scalable Convolutional Neural Network (CNN) is employed to predict expectation values of circuits, delivering results that, under specific conditions, outperform those obtained from the freely available Noisy Intermediate-Scale Quantum (NISQ) devices. In Ref. [28], neural networks based on simple Multilayer Perceptrons (MLPs) are used to predict kernel-target alignment (KTA), a cost-effective proxy for quantum kernel classification accuracy, facilitating the identification of high-accuracy quantum kernel circuits. Despite these advancements, CNN-based and MLP-based approaches face limitations in capturing the structural information of quantum circuits, particularly for large-scale systems or circuits with complex topologies. Graph-based models, on the other hand, naturally leverage the directed acyclic graph (DAG) representation of quantum circuits, making them a promising alternative for output prediction. For instance, in Ref. [29], Graph Transformers are utilized to predict the Probability of Successful Trials (PST), which positively correlates with circuit fidelity, achieving remarkable performance. More recently, Ref. [30] proposed a general framework of machine learning for quantum error mitigation (ML-QEM), which employs various machine learning models including Linear Regression, Random Forest (RF), and Graph Neural Networks (GNNs) to regress the noisy expectation values obtained from quantum processors toward their approximated noiseless counterparts for error mitigation. Compared with traditional mitigation methods, ML-QEM can significantly reduce the overall cost of quantum error mitigation. 
Beyond these applications, estimating or predicting quantum circuit outputs still holds broad prospects. For example, circuit output predictors could be integrated into diffusion models, such as those discussed in Ref. [31], to guide the generation of high-performance circuits. Alternatively, these predictors could support Reinforcement Learning (RL) processes, as outlined in Ref. [32], by evaluating reward functions, thereby reducing resource consumption during training.
Based on the considerations outlined above, we aim to design a Graph Neural Networks (GNNs) predictor to predict the specific outputs of quantum circuits. Within this approach, the GNNs act as a surrogate model that enables large-scale and rapid performance prediction once trained, eliminating the need for repeated costly simulations or hardware executions. In addition to computational efficiency, GNNs also contribute to improving prediction accuracy, as they can effectively capture the intrinsic structural dependencies within quantum circuits. This advantage arises not only from their ability to represent circuit topology more faithfully but also from their capability to construct individual feature vectors for each node, allowing the embedding of various properties of quantum gates, such as gate type, gate error, and target qubits. In this work, we primarily predict the single-qubit expectation values, two-qubit expectation values, and the overall property of quantum circuits.
To begin with, we propose a GNN-based framework for predicting expectation values of quantum circuits, as illustrated in Fig. 1. These limited expectation values are sufficient for circuits that require only one or two possible output bitstrings, such as the Bernstein–Vazirani (BV) algorithm [33] and the Deutsch–Jozsa algorithm [34]. Utilizing a parameter-free quantum gate set, we randomly generate quantum circuits under both noiseless and noisy conditions. Classical simulations are conducted using the Qiskit library [35] to obtain single-qubit and two-qubit expectation values for our dataset. During the transformation of random circuits into graphs, we design feature vectors for each node to encapsulate properties such as gate type, target qubits, and noise information. Our experiments demonstrate that the GNNs achieve strong predictive performance on datasets with a relatively small circuit search space. In a fair comparison with the well-designed and representative Convolutional Neural Networks (CNNs)-based method [27], which possesses strong structured feature extraction capability, our GNNs-based method demonstrates superior performance, greater flexibility in handling input circuit structures, and better scalability.
Furthermore, we extend our GNN-based prediction framework in two significant ways. First, we incorporate parameterized quantum gates, expanding its applicability to parameterized quantum circuits (PQCs). Second, instead of solely predicting expectation values, we shift our focus to evaluating the overall property of quantum circuits, specifically predicting the ground-state energy of the H₂ molecule encoded on 4 qubits using a standard VQE workflow. In parallel, recent studies have employed deep neural networks to predict VQE circuit parameters directly from molecular geometries, thereby bypassing expensive variational optimization [36, 37]. Such ML-assisted VQE schemes have demonstrated accurate ground-state energy predictions, including for systems beyond H₂ (e.g., LiH and BeH₂), reinforcing the relevance of ML for PQC-based quantum chemistry. To compare the performance of PQCs, we propose two schemes: Direct Comparison, which directly predicts the relative performance difference between circuits, and Indirect Comparison, which involves predicting absolute ground state energies before comparison. Our simulation results demonstrate that the Direct Comparison scheme significantly outperforms the Indirect Comparison scheme, offering an average improvement of 36.2% on the same dataset. These findings highlight the potential of GNNs in efficiently evaluating PQC performance, providing a novel and effective perspective for analyzing quantum circuits.
The rest of this paper is organized as follows: In Section 2, we introduce the background knowledge about quantum basics, quantum errors, and Graph Neural Networks. The frameworks for using GNNs to predict expectation values and compare circuit performance are detailed in Section 3. Section 4 analyzes the prediction results and extrapolation capabilities of the trained GNNs, comparing them with CNNs-based methods. It also presents simulations and analyses of both direct and indirect comparison schemes. Finally, Section 5 reports our conclusions and an outlook on future perspectives.
2 Background
2.1 Quantum basics
Quantum bits, or qubits, unlike classical bits, can exist in a linear superposition of the two basis states $|0\rangle$ and $|1\rangle$, satisfying $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$, where $|\alpha|^2 + |\beta|^2 = 1$. This superposition property allows an $n$-qubit system to represent a linear superposition of $2^n$ basis states. Quantum gates are used to perform computations on quantum systems by transforming quantum states from $|\psi\rangle$ to $|\psi'\rangle = U|\psi\rangle$. Each quantum gate corresponds to a unitary matrix [38], and common single-qubit gates include the Hadamard gate ($H$), phase gate ($S$), $T$ gate ($T$), and Pauli rotation gates ($R_x$, $R_y$, $R_z$), which can respectively be denoted as the following unitary matrices,

$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \quad S = \begin{pmatrix} 1 & 0 \\ 0 & i \end{pmatrix}, \quad T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix},$$

$$R_x(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -i\sin\frac{\theta}{2} \\ -i\sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}, \quad R_y(\theta) = \begin{pmatrix} \cos\frac{\theta}{2} & -\sin\frac{\theta}{2} \\ \sin\frac{\theta}{2} & \cos\frac{\theta}{2} \end{pmatrix}, \quad R_z(\theta) = \begin{pmatrix} e^{-i\theta/2} & 0 \\ 0 & e^{i\theta/2} \end{pmatrix}.$$
The common two-qubit gate is the CNOT gate as follows:

$$\mathrm{CNOT} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \end{pmatrix}.$$
2.2 Quantum errors
Quantum errors pose one of the major challenges for quantum computing in the NISQ era. On real quantum devices, errors arise from interactions between qubits and their environment, control errors, and environmental disturbances [39-41]. As quantum devices operate, qubits undergo coherence errors over time, while quantum gates introduce operational errors, such as coherent or stochastic errors. These errors significantly disrupt the functionality of quantum circuits and hinder their further optimization. To address these challenges, various noise mitigation techniques have been proposed to reduce the negative effects of quantum errors [17, 42-47].
2.3 Graph Neural Networks
Graph Neural Networks (GNNs) have emerged as a transformative approach for processing and learning from graph-structured data, demonstrating exceptional capabilities in understanding the complex relational structures inherent in such data [48]. By leveraging a message-passing mechanism that aggregates information from neighboring nodes while capturing the relational structure of the graph, GNNs provide a solid foundation for accurate predictions and informed decision-making [49]. In recent years, various GNN variants, such as Graph Convolutional Networks (GCNs), Graph Attention Networks (GATs), and GraphSAGE, have achieved groundbreaking performance across numerous deep learning tasks [50-54]. Quantum circuits, with their inherent connectivity relationships, can be naturally represented as graph structures, offering an intuitive pathway for applying GNNs in quantum computing.
3 Methods
In this section, we use GNNs to predict the outputs of two different classes of quantum circuits, where the universal gate sets that constitute the two classes of quantum circuits are different. In Section 3.1, we use GNNs to predict single-qubit and two-qubit expectation values under noisy and noiseless conditions. In Section 3.2, we extend the usage scenario of GNNs in Section 3.1 to realize the performance comparison of quantum circuits with different structures.
3.1 GNNs for expectation values prediction
Due to the limited quantum resources of NISQ devices, it is worthwhile to predict the output of a circuit before submitting it for execution. If the predicted output of the circuit is far below a threshold, running this circuit on a real quantum device will not produce the desired results. To accurately predict circuit output, especially in the presence of noise, we propose a framework for predicting quantum circuit outputs with GNNs, as shown in Fig. 1. This is a data-driven approach, and intuitively, estimating expectation values does not require computing the complete density matrix exactly. Therefore, data-driven approaches present the opportunity to provide sufficiently accurate predictions at a lower computational cost. The framework for predicting expectation values is divided into three primary stages. The first stage is to generate random quantum circuits with the assistance of a universal gate set and measure the circuit expectation values, which are used as training data for the GNNs. The second stage is to transform the generated random circuits into a graph structure and formulate feature vectors for each node that can contain device noise information. In the third stage, the graph is processed to predict circuit expectation values using GNNs.
This work has practical value as the framework realizes accurate prediction of the output expectation values of the circuits. When only one-bit string has non-zero measurement probability in the circuit, the single-qubit expectation values are sufficient to find it. When combined with two-qubit expectation values, it is enough to identify the output bit strings when only two of them have non-zero probability [27]. Some relevant quantum algorithms, such as the Deutsch–Jozsa algorithm [34], the quantum counting algorithm [38], and the Grover algorithm with two searched items [55], can assist in solving problems related to the prediction of the expectation values mentioned above. In the following, we will discuss the specific implementation details of the framework outlined in Fig. 1.
3.1.1 Random circuits generation
To realize the first stage in our framework, random quantum circuits are generated to provide a database for subsequent GNNs training. We select two single-qubit gates, namely the $T$ gate and the Hadamard gate ($H$), and one two-qubit gate, CNOT, as the universal gate set to generate random circuits. This universal gate set consisting of parameter-free gates is denoted $\mathcal{G}_1 = \{H, T, \mathrm{CNOT}\}$. In the first step, gates are randomly selected from the universal gate set to create an initial version of the random circuit with $n$ qubits and $d$ layers of gates, which we define as the circuit depth. The integer $n$ corresponds to the number of qubits, and $d$ denotes the number of gates on each qubit, where two-qubit gates are counted as separate single-qubit operations acting on both qubits. To ensure circuit integrity, we set the initial state of all qubits to $|0\rangle$, and the circuits include measurement operations. After completing the circuit construction, an optimization strategy is used to eliminate duplicate gates to obtain circuits for later training. These optimization strategies include the equivalence of two consecutive H-gates to an Identity gate, the equivalence of eight consecutive T-gates to an Identity gate, and the elimination of two consecutive CNOTs, etc. The motivation for this step is to ensure that the circuits in our training dataset remain as compact as possible, so that the neural network can focus on learning meaningful structural differences rather than spending capacity on trivial equivalence transformations. The specific process of generating random circuits is illustrated in Fig. 2. The quantum gates enclosed by the red dashed box in Fig. 2(a) are the optimizable gates, and the optimized circuit is shown in Fig. 2(b). The number of randomly generated circuits in a single run, i.e., the training data size for the GNNs, is on the order of $10^4$ (20 000 circuits per configuration in our experiments; see Section 4.1).
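The generation-and-simplification procedure above can be sketched in plain Python. This is a minimal illustration under our own assumptions: the list-based circuit representation and helper names are invented here, the simplification pass shown covers only the self-inverse rules ($H \cdot H = I$, $\mathrm{CNOT} \cdot \mathrm{CNOT} = I$), and a faithful pass would track adjacency per qubit wire rather than in the flat gate list.

```python
import random

GATES = ["H", "T", "CNOT"]

def random_circuit(n_qubits, depth, rng=random):
    """Draw `depth` layers of gates from {H, T, CNOT} on `n_qubits` qubits.
    A circuit is a flat list of (gate, qubits) tuples in temporal order."""
    circuit = []
    for _ in range(depth):
        q = 0
        while q < n_qubits:
            gate = rng.choice(GATES)
            if gate == "CNOT" and q + 1 < n_qubits:
                circuit.append(("CNOT", (q, q + 1)))  # fills one slot on both qubits
                q += 2
            else:
                circuit.append((rng.choice(["H", "T"]), (q,)))
                q += 1
    return circuit

def cancel_adjacent_pairs(circuit):
    """One peephole pass: drop H.H and CNOT.CNOT pairs on the same qubits.
    (The T^8 = I rule from the text is omitted for brevity.)"""
    out = []
    for op in circuit:
        if out and out[-1] == op and op[0] in ("H", "CNOT"):
            out.pop()   # two identical self-inverse gates cancel to identity
        else:
            out.append(op)
    return out
```

For example, `cancel_adjacent_pairs([("H", (0,)), ("H", (0,)), ("T", (1,))])` keeps only the `T` gate.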
3.1.2 Graph construction
Initially, we employ directed acyclic graphs (DAGs) to depict the topology of quantum circuits. Each node corresponds to a qubit, quantum gate, or measurement, while edges represent the time-dependent sequence of the gates. As depicted on the left side of Fig. 1, we perform the transformation from circuit to graph structure, with particular attention to representing the two-qubit CNOT gate as a single graph node.
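A minimal sketch of this circuit-to-DAG transformation, using our own illustrative data layout (input and measurement nodes per qubit, one node per gate, and an edge to the next operation on the same wire), not the paper's implementation:

```python
def circuit_to_dag_edges(circuit, n_qubits):
    """Turn a gate list of (gate, qubits) tuples into DAG nodes and edges.
    Edges encode the temporal order of operations on each qubit wire;
    a CNOT is a single node with incoming edges from both wires."""
    nodes = [("input", (q,)) for q in range(n_qubits)]
    last = {q: q for q in range(n_qubits)}  # qubit -> last node index on that wire
    edges = []
    for gate, qubits in circuit:
        idx = len(nodes)
        nodes.append((gate, qubits))
        for q in qubits:
            edges.append((last[q], idx))
            last[q] = idx
    for q in range(n_qubits):               # terminal measurement node per qubit
        idx = len(nodes)
        nodes.append(("measure", (q,)))
        edges.append((last[q], idx))
    return nodes, edges
```

For a two-qubit circuit `[("H", (0,)), ("CNOT", (0, 1))]`, the CNOT node receives one edge from the H node and one from the second qubit's input node.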
3.1.3 Node features
We construct a feature vector for each node in the graph to represent certain attributes of the node. The features include index, node type, gate qubit, tag, the relaxation time ($T_1$) and dephasing time ($T_2$) of the target qubits, gate error, and readout error, as shown in Fig. 3. The length of the node feature vector is 31. The first part is the index of the node, which is used for node sorting and differentiation. The subsequent three parts are one-hot encoded representations of the node type, gate qubit, and tag. Here, the node type indicates the gate type: initial input qubit, measurement, CNOT, H, or T. For example, with the types ordered as listed, the one-hot vector [0, 0, 1, 0, 0] signifies that the node represents a CNOT gate. The gate qubit is used to label the index of the qubit affected by the quantum gate, and its length can be expanded according to practical needs. The tag is effective when the gate is a CNOT, indicating the control qubit and the target qubit of the CNOT. One advantage of GNNs is their ability to consider hardware noise, primarily achieved through feature vectors. In this experiment, we utilize the last 7 numbers to characterize the noise properties of the hardware. The noise information is mainly referenced from IBM's quantum devices, such as IBM Perth, IBM Lagos, and IBM Nairobi. The first 4 numbers respectively represent the $T_1$ and $T_2$ values for the first and second qubits. The 5th number denotes the gate error of the corresponding gate, while the last two numbers indicate readout errors. If a node does not possess a specific noise characteristic, it is set to 0. For example, for a measurement node, $T_1$, $T_2$, and gate error are all set to 0. For a node representing the CNOT, its complete feature vector is illustrated in Fig. 3.
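The feature layout can be sketched as follows. Only the field order and the total length of 31 come from the text; the per-field widths (1 index + 5 type one-hot + 16 gate-qubit one-hot + 2 tag + 7 noise) are illustrative assumptions that happen to sum to 31, and the actual widths in the paper may differ.

```python
NODE_TYPES = ["input", "measure", "CNOT", "H", "T"]  # assumed one-hot ordering

def node_features(index, node_type, qubits, noise, max_qubits=16):
    """Hypothetical layout of the 31-dim node feature vector:
    1 (index) + 5 (type one-hot) + 16 (gate-qubit one-hot) + 2 (CNOT tag)
    + 7 (T1/T2 of both qubits, gate error, two readout errors) = 31."""
    vec = [float(index)]
    vec += [1.0 if node_type == t else 0.0 for t in NODE_TYPES]
    qubit_hot = [0.0] * max_qubits
    for q in qubits:
        qubit_hot[q] = 1.0
    vec += qubit_hot
    if node_type == "CNOT":                  # tag: control/target roles
        vec += [float(qubits[0]), float(qubits[1])]
    else:
        vec += [0.0, 0.0]
    vec += list(noise)  # [T1_a, T2_a, T1_b, T2_b, gate_error, ro_err_a, ro_err_b]
    return vec
```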
3.1.4 Target values
After the circuit, the output state can be expressed as

$$|\psi_{\mathrm{out}}\rangle = U|0\rangle^{\otimes n},$$

where $U$ denotes the unitary operator corresponding to the circuit, and the tensor product $|0\rangle^{\otimes n}$ is the initial state. We define $Z_i$ as the Pauli-$Z$ operator acting on qubit $i$, $i \in \{1, \ldots, n\}$. The single-qubit expectation values can be expressed as

$$\langle Z_i \rangle = \langle \psi_{\mathrm{out}} | Z_i | \psi_{\mathrm{out}} \rangle.$$
Since $Z|0\rangle = |0\rangle$ and $Z|1\rangle = -|1\rangle$, the expectation value of $Z_i$ can thus be calculated as

$$\langle Z_i \rangle = p_i(0) - p_i(1),$$

where $p_i(0)$ and $p_i(1)$ denote the probabilities of measuring qubit $i$ in $|0\rangle$ and $|1\rangle$, respectively.
For convenience, we constrain the single-qubit expectation values to lie between 0 and 1, with an expectation value of 0 for $|0\rangle$ and an expectation value of 1 for $|1\rangle$. We scale the above formula accordingly:

$$E_i = \frac{1 - \langle Z_i \rangle}{2}.$$
The quantity $E_i$ is the target value that denotes the scaled expectation value of qubit $i$. Similar to the definition for single-qubit expectation values, we define the scaled form of the two-qubit expectation values:

$$E_{ij} = \frac{1 - \langle Z_i Z_j \rangle}{2},$$
where $i \neq j$, and $\langle Z_i Z_j \rangle$ represents the two-qubit expectation value. Certainly, the description provided by the expectation values of single-qubit and two-qubit Pauli-Z measurements is limited. However, these pieces of information are sufficient for certain specific algorithms, such as the Deutsch–Jozsa algorithm [34]. For instance, in the Deutsch–Jozsa algorithm, the goal is to determine whether a Boolean function is constant or balanced. The algorithm measures the first $n$ qubits, and if the outcome corresponds to the all-zero bit string, the function is constant; otherwise, it is balanced. In other words, predicting the expectation values of the single-qubit observables $Z_i$ (for $i = 1, \ldots, n$) is already sufficient to infer the result; that is, if $E_i = 0$ for all $i$, the output bitstring must be $0\cdots0$, indicating a constant function. Conversely, the function is balanced. This motivation underlies our exploration of expectation values prediction.
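Under the scaling reconstructed above, both target quantities can be computed directly from the circuit's output probability distribution. The following sketch is our own illustration (with a little-endian bit convention assumed, i.e., bit $q$ of the bitstring index encodes qubit $q$):

```python
import numpy as np

def scaled_expectations(probs, n_qubits):
    """Compute E_i = (1 - <Z_i>)/2 and E_ij = (1 - <Z_i Z_j>)/2 from the
    output probability distribution over bitstrings. E_i is 0 for |0> and
    1 for |1>, matching the scaling convention in the text."""
    probs = np.asarray(probs, dtype=float)
    single = np.zeros(n_qubits)
    double = np.zeros((n_qubits, n_qubits))
    for s, p in enumerate(probs):
        bits = [(s >> q) & 1 for q in range(n_qubits)]
        z = [1 - 2 * b for b in bits]          # Z eigenvalue: +1 for 0, -1 for 1
        for i in range(n_qubits):
            single[i] += p * (1 - z[i]) / 2
            for j in range(n_qubits):
                double[i, j] += p * (1 - z[i] * z[j]) / 2
    return single, double
```

For the state $|11\rangle$ both single-qubit targets equal 1 while the two-qubit target is 0, since the qubits agree.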
3.1.5 Construction of GNNs
To process graphs with node features, we designed the four-layer Graph Neural Networks (GNNs), as illustrated in Fig. 4, and incorporated an attention mechanism between each layer. By leveraging the attention mechanism, the weights of each edge in the graph are dynamically determined, enabling the neural network to better capture information across the entire circuit. After feature extraction by the GNNs, features from key nodes (excluding input and measurement nodes) are selected and subjected to average pooling to obtain a 31-dimensional pooled vector. Meanwhile, the global circuit information, including the total number of gates, the number of CNOT gates, and the circuit depth, is fed into a two-layer fully connected network, where both the hidden and output layers have a dimensionality of 12. These global features are then concatenated with the pooled feature vector. Finally, this concatenated vector is processed through a three-layer fully connected network with hidden layer dimensions of 256, 128, and 64 to produce the final prediction result. The activation function between layers is implemented using the LeakyReLU function with a coefficient of 0.02.
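The readout stage described above can be sketched shape-wise as follows. Only the dimensions (31-dim pooled vector, 12-dim global branch, FC widths 256/128/64) and the LeakyReLU slope of 0.02 come from the text; the weights here are random placeholders standing in for a trained network, and the message-passing layers themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, a=0.02):
    return np.where(x > 0, x, a * x)

def dense(x, n_out):
    """Random-weight linear layer; a stand-in for trained parameters."""
    w = rng.normal(size=(x.shape[-1], n_out)) / np.sqrt(x.shape[-1])
    return x @ w

def prediction_head(pooled, global_feats):
    """Combine the 31-dim pooled node features with 3 global circuit
    statistics (gate count, CNOT count, depth) and pass the result
    through FC layers of width 256, 128, 64 to a scalar prediction."""
    g = leaky_relu(dense(global_feats, 12))   # two-layer global branch, width 12
    g = leaky_relu(dense(g, 12))
    h = np.concatenate([pooled, g])           # 31 + 12 = 43-dim joint feature
    for width in (256, 128, 64):
        h = leaky_relu(dense(h, width))
    return dense(h, 1)                        # scalar expectation-value prediction
```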
3.2 GNNs for circuit performance comparison
In this section, we primarily expand upon the framework introduced in Section 3.1. Firstly, we replace the universal gate set for generating random quantum circuits, transitioning from the previous parameter-free set $\mathcal{G}_1 = \{H, T, \mathrm{CNOT}\}$ to a parameterized gate set $\mathcal{G}_2$ composed of single-qubit rotation gates and the CNOT gate. The updated universal gate set allows for the generation of parameterized quantum circuits (PQCs), significantly broadening the applicability of quantum circuits. For example, this tool can be utilized in the search for optimal PQC structures. Secondly, we extend the values predicted by the GNNs, enabling them not only to predict the expectation values of qubits (including single-qubit and two-qubit expectation values) but also to predict the overall properties of a quantum circuit. Finally, we choose the Variational Quantum Eigensolver (VQE) algorithm as an extension application of the Section 3.1 architecture. We achieve this by using randomly generated PQCs and the optimized parameters to determine the ground state energy values of the H₂ molecule, which are then used as data to train the GNNs. The ground state energy of the H₂ molecule is optimized through the following formula [56] by adjusting the parameters of different circuits:
$$E_0 = \min_{\boldsymbol{\theta}} \, \langle \psi(\boldsymbol{\theta}) | \hat{H} | \psi(\boldsymbol{\theta}) \rangle,$$

where $\hat{H}$ denotes the Hamiltonian operator whose lowest eigenvalue is the sought ground state energy, and $|\psi(\boldsymbol{\theta})\rangle$ denotes the parameterized ansatz, which also represents the PQC in the process of circuit design and implementation. By using the ground state energy values calculated through the Variational Quantum Eigensolver (VQE) as performance metrics for different circuits, and employing these energy values as data to train GNNs, we can predict the performance of randomly generated circuits. Specifically, we use the commonly adopted VQE optimizer COBYLA to optimize the circuit parameters. COBYLA is a gradient-free optimization method that does not require explicit computation of parameter gradients. The number of optimization iterations is set to 200, after which the optimized ground-state energy is obtained and used as the training target for the GNNs. It should be noted that the GNNs do not participate in the VQE optimization process; the VQE is solely employed to generate ground-state energy data for different circuits, which are used as labels for training the GNNs. The predicted energy values are jointly affected by both the circuit structure and the optimization process. Although our model does not explicitly simulate the optimization process itself, it indirectly reflects the quality of circuit structures by learning the final energies of different circuits optimized under a fixed optimizer. This approach lays the foundation for selecting high-performance circuits. In the following two subsections, we present this work. Section 3.2.1 uses the GNNs to predict the ground state energy of the H₂ molecule and compares the performance of PQCs based on the predicted ground state energy values. Section 3.2.2 builds upon the foundation of Section 3.2.1 by introducing an innovative approach: two different PQCs are simultaneously input into the GNNs, which directly predict the relative probability of their performance comparison.
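The variational principle above can be illustrated with a one-qubit toy problem of our own (not the 4-qubit H₂ Hamiltonian or the COBYLA loop used in the paper): for the ansatz $R_y(\theta)|0\rangle$ and Hamiltonian $\hat{H} = Z$, the energy is $E(\theta) = \cos\theta$, minimized at $\theta = \pi$. A coarse gradient-free grid search stands in for COBYLA.

```python
import numpy as np

Z = np.diag([1.0, -1.0])  # toy Hamiltonian: single-qubit Pauli-Z

def ansatz(theta):
    """Ry(theta)|0> = [cos(theta/2), sin(theta/2)] (real amplitudes)."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    """<psi(theta)| H |psi(theta)> = cos(theta) for H = Z."""
    psi = ansatz(theta)
    return float(psi @ Z @ psi)

# Gradient-free search over a coarse grid stands in for COBYLA.
thetas = np.linspace(0, 2 * np.pi, 1001)
best = min(thetas, key=energy)
```

The minimizer lands near $\theta = \pi$, where the energy approaches the exact ground-state value $-1$.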
3.2.1 Indirectly comparing circuit performance
Differing from the method of generating random circuits in Section 3.1.1, we use the parameterized gate set $\mathcal{G}_2$ as the universal gate set. We randomly generate PQCs with three layers, each consisting of a single-qubit rotation layer, a CNOT layer, and a second single-qubit rotation layer. A single layer of the parameterized quantum circuit (PQC) is randomly generated as shown in Fig. 5. To construct a three-layer PQC, the process shown in Fig. 5 is repeated three times, and the resulting three circuits are concatenated to form a complete quantum circuit. For each rotation layer, each qubit has a fixed probability of receiving a rotation gate. For the CNOT layer, there is a fixed probability of adding a CNOT gate between each adjacent pair of qubits, where the first and last qubits are also considered adjacent and one of the two qubits is randomly selected as the control qubit. The generated PQCs are then subjected to noise, and after multiple iterations of classical optimization, we obtain approximate ground state energy values. These ground state energy values serve as the prediction targets for training. The node features and the construction of the GNNs are consistent with Section 3.1.
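The layer generator can be sketched as follows. The inclusion probability `p` and the generic rotation labels are illustrative assumptions (the paper's exact rotation axes and probabilities are not reproduced here); the ring connectivity with a randomly chosen control qubit follows the text.

```python
import random

TWO_PI = 2 * 3.141592653589793

def random_pqc_layer(n_qubits, p=0.5, rng=random):
    """One PQC layer: a rotation sublayer, a CNOT sublayer on a ring
    (first and last qubits count as adjacent), and a second rotation
    sublayer. Each candidate gate is included with probability p."""
    layer = []
    for q in range(n_qubits):                      # first rotation sublayer
        if rng.random() < p:
            layer.append(("ROT1", (q,), rng.uniform(0, TWO_PI)))
    for q in range(n_qubits):                      # CNOT sublayer on the ring
        a, b = q, (q + 1) % n_qubits
        if rng.random() < p:
            control = rng.choice([a, b])           # control chosen at random
            target = b if control == a else a
            layer.append(("CNOT", (control, target)))
    for q in range(n_qubits):                      # second rotation sublayer
        if rng.random() < p:
            layer.append(("ROT2", (q,), rng.uniform(0, TWO_PI)))
    return layer
```

Concatenating three such layers yields a complete three-layer PQC as described above.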
We directly employ GNNs to predict the ground state energy values corresponding to different PQCs. These predicted values are then used as a benchmark for assessing the relative performance of different circuits.
3.2.2 Directly comparing circuit performance
Differing from Section 3.2.1, where circuit performance is indirectly compared by predicting the ground state energy values corresponding to the optimization parameters, we propose a new approach. In this new approach, we simultaneously input the two circuits that need to be compared into the GNNs. This allows the GNNs to learn the structural differences between the two circuits and directly predict the performance comparison between them. The specific framework for predicting circuit performance comparison using GNNs is illustrated in Fig. 6.
During the graph construction process, two quantum circuits are merged into a single graph consisting of two disconnected subgraphs, as illustrated in the upper part of Fig. 6. In the four-layer Graph Neural Networks (GNNs), each of the first three layers performs not only node-wise message aggregation but also average pooling on the key nodes (excluding input and measurement nodes) within each subgraph. The pooled feature vectors from the two subgraphs are then subtracted to obtain the difference features. Differently from the previous layers, the final layer also performs average pooling on the key nodes of both subgraphs but directly concatenates the two pooled feature vectors instead of taking their difference, aiming to preserve more individual information from each subgraph after aggregation. The difference vectors obtained from the first three layers and the concatenated feature vector from the final layer are then concatenated to form the output representation of the GNNs. Meanwhile, global information, including the differences in the total number of quantum gates, CNOT gates, and circuit depths between the two circuits, is first passed through two fully connected layers. The resulting output is then concatenated with the GNNs output to form a unified feature vector, which is subsequently fed into three fully connected layers to produce the final prediction. During the training process, the noise model used is derived from IBM’s quantum device, IBM Lagos. Furthermore, the output of the GNNs is the probability that the first circuit outperforms the second circuit, ranging from 0 to 1. To summarize the distinction between the two comparison schemes, in the indirect comparison scheme, the GNNs predict the ground-state energy obtained from VQE optimization for each circuit, whereas in the direct comparison scheme, the inputs are two circuit graphs, and the output represents the probability that the first circuit performs better than the second one. 
The labels are constructed based on the energy difference between the two circuits after Min–Max normalization.
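A minimal sketch of this label construction, under our own reading: the raw score for a circuit pair is the energy difference (lower energy is better, so a positive $E_2 - E_1$ favors circuit 1), min-max normalized over the dataset to $[0, 1]$. The exact mapping direction used in the paper may differ.

```python
def comparison_labels(energy_pairs):
    """Given (E1, E2) VQE energies for each circuit pair, return labels in
    [0, 1] via min-max normalization of the differences E2 - E1. A label
    near 1 indicates the first circuit reaches a much lower energy."""
    diffs = [e2 - e1 for e1, e2 in energy_pairs]
    lo, hi = min(diffs), max(diffs)
    if hi == lo:
        return [0.5 for _ in diffs]  # degenerate dataset: all pairs tie
    return [(d - lo) / (hi - lo) for d in diffs]
```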
4 Evaluation
4.1 Evaluation methodology
Model and training setups. In the default setup, we use four-layer Graph Neural Networks. The embedding dimension is set to 31, corresponding to the feature vector dimension of 31. We employ two-head attention layers. By performing average pooling on key nodes, a 31-dimensional vector is generated as the aggregated feature for the circuit. If global features are incorporated into the GNN, we use two fully connected (FC) layers to expand the dimensions of the global features, with both the hidden and output dimensions set to 12. These global features are then concatenated with the pooled aggregated features. The concatenated features are further processed through three FC layers with hidden dimensions of 256, 128, and 64, respectively. The output dimension is set to 1 or 2, depending on the prediction target. We use the LeakyReLU activation function with a negative-slope coefficient of 0.02. During data preprocessing, Min–Max normalization is applied to the node features across the entire dataset. The model is trained for 200 epochs using the Adam optimizer with a constant learning rate of 0.01, a batch size of 512, and the mean squared error (MSE) loss function. Finally, during the evaluation phase, the coefficient of determination $R^2$ is used:
$$R^2 = 1 - \frac{\sum_{i=1}^{N}\sum_{j=1}^{M} \left(\hat{y}_{i,j} - y_{i,j}\right)^2}{\sum_{i=1}^{N}\sum_{j=1}^{M} \left(y_{i,j} - \bar{y}\right)^2},$$

where $\hat{y}_{i,j}$ is the output prediction associated with the ground-truth target value $y_{i,j}$, $\bar{y}$ is the average of the target values, $N$ is the number of circuits in the test set, and $M$ is the number of outputs. It is worth emphasizing that $R^2$ quantifies accuracy in relation to the intrinsic variance of the test data. This metric is suitable for fair comparisons across circuits of different scales, as circuits of varying sizes may exhibit a tendency for output values to cluster around or near the mean to varying degrees.
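For reference, the metric can be implemented in a few lines. The single global mean over all targets (rather than a per-output mean) is our reading of the formula above.

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination over N circuits x M outputs, with one
    global mean over all target values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_pred - y_true) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot
```

A perfect predictor scores 1, while predicting the mean of the targets scores 0.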
Dataset setup. For the noisy and noiseless simulator datasets, we randomly generate 20 000 random circuits for each case. In addition to noise information derived from real quantum devices (IBM Perth, IBM Lagos, IBM Nairobi, and IBM Jakarta), we also define a noise model, referred to as Simulated Noise, in which gate errors, $T_1$, $T_2$, and readout errors are fixed at predefined values. The detailed parameters of the noise model are provided in Appendix A. During the qubit-count extrapolation experiments, we randomly generate 40 000 random circuits as the dataset.
4.2 Expectation values prediction
To explore the advantages of using GNNs for circuit output prediction, we conduct extensive experiments. These experiments include predicting single-qubit output expectation values in both noisy and noiseless conditions, predicting two-qubit output expectation values under the same conditions, and evaluating the extrapolation capability of GNNs in predicting circuit outputs in both noisy and noiseless environments.
4.2.1 Single-qubit expectation values
To evaluate the performance of our proposed Graph Neural Networks (GNNs)-based approach for predicting single-qubit expectation values, we randomly generate a dataset of 20 000 circuits for each combination of qubit number and circuit depth, where the number of qubits is set to 3, 4, and 5, and the depth ranges from 5 to 11. Among these, 70% are used as the training set, 20% as the validation set, and the remaining 10% as the test set. The circuits are constructed from the non-parameterized quantum gate set. When predicting single-qubit expectation values, we focus exclusively on the first qubit, corresponding to the rescaled expectation value defined in Eq. (7). It is worth noting that the same GNNs model can predict the single-qubit expectation value for any qubit by simply swapping the target qubit with the first qubit. Under noiseless conditions, the test results for the R² values are summarized in Table 1; under noisy conditions, the test results are presented in Table 2. In the noisy case, the noise model used in the simulation is derived from IBM Perth, incorporating gate errors, relaxation (T1) and dephasing (T2) times, and readout errors. From Table 1, it is evident that GNNs exhibit a significant advantage in predicting single-qubit expectation values. With a dataset of only 20 000 circuits, the GNNs maintain R² values above 0.9 for circuits with 3 to 5 qubits and circuit depths ranging from 5 to 11, reaching up to 0.998. Additionally, the prediction performance decreases as the circuit depth increases. This is because the space of possible circuits grows rapidly with depth, while the training dataset of 20 000 circuits remains small relative to this vast search space. To highlight the advantage of using GNNs for predicting single-qubit expectation values affected by device noise, we compare the data from Table 1 and Table 2 and present the results in Fig. 7. Based on the data in Table 2 and Fig. 7, it can be observed that when noise information from quantum devices is included in the prediction process, the GNNs still predict the single-qubit expectation values of noisy quantum circuits with high accuracy: the R² values remain above 0.9 for circuits with 3 to 5 qubits and circuit depths from 5 to 11, with a maximum value of 0.991. Compared to the noiseless case, the prediction performance of the GNNs under noisy conditions shows only a slight degradation. This indicates that the GNNs effectively learn to utilize the additional device noise information embedded in the node features during training, enabling more accurate predictions.
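For reference, the prediction target itself, the expectation value ⟨Z⟩ on the first qubit, can be computed directly from a simulated statevector. A minimal numpy sketch, assuming big-endian qubit ordering (the first qubit is the most significant bit of the basis-state index):

```python
import numpy as np

def z_expectation_first_qubit(state):
    """<Z> on the first qubit of an n-qubit statevector.

    Big-endian ordering is assumed: basis states in the first half of the
    vector have the first qubit in |0> (eigenvalue +1), those in the second
    half have it in |1> (eigenvalue -1).
    """
    state = np.asarray(state, dtype=complex)
    probs = np.abs(state) ** 2
    signs = np.where(np.arange(state.size) < state.size // 2, 1.0, -1.0)
    return float(np.sum(signs * probs))

# Example: |0> tensor |+>  ->  the first qubit is |0>, so <Z> is +1.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
state = np.kron([1.0, 0.0], plus)
print(z_expectation_first_qubit(state))
```

The qubit-swapping trick mentioned above follows from this directly: relabeling the target qubit as the first qubit permutes the statevector amplitudes but leaves the value of this sum for that qubit unchanged.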
Furthermore, we compare our proposed method with an approach that uses Convolutional Neural Networks (CNNs) for single-qubit expectation value prediction [27]. The CNNs-based approach encodes quantum circuits into feature matrices using one-hot encoding, which are then fed into the networks. However, because CNNs require training data with identical feature dimensions, i.e., feature matrices of the same size, the CNNs-based approach cannot be trained on datasets mixing randomly generated quantum circuits of different qubit numbers and depths. In addition, unlike the GNNs-based approach, which can flexibly incorporate device noise information into the feature vectors of individual nodes, the CNNs-based approach requires explicitly designed input channels to represent such noise. Considering the large variety of noise types in quantum devices and the fact that our circuits are randomly structured and vary in size, constructing a unified multi-channel representation is nontrivial. Therefore, we fully adopt the network architecture proposed in Ref. [27] without designing additional channels dedicated to noise representation. To evaluate and compare the performance of the two methods under noiseless conditions, we randomly generate 20 000 quantum circuits from the non-parameterized gate set, where the number of qubits is set to 3, 4, and 5, and the circuit depth ranges from 5 to 11. The single-qubit expectation values of these circuits are predicted using both the GNNs-based and CNNs-based approaches, and the coefficient of determination R² is calculated for each. The comparative results are shown in Fig. 8. As shown in Fig. 8, the overall prediction accuracy decreases as the circuit depth increases. This trend is consistent with the difficulty recently reported in Ref. [57], where supervised learning was observed to become significantly more challenging for deeper and more highly parameterized quantum circuits. Nevertheless, Fig. 8 also clearly shows that the GNNs-based approach outperforms the CNNs-based approach, particularly as the circuit depth increases. This superior performance is primarily due to the fact that the topology of the GNNs is derived directly from the connectivity of the quantum circuit, enabling them to capture the interactions between quantum gates better than the structure of CNNs does. This advantage becomes increasingly pronounced as the circuit depth grows.
Unlike the CNNs-based approach, which requires the input feature matrix to have a fixed size, meaning that the training quantum circuits must share the same number of qubits and circuit depth, the topology of GNNs can adapt to the structural variations of quantum circuits. This flexibility allows GNNs to be trained on quantum circuits with varying depths and even different numbers of qubits. In the following, we investigate this characteristic of GNNs in depth.
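This adaptivity comes from deriving the graph topology from the circuit itself rather than from a fixed grid. As a sketch of one common encoding (assumed here for illustration, not taken verbatim from the paper): each gate becomes a node, and a directed edge links two gates that act consecutively on the same qubit wire, so circuits of any size yield a valid graph:

```python
import numpy as np

def circuit_to_edges(gates):
    """Build directed edges for a gate-as-node circuit graph.

    gates: list of tuples of qubit indices, one tuple per gate, in temporal
    order. An edge (u, v) means gate v is the next gate after gate u on
    some shared qubit wire.
    Returns an array of shape (2, num_edges): source and target gate indices.
    """
    last_gate_on = {}  # qubit index -> index of the last gate touching it
    edges = []
    for g, qubits in enumerate(gates):
        for q in qubits:
            if q in last_gate_on:
                edges.append((last_gate_on[q], g))
            last_gate_on[q] = g
    return np.array(edges, dtype=int).T

# H on q0, CNOT on (q0, q1), X on q1: the wires give edges 0->1 and 1->2.
print(circuit_to_edges([(0,), (0, 1), (1,)]))
```

Because the edge list grows with the circuit rather than being padded to a fixed shape, the same message-passing network can consume circuits of any width or depth.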
First, we generate 20 000 quantum circuits with a fixed number of qubits and circuit depths randomly distributed between 5 and 11, using the non-parameterized gate set, under both noiseless and noisy conditions. The single-qubit expectation values are predicted using GNNs. Among these, 100 circuits are selected as the test set to visualize scatter plots of the predicted versus actual values and to calculate the coefficient of determination R². The prediction results under noiseless conditions are illustrated in Fig. 9(a), while those under noisy conditions, where the noise model is derived from IBM Perth, are presented in Fig. 9(b). The red dashed line in both panels represents the identity line (predicted value equal to actual value). From the scatter plot under noiseless conditions, we observe that the actual single-qubit expectation values cluster around discrete values, a consequence of the gate set used to generate the random circuits. Under noisy conditions, however, the distribution of actual single-qubit expectation values becomes more scattered as a result of noise effects. Nevertheless, since the GNNs incorporate circuit noise information during prediction, the R² values under noisy conditions exhibit only a negligible decline compared to the noiseless case.
Subsequently, to further our investigation, we lift the restriction of a fixed number of qubits during circuit generation and produce random quantum circuits with varying numbers of qubits and circuit depths. We simulate the performance of GNNs in predicting single-qubit expectation values under noiseless conditions, Simulated Noise, and noise derived from quantum devices, including IBM Perth, IBM Lagos, IBM Nairobi, and IBM Jakarta. In all cases, 20 000 random circuits with depths ranging from 5 to 11 are generated from the non-parameterized quantum gate set to construct the dataset. Since IBM Perth, IBM Lagos, IBM Nairobi, and IBM Jakarta are 7-qubit quantum devices, simulations for these scenarios are conducted for circuits with up to 7 qubits; for the noiseless and Simulated Noise scenarios, the simulations extend up to 16 qubits. Detailed comparative results for the R² values are provided in Table 3 and illustrated in Fig. 10. From Table 3 and Fig. 10, we observe a slight decrease in the R² values as the range of selectable qubit numbers widens. However, considering the exponential growth of the circuit search space brought by this expanded qubit range, this decrease is acceptable. Moreover, the GNNs achieve an R² value of 0.92 or higher across all conditions.
The above investigation into the characteristics of GNNs demonstrates that GNNs can provide reliable predictions of single-qubit expectation values for datasets containing quantum circuits with varying circuit depths and even different numbers of qubits. This significantly simplifies dataset construction, as it removes the constraint of requiring fixed input feature matrix dimensions, as is necessary for the CNNs-based approach. Furthermore, when the noise information affecting the circuits is incorporated into the feature vectors of the GNNs’ nodes, the GNNs achieve performance that is only marginally lower than, and sometimes nearly equivalent to, that under noiseless conditions.
4.2.2 Model extrapolation for GNNs
Since GNNs can take quantum circuits with varying numbers of qubits as input, we partially investigate this characteristic in Section 4.2.1. In this subsection, we further explore the extrapolation capability of GNNs, i.e., their ability to predict, with the same network parameters, the properties of quantum circuits containing more qubits than those in the training dataset. The single-qubit expectation values are set as the prediction targets. For each simulation, we randomly generate 40 000 circuits with depths ranging from 5 to 11 from the non-parameterized quantum gate set to serve as the dataset. During training, the “Gate Qubit” part of each circuit’s node feature vectors is randomly assigned the required qubit indices, ensuring that all qubit slots of the “Gate Qubit” field are trained. We simulate the extrapolation performance of GNNs trained on datasets of 3-qubit, 5-qubit, and 7-qubit circuits, respectively. The results comparing the extrapolation of these GNNs models to circuits with 7, 11, and 16 qubits under noiseless and Simulated Noise conditions are shown in Fig. 11(a). The extrapolation performance of GNNs compared to CNNs under noiseless conditions is shown in Fig. 11(b), where the labels indicate the number of qubits in the training-set circuits. From the comparison in Fig. 11, it can be observed that the extrapolation performance of the GNNs model under noisy conditions is close to its performance under noiseless conditions, and both outperform the extrapolation of the CNNs model under noiseless conditions, particularly for the 3-qubit and 5-qubit training sets. This advantage can plausibly be attributed to the ability of the GNNs model to better capture the connectivity between quantum gates, which facilitates improved extrapolation performance.
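The random qubit-index assignment used during training can be sketched as a relabeling step; a hedged illustration (the exact “Gate Qubit” encoding is assumed here, not reproduced from the paper), where a small circuit’s qubit labels are mapped injectively into the larger index space so that every slot of the fixed-length one-hot field is exercised:

```python
import numpy as np

rng = np.random.default_rng(42)

def remap_gate_qubits(gate_qubits, n_train, n_max):
    """Randomly relabel the qubit indices of a circuit's gates.

    gate_qubits: list of qubit-index tuples, one per gate, on n_train qubits.
    Returns the same gate list with indices mapped injectively into
    range(n_max), so all n_max slots of the 'Gate Qubit' one-hot field
    can appear during training.
    """
    mapping = rng.choice(n_max, size=n_train, replace=False)
    return [tuple(int(mapping[q]) for q in gate) for gate in gate_qubits]

# A 3-qubit circuit whose gates are relabelled into a 7-qubit index space.
gates = [(0,), (1, 2), (0, 2)]
print(remap_gate_qubits(gates, n_train=3, n_max=7))
```

Since the relabeling is injective, the circuit’s connectivity graph is unchanged; only the qubit-identity features vary across training samples, which is what lets the extrapolated qubit slots be learned.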
4.2.3 Two-qubit expectation values
In addition to predicting single-qubit expectation values, GNNs can also be trained to predict two-qubit expectation values. We focus on the first two qubits, corresponding to the rescaled expectation value defined in Eq. (8). Notably, the same GNNs model can predict the two-qubit expectation value for any pair of qubits by swapping the target qubits with the first two qubits. We randomly generate 20 000 circuits with depths ranging from 5 to 11 as the dataset and use GNNs to predict two-qubit expectation values under both noiseless conditions and noisy conditions with noise derived from IBM Perth. The results, including the coefficient of determination R², are presented in Fig. 12. Compared to the prediction of single-qubit expectation values shown in Fig. 9, GNNs demonstrate comparable performance in predicting two-qubit expectation values, achieving consistently reliable results under both noiseless and noisy conditions. This observation further motivates the exploration of broader applications for GNNs.
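As with the single-qubit case, the two-qubit target has a direct statevector definition: the correlator ⟨Z⊗Z⟩ on the first two qubits. A minimal numpy sketch, assuming big-endian qubit ordering:

```python
import numpy as np

def zz_expectation_first_two(state):
    """<Z x Z> on the first two qubits of an n-qubit statevector.

    Big-endian ordering assumed: the first qubit is the most significant
    bit of the basis-state index. The eigenvalue is +1 when the two qubit
    values agree and -1 when they differ.
    """
    state = np.asarray(state, dtype=complex)
    n = int(np.log2(state.size))
    probs = np.abs(state) ** 2
    idx = np.arange(state.size)
    b0 = (idx >> (n - 1)) & 1  # value of the first qubit
    b1 = (idx >> (n - 2)) & 1  # value of the second qubit
    signs = np.where(b0 == b1, 1.0, -1.0)
    return float(np.sum(signs * probs))

# Bell pair on the first two qubits of a 3-qubit register: <Z x Z> is +1.
bell = np.zeros(8)
bell[0b000] = bell[0b110] = 1 / np.sqrt(2)
print(zz_expectation_first_two(bell))
```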
4.3 Circuit performance comparison
To extend the applicability of GNNs to PQCs, which have wide applications in the NISQ era, we investigate the capability of GNNs to predict the overall property of a quantum circuit, specifically the ground state energy of the 4-qubit hydrogen molecule H₂. Following the approach illustrated in Fig. 5, we construct PQCs using the parameterized quantum gate set. Subsequently, the ground state energy of H₂ is calculated using the Variational Quantum Eigensolver (VQE) algorithm, during which the parameters in the quantum circuit are iteratively optimized until convergence. The computed ground state energy of H₂ is then treated as the prediction target for GNNs and used as a performance evaluation metric for the PQCs. Based on this metric, we implement two comparative schemes for circuits with different structures. The first approach, referred to as Indirect Comparison, evaluates the performance of two quantum circuits by predicting their respective metrics separately and comparing them. We also propose a new approach called Direct Comparison, wherein GNNs directly predict the probability that one PQC outperforms another. In this scheme, the input data consist of both circuits encoded as graph structures. To obtain the labels, Min–Max normalization is first applied to the ground state energies calculated for all PQCs; the normalized ground state energies of the two circuits are then differenced, divided by 2, and shifted by 0.5 to generate the label for each circuit pair. In the simulation experiments, we generate datasets of 800, 1000, 5000, and 10 000 noisy PQCs, with their corresponding ground state energies calculated via the VQE algorithm. Two GNNs are trained to perform the predictions required for the Indirect Comparison and Direct Comparison schemes, respectively.
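The label construction for the Direct Comparison scheme (normalize, difference, divide by 2, shift by 0.5) can be sketched as follows. The energy values and the sign convention of the difference are illustrative assumptions; under this convention a label above 0.5 means the second circuit of the pair reaches the lower (better) energy:

```python
import numpy as np

def pair_labels(energies, pairs):
    """Labels in [0, 1] for Direct Comparison training pairs.

    energies: VQE ground-state energies, one per PQC.
    pairs: list of (i, j) index pairs of circuits to compare.
    """
    e = np.asarray(energies, dtype=float)
    e_norm = (e - e.min()) / (e.max() - e.min())          # Min-Max normalization
    return [(e_norm[i] - e_norm[j]) / 2 + 0.5 for i, j in pairs]

# Hypothetical H2 energies (Hartree) for four candidate PQCs.
energies = [-1.13, -0.95, -1.05, -0.80]
print(pair_labels(energies, [(0, 3), (1, 2), (2, 2)]))
```

Dividing the normalized difference by 2 and shifting by 0.5 maps the raw range [−1, 1] onto [0, 1], so the label can be read as a probability-like score with 0.5 marking a tie.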
Before comparing the prediction accuracy of the Direct and Indirect Comparison schemes, we first evaluate the runtime of the three approaches on the same set of quantum circuits through 100 independent VQE experiments. Specifically, three evaluation methods are considered: (i) Direct Comparison, where GNNs predict the probability that one PQC outperforms another; (ii) Indirect Comparison, where the performance of two circuits is compared based on their separately predicted metrics; and (iii) Calculation Comparison, where both circuits are directly optimized using the VQE algorithm with the COBYLA optimizer for 200 iterations to obtain their ground-state energies. The mean and standard deviation of the results over 100 trials are summarized in Table 4 and illustrated in Fig. 13, where the vertical axis is plotted on a logarithmic scale. As shown in Table 4, the Direct Comparison and Indirect Comparison schemes using GNNs as predictors both achieve substantial speedups compared to the Calculation Comparison method based on direct VQE computation. These results demonstrate the potential of the trained GNNs to perform large-scale and rapid performance evaluation of PQCs, providing an efficient alternative to costly VQE simulations.
Having established the runtime advantage of the GNN-based approaches, we next assess their prediction accuracy. The prediction results are compared with the actual outcomes to evaluate the accuracy of the performance comparisons between different circuit structures. Detailed simulation results for varying amounts of training data are presented in Table 5 and Fig. 14. From Table 5 and Fig. 14, we observe that the Direct Comparison scheme, which directly predicts the relative performance between two PQCs of different structures, significantly outperforms the Indirect Comparison scheme, by an average of 36.2% on the same dataset. While the Indirect Comparison scheme can improve its prediction accuracy by increasing the data volume, doing so demands substantial computational resources, particularly when executing the VQE algorithm on noisy PQCs. Additionally, if a complete performance ranking of all circuits is desired, it can be obtained by combining the pairwise predictions with a Bubble Sort algorithm. The Direct Comparison scheme, in addition to providing superior circuit performance predictions over the Indirect Comparison scheme, offers an alternative perspective for the design of GNN-based circuit predictors. For example, in the design of parameterized quantum circuits, when a large number of candidate circuits must be screened and some high-performing reference circuits are already available, the Direct Comparison scheme can leverage these benchmark circuits to rapidly eliminate a large number of mediocre candidates, thereby reducing the cost of evaluating each parameterized circuit individually. This not only broadens the application scope of GNNs as circuit performance predictors but also contributes to the advancement of efficient PQC evaluation strategies in quantum computing.
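The Bubble Sort ranking mentioned above needs only the pairwise comparator, which is exactly what the Direct Comparison predictor supplies. A minimal sketch in plain Python, where `better_than` is a hypothetical stand-in for the trained GNN:

```python
def rank_circuits(circuits, better_than):
    """Rank circuits best-first with Bubble Sort using pairwise comparisons.

    better_than(a, b) stands in for the trained Direct Comparison GNN:
    it returns True when circuit a is predicted to outperform circuit b.
    """
    order = list(circuits)
    n = len(order)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            # Swap adjacent circuits whenever the later one is predicted better.
            if better_than(order[j + 1], order[j]):
                order[j], order[j + 1] = order[j + 1], order[j]
    return order

# Toy stand-in: circuit names with hypothetical energies; lower is better.
energies = {"c1": -1.10, "c2": -1.13, "c3": -0.95}
print(rank_circuits(energies, lambda a, b: energies[a] < energies[b]))
# -> ['c2', 'c1', 'c3']
```

With n candidates this costs O(n²) predictor calls; when only the top candidates matter, comparing each candidate against a few fixed reference circuits (the screening use case described above) reduces this to O(n).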
5 Conclusion
We investigate the application of Graph Neural Networks (GNNs) to predicting the outputs of quantum circuits constructed from either non-parameterized or parameterized quantum gates. The predictions encompass single-qubit expectation values, two-qubit expectation values, and the overall property of a quantum circuit. In our simulations, the feature vectors incorporate device noise information, including gate errors, relaxation (T1) and dephasing (T2) times, and readout errors. As a result, the prediction performance under noisy conditions is comparable to that under noiseless conditions. Furthermore, when compared to the qubit-scalable CNNs-based method, the GNNs-based approach demonstrates superior prediction performance, particularly in scenarios involving scaling from few-qubit to many-qubit circuits. In simulations comparing the Direct Comparison and Indirect Comparison schemes, the Direct Comparison scheme exhibits a significant performance improvement, establishing a robust foundation for applying GNNs-based predictors to PQCs. In the future, we aim to extend the GNNs-based predictor to other circuit properties within the same framework and to apply it to quantum architecture search (QAS) and quantum kernel design (QKD). While our simulations indicate that the predictor possesses a certain degree of extrapolation ability, further improving this ability is left for future investigation.
6 Appendix A: Parameters of the noise model
The noise information used in our experiments for the five noise models (IBM Perth, IBM Lagos, IBM Nairobi, IBM Jakarta, and the Customized Simulated Noise) is summarized in Tables A1–A5. In real quantum devices, the supported gate sets vary, and different quantum gates may exhibit distinct noise characteristics depending on the qubits on which they are applied. For experimental convenience and without affecting the validity of the results, we categorize the error into single-qubit gate error and two-qubit gate error, while neglecting the effects of the actual device topology. In addition, in our Customized Simulated Noise model, all qubits share identical parameter values.
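The shape of such a shared-parameter noise record, and how it could be flattened into the noise slots of the node feature vectors, can be sketched as follows. The numerical values below are illustrative placeholders only, not the parameters from Tables A1–A5, and the field layout is an assumption for illustration:

```python
# Customized Simulated Noise: every qubit shares one fixed parameter set.
# All values here are placeholders, NOT those from Tables A1-A5.
simulated_noise = {
    "single_qubit_gate_error": 3e-4,  # error probability per 1-qubit gate
    "two_qubit_gate_error": 8e-3,     # error probability per 2-qubit gate
    "t1_us": 120.0,                   # relaxation time T1 (microseconds)
    "t2_us": 90.0,                    # dephasing time T2 (microseconds)
    "readout_error": 2e-2,            # probability of misreading a qubit
}

def node_noise_features(noise):
    """Flatten a noise record into the slots appended to each node feature."""
    keys = ["single_qubit_gate_error", "two_qubit_gate_error",
            "t1_us", "t2_us", "readout_error"]
    return [float(noise[k]) for k in keys]

print(node_noise_features(simulated_noise))
```

Because real devices report these quantities per qubit and per gate, the same flattening applies per node when device-derived noise (e.g., IBM Perth) is used instead of the shared values.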
References

[1] E. Farhi, J. Goldstone, and S. Gutmann, A quantum approximate optimization algorithm, arXiv preprint (2014)
[2] D. Amaro, C. Modica, M. Rosenkranz, M. Fiorentini, M. Benedetti, and M. Lubasch, Filtering variational quantum algorithms for combinatorial optimization, Quantum Sci. Technol. 7(1), 015021 (2022)
[3] S. Heng, D. Kim, T. Kim, and Y. Han, How to solve combinatorial optimization problems using real quantum machines: A recent survey, IEEE Access 10, 120106 (2022)
[4] Y. Liu, F. Meng, Z. Li, X. Yu, and Z. Zhang, Quantum approximate optimization algorithm for maximum likelihood detection in massive MIMO, in: 2024 IEEE Wireless Communications and Networking Conference (WCNC), pp 1–6, IEEE, 2024
[5] B. Gülbahar, Maximum-likelihood detection with QAOA for massive MIMO and Sherrington−Kirkpatrick model with local field at infinite size, IEEE Trans. Wirel. Commun. 23(9), 11567 (2024)
[6] X. He, H. Zeng, and X. Yu, Maximum likelihood detection based on warm-start quantum optimization algorithm, in: 2024 5th Information Communication Technologies Conference (ICTC), pp 163–167, IEEE, 2024
[7] B. Gülbahar, Majority voting with recursive QAOA and cost-restricted uniform sampling for maximum-likelihood detection in massive MIMO, IEEE Trans. Wirel. Commun. 24(3), 2620 (2025)
A. Peruzzo, J. McClean, P. Shadbolt, M. H. Yung, X. Q. Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O’Brien, A variational eigenvalue solver on a photonic quantum processor, Nat. Commun. 5(1), 4213 (2014)
[10] R. Babbush, J. McClean, D. Wecker, A. Aspuru-Guzik, and N. Wiebe, Chemical basis of Trotter−Suzuki errors in quantum chemistry simulation, Phys. Rev. A 91(2), 022311 (2015)
[11] Y. Liu, Z. Zhang, Y. Hu, F. Meng, T. Luan, X. Zhang, and X. Yu, Practical circuit optimization algorithm for quantum simulation based on template matching, Quantum Inform. Process. 23(2), 45 (2024)
[12] A. Tranter, P. J. Love, F. Mintert, and P. V. Coveney, A comparison of the Bravyi−Kitaev and Jordan−Wigner transformations for the quantum simulation of quantum chemistry, J. Chem. Theory Comput. 14(11), 5617 (2018)
[13] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe, and S. Lloyd, Quantum machine learning, Nature 549(7671), 195 (2017)
[14] J. Cheng, H. Deng, and X. Qia, ACCQOC: Accelerating quantum optimal control based pulse generation, in: 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), pp 543–555, IEEE, 2020
[15] Z. Hu, P. Dong, Z. Wang, Y. Lin, Y. Wang, and W. Jiang, Quantum neural network compression, in: Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design, pp 1–9, 2022
[16] W. Jiang, J. Xiong, and Y. Shi, A co-design framework of neural networks and quantum circuits towards quantum advantage, Nat. Commun. 12(1), 579 (2021)
[17] Z. Liang, Z. Wang, J. Yang, L. Yang, Y. Shi, and W. Jiang, Can noise on qubits be learned in quantum neural network? A case study on QuantumFlow, in: 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pp 1–7, IEEE, 2021
[18] Z. Liang, H. Wang, J. Cheng, Y. Ding, H. Ren, et al., Variational quantum pulse learning, in: 2022 IEEE International Conference on Quantum Computing and Engineering (QCE), pp 556–565, IEEE, 2022
[19] Z. Wang, Z. Liang, S. Zhou, C. Ding, Y. Shi, and W. Jiang, Exploration of quantum neural architecture by mixing quantum neuron designs, in: 2021 IEEE/ACM International Conference on Computer Aided Design (ICCAD), pp 1–7, IEEE, 2021
[20] J. F. Kam, H. Kang, C. D. Hill, G. J. Mooney, and L. C. L. Hollenberg, Characterization of entanglement on superconducting quantum computers of up to 414 qubits, Phys. Rev. Res. 6(3), 033155 (2024)
[21] M. AbuGhanem, IBM quantum computers: Evolution, performance, and future directions, arXiv preprint (2024)
[22] M. Brooks, Beyond quantum supremacy: The hunt for useful quantum computers, Nature 574(7776), 19 (2019)
[23] Google Quantum AI and Collaborators, Quantum error correction below the surface code threshold, Nature 638, 920 (2025)
[24] B. Cheng, X. H. Deng, X. Gu, Y. He, G. Hu, et al., Noisy intermediate-scale quantum computers, Front. Phys. (Beijing) 18(2), 21308 (2023)
[25] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2, 79 (2018)
[26] A. Katabarwa, K. Gratsea, A. Caesura, and P. D. Johnson, Early fault-tolerant quantum computing, PRX Quantum 5(2), 020101 (2024)
[27] S. Cantori, D. Vitali, and S. Pilati, Supervised learning of random quantum circuits via scalable neural networks, Quantum Sci. Technol. 8(2), 025022 (2023)
[28] C. Lei, Y. Du, P. Mi, J. Yu, and T. Liu, Neural auto-designer for enhanced quantum kernels, arXiv preprint (2024)
H. Liao, D. S. Wang, I. Sitdikov, C. Salcedo, A. Seif, and Z. K. Minev, Machine learning for practical quantum error mitigation, Nat. Mach. Intell. 6(12), 1478 (2024)
[31] S. An, H. Lee, J. Jo, S. Lee, and S. J. Hwang, DiffusionNAG: Predictor-guided neural architecture generation with diffusion models, arXiv preprint (2023)
[32] Y. J. Patel, A. Kundu, M. Ostaszewski, X. Bonet-Monroig, V. Dunjko, and O. Danaci, Curriculum reinforcement learning for quantum architecture search under hardware errors, arXiv preprint (2024)
[33] E. Bernstein and U. Vazirani, Quantum complexity theory, in: Proceedings of the Twenty-Fifth Annual ACM Symposium on Theory of Computing, pp 11–20, 1993
[34] D. Deutsch and R. Jozsa, Rapid solution of problems by quantum computation, Proceedings of the Royal Society of London Series A 439(1907), 553 (1992)
[35] M. Treinish, J. Gambetta, P. Nation, Q. Bot, and P. Kassebaum, Qiskit/qiskit: Qiskit 0.37.0 (2022)
[36] Y. Tao, X. Zeng, Y. Fan, J. Liu, Z. Li, and J. Yang, Exploring accurate potential energy surfaces via integrating variational quantum eigensolver with machine learning, J. Phys. Chem. Lett. 13(28), 6420 (2022)
[37] K. Ghosh, S. Kumar, N. M. Rajan, and S. S. R. K. C. Yamijala, Deep neural network assisted quantum chemistry calculations on quantum computers, ACS Omega 8(50), 48211 (2023)
[38] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2010
[39] C. D. Bruzewicz, J. Chiaverini, R. McConnell, and J. M. Sage, Trapped-ion quantum computing: Progress and challenges, Appl. Phys. Rev. 6(2) (2019)
[40] P. Krantz, M. Kjaergaard, F. Yan, T. P. Orlando, S. Gustavsson, and W. D. Oliver, A quantum engineer’s guide to superconducting qubits, Appl. Phys. Rev. 6(2), 021318 (2019)
[41] E. Magesan, J. M. Gambetta, and J. Emerson, Characterizing quantum gates via randomized benchmarking, Phys. Rev. A 85(4), 042311 (2012)
[42] F. Hua, Y. Chen, Y. Jin, C. Zhang, A. Hayes, Y. Zhang, and E. Z. Zhang, AutoBraid: A framework for enabling efficient surface code communication in quantum computing, in: MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, pp 925–936, 2021
[43] S. Krinner, N. Lacroix, A. Remm, A. Di Paolo, E. Genois, C. Leroux, C. Hellings, S. Lazar, F. Swiadek, J. Herrmann, G. J. Norris, C. K. Andersen, M. Müller, A. Blais, C. Eichler, and A. Wallraff, Realizing repeated quantum error correction in a distance-three surface code, Nature 605(7911), 669 (2022)
[44] G. S. Ravi, K. N. Smith, P. Gokhale, A. Mari, N. Earnest, A. Javadi-Abhari, and F. T. Chong, VAQEM: A variational approach to quantum error mitigation, in: 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), pp 288–303, IEEE, 2022
[45] H. Wang, J. Gu, Y. Ding, Z. Li, F. T. Chong, D. Z. Pan, and S. Han, QuantumNAT: Quantum noise-aware training with noise injection, quantization and normalization, in: Proceedings of the 59th ACM/IEEE Design Automation Conference, pp 1–6, 2022
[46] H. Q. Nguyen, X. B. Nguyen, S. Y. C. Chen, H. Churchill, N. Borys, S. U. Khan, and K. Luu, Diffusion-inspired quantum noise mitigation in parameterized quantum circuits, arXiv preprint (2024)
[47] H. Liao, D. S. Wang, I. Sitdikov, C. Salcedo, A. Seif, and Z. K. Minev, Machine learning for practical quantum error mitigation, Nat. Mach. Intell. 6, 1478 (2024)
[48] G. Corso, H. Stark, S. Jegelka, T. Jaakkola, and R. Barzilay, Graph neural networks, Nat. Rev. Methods Primers 4(1), 17 (2024)
[49] B. Khemani, S. Patil, K. Kotecha, and S. Tanwar, A review of graph neural networks: Concepts, architectures, techniques, challenges, datasets, applications, and future directions, J. Big Data 11(1), 18 (2024)
[50] A. S. Mahdi and N. M. Shati, A survey on fake news detection in social media using graph neural networks, Journal of Al-Qadisiyah for Computer Science and Mathematics 16(2), 23 (2024)
[51] H. Yang, Z. Li, and Y. Qi, Predicting traffic propagation flow in urban road network with multi-graph convolutional network, Complex Intell. Syst. 10(1), 23 (2024)
[52] R. Guan, Z. Li, W. Tu, J. Wang, Y. Liu, X. Li, C. Tang, and R. Feng, Contrastive multi-view subspace clustering of hyperspectral images based on graph convolutional networks, IEEE Trans. Geosci. Remote Sens. 62, 1 (2024)
[53] Y. Xu and R. Zuo, An interpretable graph attention network for mineral prospectivity mapping, Math. Geosci. 56(2), 169 (2024)
[54] C. Wang, Y. Wang, P. Ding, S. Li, X. Yu, and B. Yu, ML-FGAT: Identification of multi-label protein subcellular localization by interpretable graph attention networks and feature-generative adversarial networks, Comput. Biol. Med. 170, 107944 (2024)
[55] L. K. Grover, A fast quantum mechanical algorithm for database search, in: Proceedings of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pp 212–219, 1996
[56] J. Tilly, H. Chen, S. Cao, D. Picozzi, K. Setia, Y. Li, E. Grant, L. Wossnig, I. Rungger, G. H. Booth, and J. Tennyson, The variational quantum eigensolver: A review of methods and best practices, Phys. Rep. 986, 1 (2022)
[57] S. Cantori and S. Pilati, Challenges and opportunities in the supervised learning of quantum circuit expectation values, Phys. Rev. E 112(1), 015304 (2025)
Higher Education Press