1. College of Physics and Electronic Engineering, Center for Computational Sciences, Sichuan Normal University, Chengdu 610068, China
2. Shenzhen SpinQ Technology Co., Ltd., Shenzhen 518045, China
3. DEEPROUTE.AI
hshiyao@sicnu.edu.cn
Article history: Received 2023-06-28 · Revised 2023-10-17 · Accepted 2023-09-04 · Published 2024-04-15
Abstract
Optimization problems are prevalent in various fields, and the gradient-based gradient descent algorithm is a widely adopted optimization method. However, in classical computing, computing the numerical gradient of a function of $d$ variables requires at least $d+1$ function evaluations, resulting in a computational complexity of $O(d)$. As the number of variables increases, classical gradient estimation methods require substantial resources, ultimately surpassing the capabilities of classical computers. Fortunately, leveraging the principles of superposition and entanglement in quantum mechanics, quantum computers can achieve genuine parallel computing, leading to exponential acceleration over classical algorithms in some cases. In this paper, we propose a novel quantum-based gradient calculation method that requires only a single oracle query to obtain the numerical gradient of a multivariate function. The complexity of this algorithm is just $O(1)$. Building upon this approach, we successfully implemented the quantum gradient descent algorithm and applied it to the variational quantum eigensolver (VQE), creating a pure quantum variational optimization algorithm. Compared with classical gradient-based optimization algorithms, this quantum optimization algorithm has a remarkable complexity advantage, providing an efficient solution to optimization problems. The proposed quantum-based method shows promise in enhancing the performance of optimization algorithms, highlighting the potential of quantum computing in this field.
Optimization problems are a crucial research area that involves finding parameter values that minimize or maximize an objective function subject to given constraints [1–3]. They have broad applications in diverse fields, including machine learning [4–6], finance [7–9], quantum chemical calculations [10, 11], and the pharmaceutical industry [12]. Gradient descent is one of the most widely used methods for solving optimization problems [13, 14]. The basic idea of the gradient descent algorithm is to iteratively adjust the model parameters to minimize the loss function. Specifically, the algorithm computes the gradient of the loss function with respect to each parameter and then updates the parameters in the opposite direction of the gradient so as to decrease the loss function as rapidly as possible. This process is repeated until convergence [15].
With the rapid development of quantum computing, variational quantum algorithms have gained increasing attention as a means to solve optimization problems by adjusting parameters in a quantum circuit to minimize an objective function [16, 17]. In optimization problems, as illustrated in Fig.1(a), we first create a parameterized quantum circuit to evaluate the objective function. Classical optimization algorithms, such as the gradient descent algorithm, are then utilized to optimize the objective function on a classical computer. The optimized parameters are fed back into the quantum circuit, and the optimization process continues until convergence [13, 18]. Various classical optimization algorithms, such as the conjugate gradient method [19] and the Broyden−Fletcher−Goldfarb−Shanno (BFGS) algorithm [20–23], are widely used in variational quantum algorithms. It should be noted that during the optimization process, classical optimization algorithms repeatedly measure the quantum circuit and feed the results into the classical computer for computation, which may affect the algorithm's convergence rate. Moreover, when employing the gradient-based gradient descent algorithm on a classical computer, the numerical gradient calculation is the core of the algorithm, with a complexity of $O(d)$ [24]. As the number of variables increases, the resource consumption grows rapidly, potentially impacting the optimization algorithm's convergence.
Quantum computers leverage the principles of superposition and entanglement to achieve remarkable acceleration over classical computers for certain problems [25–33]. In the context of gradient computation, several quantum gradient calculation methods have been proposed, but they suffer from limitations in applicability or complexity [24, 34–37]. Therefore, in this paper, we propose a pure quantum gradient estimation method that does not require classical computation of the gradient. For a multivariate function, this algorithm achieves numerical gradient computation with just one oracle query, and its complexity is only $O(1)$, so it does not consume additional computational resources as the number of variables increases. Building on this, we implemented a quantum gradient descent algorithm based on pure quantum gradient estimation. We conducted numerical experiments with this algorithm and applied it to calculate the ground state energy of a small-scale Hamiltonian in VQE [18], as shown in Fig.1(b). The optimization process of this algorithm does not depend on classical computation of the gradient. The numerical experiments demonstrate that our pure quantum gradient estimation algorithm, the quantum gradient descent algorithm built on it, and its application to the Heisenberg model all produce satisfactory results. Furthermore, this pure quantum gradient algorithm has a theoretical complexity advantage that can accelerate convergence in large-scale problems, providing a more efficient approach to optimization.
The organizational structure of this paper is as follows: In Section 2, we present a pure quantum gradient estimation algorithm. In Section 3, we describe the quantum gradient descent algorithm implemented using the proposed pure quantum gradient estimation algorithm. In Section 4, we perform numerical experiments on the proposed pure quantum gradient estimation algorithm and quantum gradient descent algorithm. In Section 5, we combine the quantum gradient descent algorithm with the VQE algorithm to calculate the ground state energy of the Heisenberg model, implementing the full quantum variational eigensolver (FQVE). We analyze the errors of the algorithm in Section 6 and conclude with our findings in Section 7.
2 Pure quantum gradient estimation
The gradient of a function is widely used to locate extrema and offers an effective solution for certain optimization problems [24]. However, for a function $f(\boldsymbol{x})$ of a $d$-dimensional variable $\boldsymbol{x}$, calculating the gradient classically necessitates at least $d+1$ function evaluations using the equation

$$\frac{\partial f}{\partial x_i} \approx \frac{f(\boldsymbol{x}+\epsilon\,\boldsymbol{e}_i)-f(\boldsymbol{x})}{\epsilon}, \quad i=1,\dots,d. \qquad (1)$$

Here, $\boldsymbol{e}_i$ denotes the $i$th normalized basis vector and $\epsilon$ is a small step [34]. As the number of parameters increases, the resources required to compute the gradient grow, often surpassing the capabilities of classical computers. In optimization problems, the evaluation of the function itself is typically the most time-consuming part of calculating gradients. Hence, minimizing the number of function evaluations is critical to improving the efficiency of optimization algorithms [34, 38].
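To make this cost concrete, the following minimal Python sketch (our illustration, with an assumed toy objective) implements the forward-difference rule of Eq. (1); it spends one evaluation on the base point and one more per variable, i.e., $d+1$ in total:

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-6):
    """Forward-difference gradient of f at x: d + 1 evaluations of f in total."""
    d = len(x)
    f0 = f(x)                      # one evaluation at the base point
    grad = np.zeros(d)
    for i in range(d):             # d further evaluations, one per variable
        e = np.zeros(d)
        e[i] = eps
        grad[i] = (f(x + e) - f0) / eps
    return grad

# Toy objective f(x, y) = x^2 + 3y: its 2-dimensional gradient costs 3 evaluations.
print(numerical_gradient(lambda v: v[0]**2 + 3*v[1], np.array([1.0, 2.0])))  # ~[2, 3]
```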
2.1 Quantum algorithm for gradient estimation at zero
We observe the availability of a fast quantum algorithm for computing gradients at zero, as proposed in Ref. [34]. For both the classical and quantum cases, given a sufficiently small value of $|\boldsymbol{x}|$, it holds that

$$f(\boldsymbol{x}) \approx f(0) + \boldsymbol{x}\cdot\nabla f(0). \qquad (2)$$
For a function of $d$ variables, the classical computation requires evaluating the function $d+1$ times, whereas quantum superposition allows obtaining the gradient values of all variables with just one function evaluation. In this quantum algorithm, each of the $d$ variables is encoded using $n$ qubits, and a superposition state of all $nd$ qubits is created as

$$|\psi\rangle = \frac{1}{\sqrt{N^d}}\sum_{\delta_1=0}^{N-1}\cdots\sum_{\delta_d=0}^{N-1} |\delta_1\rangle|\delta_2\rangle\cdots|\delta_d\rangle, \quad N=2^n.$$
Then, an oracle is constructed as

$$U_f\,|\boldsymbol{\delta}\rangle = e^{\,i\frac{2\pi N}{ml} f\left(\frac{l}{N}\boldsymbol{\delta}\right)}\,|\boldsymbol{\delta}\rangle,$$

which is applied to the superposition state, producing a phase that contains the function:

$$\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} e^{\,i\frac{2\pi N}{ml} f\left(\frac{l}{N}\boldsymbol{\delta}\right)}\,|\boldsymbol{\delta}\rangle.$$
Let $\boldsymbol{x} = \frac{l}{N}\boldsymbol{\delta}$ (where $l$ is a sufficiently small value) and perform the expansion of Eq. (2); ignoring the global phase $e^{\,i\frac{2\pi N}{ml}f(0)}$, this results in a phase that contains the gradient:

$$\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} e^{\,i\frac{2\pi}{m}\,\boldsymbol{\delta}\cdot\nabla f(0)}\,|\boldsymbol{\delta}\rangle.$$
After applying the Inverse Quantum Fourier Transform to each register, a state containing the gradient is obtained:

$$\left|\frac{N}{m}\frac{\partial f}{\partial x_1}\right\rangle\left|\frac{N}{m}\frac{\partial f}{\partial x_2}\right\rangle\cdots\left|\frac{N}{m}\frac{\partial f}{\partial x_d}\right\rangle.$$
Finally, measurement in the computational basis yields the gradient of the objective function. In the above equations, $d$ represents the number of variables in the function, $n$ represents the number of qubits per variable, $N=2^n$, $m$ is an order-of-magnitude estimate of the actual gradient, the $\delta_i$ are $n$-bit integers (ranging from $0$ to $N-1$), and $l$ is a sufficiently small parameter [34].
Importantly, this algorithm is limited to computing the gradient of the objective function at zero, and its efficiency depends on the value of the parameter $l$. Therefore, it may not be directly suitable for practical applications, such as optimization problems.
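The action of this algorithm can be emulated on a classical statevector as a sanity check. The sketch below is our own illustration (the toy function, qubit count, and values of $m$ and $l$ are assumptions, not taken from the paper): it writes the oracle phase $e^{\,i\frac{2\pi N}{ml}f(\frac{l}{N}\delta)}$ onto an equal superposition and applies the inverse QFT via an FFT, so the measurement peak lands at $k \approx \frac{N}{m}f'(0)$:

```python
import numpy as np

n = 6                    # qubits in the single-variable register (assumed)
N = 2 ** n
m = 4.0                  # order-of-magnitude estimate of the gradient (assumed)
l = 1e-4                 # small expansion parameter (assumed)
f = lambda x: 2.5 * x + 7.0      # toy objective with f'(0) = 2.5

delta = np.arange(N)
# Oracle: phase exp(2*pi*i * (N/(m*l)) * f(l*delta/N)) on each basis state;
# the constant f(0) only contributes a global phase.
psi = np.exp(2j * np.pi * (N / (m * l)) * f(l * delta / N)) / np.sqrt(N)
# Inverse QFT (an FFT up to conventions), then "measure" the most likely outcome.
k = np.argmax(np.abs(np.fft.fft(psi)) ** 2)
print("estimated f'(0) =", m * k / N)    # 2.5, since k peaks at N*f'(0)/m
```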
2.2 Pure quantum gradient estimation algorithm
We propose a pure quantum gradient estimation algorithm, as shown in Fig.2, that can efficiently estimate gradients at arbitrary points of the objective function. Our algorithm provides an effective quantum gradient estimation method for optimization problems.
When approximating the objective function, the Taylor series around a point $\boldsymbol{x}_0$ can be expressed for an infinitesimal increment $\Delta\boldsymbol{x}$ as $f(\boldsymbol{x}_0+\Delta\boldsymbol{x}) \approx f(\boldsymbol{x}_0) + \Delta\boldsymbol{x}\cdot\nabla f(\boldsymbol{x}_0)$. In our algorithm, we take $\boldsymbol{x}_0$ to be the point at which the gradient is computed and set $\Delta\boldsymbol{x} = \frac{l}{N}\boldsymbol{\delta}$ (where $l$ is a sufficiently small parameter, so that $\frac{l}{N}\boldsymbol{\delta}$ is a sufficiently small quantity). Therefore, we obtain

$$f\left(\boldsymbol{x}_0+\frac{l}{N}\boldsymbol{\delta}\right) \approx f(\boldsymbol{x}_0) + \frac{l}{N}\boldsymbol{\delta}\cdot\nabla f(\boldsymbol{x}_0).$$

Consequently, we only need to construct an oracle whose phase contains $f\left(\boldsymbol{x}_0+\frac{l}{N}\boldsymbol{\delta}\right)$.
For an objective function with $d$ variables, we first create a superposition state (using quantum superposition, the gradient can be computed for all $d$ variables simultaneously):

$$|\psi\rangle = \frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} |\delta_1\rangle|\delta_2\rangle\cdots|\delta_d\rangle.$$
We then use a separable-variable quantum adder [39] to add $\boldsymbol{x}_0$ to the superposition state, resulting in

$$\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} |\boldsymbol{\delta}+\boldsymbol{x}_0\rangle.$$
The purpose of this operation is to shift the quantum state by $\boldsymbol{x}_0$ so that the gradient is computed at that point. The separable-variable quantum adder adds $\boldsymbol{x}_0$ to the superposition state while ensuring that the qubits do not become entangled with one another. For example, the CNOT gate can be considered an entangling adder: when the control qubit is in a superposition and the target qubit carries the value to be updated, the two qubits become entangled after the gate, and the output of the target is influenced by the control, making it unsuitable for the subsequent calculations (shown in Fig.4). Therefore, we use a quantum adder without entanglement (QADD), shown in Fig.3, in this case.
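The difference between the two adders can be checked numerically. In the small sketch below (our illustration), a CNOT acting on a superposed control creates a Bell state, which has two nonzero Schmidt coefficients, whereas a QADD-style constant shift $|x\rangle \to |x + c \bmod N\rangle$ acts on one register alone and leaves a product state separable:

```python
import numpy as np

# CNOT on |+>|0> produces a Bell state: the two qubits become entangled.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
bell = CNOT @ np.kron(plus, zero)
print(np.linalg.svd(bell.reshape(2, 2), compute_uv=False))  # two nonzero values

# A constant shift of one register, |x> -> |x + c mod N>, is a permutation of
# that register's own basis states, so a product state stays a product state.
reg = np.array([1.0, 0.0, 0.0, 0.0])          # 2-qubit register in |0>
shifted = np.kron(np.roll(reg, 3), plus)      # QADD-like shift by c = 3
print(np.linalg.svd(shifted.reshape(4, 2), compute_uv=False))  # one nonzero value
```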
In order to compute gradients at arbitrary points $\boldsymbol{x}_0$, we employ the quantum adder without entanglement (QADD) to shift the quantum state by $\boldsymbol{x}_0$ and calculate the gradient at the desired point. To estimate gradients at arbitrary locations, taking into account the carry produced by the quantum adder, we construct an oracle acting on the shifted state whose diagonal elements carry the values of the objective function at the shifted points; for a 2-qubit system computing the derivative at a chosen point $\boldsymbol{x}_0$, these diagonal elements can be written out explicitly.
To prepare the state required by the constructed oracle, we utilize the quantum adder to transform the previous equal-weight superposition state from $\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}}|\boldsymbol{\delta}\rangle$ to the shifted form $\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}}|\boldsymbol{\delta}+\boldsymbol{x}_0\rangle$.
Through the QADD, we attain the target quantum state required for applying the oracle; the subsequent operations then realize gradient estimation at arbitrary points of the objective function.
Next, we construct an oracle containing $f$ in the phase and apply it to the shifted superposition state above, obtaining a state in which the phase of each basis state $|\boldsymbol{\delta}+\boldsymbol{x}_0\rangle$ carries the value of $f$ at the corresponding shifted point.
Using the Taylor expansion above and ignoring the global phase, we obtain the quantum state containing the gradient information in the phase:

$$\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} e^{\,i\frac{2\pi}{m}\,\boldsymbol{\delta}\cdot\nabla f(\boldsymbol{x}_0)}\,|\boldsymbol{\delta}+\boldsymbol{x}_0\rangle.$$
To extract the phase information, we need to shift the superposition state back to the initial equal-weight position, so we apply a separable quantum subtractor (shown in Fig.5), which gives

$$\frac{1}{\sqrt{N^d}}\sum_{\boldsymbol{\delta}} e^{\,i\frac{2\pi}{m}\,\boldsymbol{\delta}\cdot\nabla f(\boldsymbol{x}_0)}\,|\boldsymbol{\delta}\rangle.$$
Taking again the example of a 2-qubit system and calculating the derivative at $\boldsymbol{x}_0$: after passing through the quantum subtractor, each basis state $|\boldsymbol{\delta}+\boldsymbol{x}_0\rangle$ is mapped back to $|\boldsymbol{\delta}\rangle$ while its gradient-dependent phase is retained. In other words, by utilizing the quantum subtractor, we transform the quantum state into a form suitable for performing the Quantum Fourier Transform.
Finally, applying the Inverse Quantum Fourier Transform yields the quantum state containing the gradient information:

$$\left|\frac{N}{m}\frac{\partial f}{\partial x_1}(\boldsymbol{x}_0)\right\rangle\left|\frac{N}{m}\frac{\partial f}{\partial x_2}(\boldsymbol{x}_0)\right\rangle\cdots\left|\frac{N}{m}\frac{\partial f}{\partial x_d}(\boldsymbol{x}_0)\right\rangle,$$

where the $i$th register holds the $i$-th component of the gradient vector. In the above formula, $d$ represents the number of variables in the function, $n$ represents the number of qubits per variable, $N=2^n$, $m$ is an estimate of the order of magnitude of the actual gradient, and $l$ is an extremely small parameter.
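The whole procedure at an arbitrary point can likewise be emulated on a statevector. In the sketch below (our illustration; the objective, qubit count, and parameters are assumptions), the shift by $\boldsymbol{x}_0$ and the shift back are absorbed into evaluating $f$ around $\boldsymbol{x}_0$, and readouts above $N/2$ are decoded as two's-complement negatives, as discussed next:

```python
import numpy as np

def quantum_gradient_sketch(f, x0, n=6, m=4.0, l=1e-4):
    """Statevector emulation of the gradient estimate at an arbitrary point x0."""
    N = 2 ** n
    delta = np.arange(N)
    # Oracle phase built from f evaluated around the shifted point x0; the
    # constant f(x0) only contributes a global phase.
    psi = np.exp(2j * np.pi * (N / (m * l)) * f(x0 + l * delta / N)) / np.sqrt(N)
    k = np.argmax(np.abs(np.fft.fft(psi)) ** 2)   # inverse QFT + measurement
    if k > N // 2:                                # two's-complement readout:
        k -= N                                    # negative gradients wrap around
    return m * k / N

f = lambda x: x ** 2                  # toy objective with f'(x) = 2x
print(quantum_gradient_sketch(f, 0.75))   # ~ 1.5
print(quantum_gradient_sketch(f, -0.5))   # ~ -1.0
```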
In the above process, the gradient estimation is for the case where $\boldsymbol{x}_0$ is positive. For the case where $\boldsymbol{x}_0$ is negative, we can simply subtract the positive value $|\boldsymbol{x}_0|$ from the initial superposition state (by adding the two's complement of $|\boldsymbol{x}_0|$ with a quantum adder), apply the oracle to write the gradient information into the phase, and then add this positive value $|\boldsymbol{x}_0|$ back. Finally, the Inverse Quantum Fourier Transform can be applied to obtain a quantum state that contains the gradient information at the negative point $\boldsymbol{x}_0$.
During the derivation presented above, we note that the evolved gradient has the same sign convention as the parameter $m$. When $m$ is fixed to be positive, the gradient read out by the algorithm is positive. However, when the actual gradient is negative, due to the periodicity of the complex exponential in the Inverse Quantum Fourier Transform, for a negative integer $-k$ [34],

$$e^{-2\pi i\,\delta k/N} = e^{\,2\pi i\,\delta (N-k)/N};$$

therefore, the register reads out $N-k$ in place of $-k$. Thus, the algorithm can also obtain a quantum state related to the negative gradient, which can be understood as the two's complement of the negative gradient without a sign qubit.
3 Quantum gradient descent algorithm
We have implemented quantum gradient descent using the pure quantum gradient estimation algorithm. In the classical gradient descent process [19, 24, 40], $x_i^{(t+1)} = x_i^{(t)} - \eta\,\frac{\partial f}{\partial x_i}$, where $\eta$ is the learning rate and $x_i^{(t)}$ is the $t$th iterate of the $i$th variable. In our quantum implementation, a positive gradient component is obtained directly, which corresponds to normal gradient descent: we subtract a positive number from $x_i^{(t)}$. A negative gradient component is obtained as its two's-complement representation, $N-\left|\partial f/\partial x_i\right|$. Subtracting this two's-complement value from $x_i^{(t)}$ modulo $N$ is the same as adding the absolute value of the negative gradient, so we again have $x_i^{(t+1)} = x_i^{(t)} - \eta\,\frac{\partial f}{\partial x_i}$, which is no different from normal gradient descent. Therefore, whether our algorithm obtains the true gradient or the two's-complement representation of the true gradient, the calculation process of gradient descent is unaffected.
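This sign bookkeeping is ordinary modular arithmetic. A short check (our arithmetic, with arbitrary sample values) shows that subtracting the two's-complement readout modulo $N$ is the same as normal gradient descent with a negative gradient:

```python
N = 16                      # 4-bit register
g = -3                      # a negative gradient component
readout = g % N             # the register reads out the two's complement: 13
x = 9
# Subtracting the readout modulo N equals adding |g|, i.e., plain x - g.
assert (x - readout) % N == (x + abs(g)) % N == (x - g) % N
print((x - readout) % N)    # 12
```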
In the gradient descent formula $x_i^{(t+1)} = x_i^{(t)} - \eta\,\partial f/\partial x_i$, quantum multiplication is required, which can be challenging due to the large number of qubits needed to construct a quantum multiplier. Therefore, with limited qubits, we propose a hybrid approach that combines classical multiplication with pure quantum gradient estimation in the gradient descent process. Specifically, we use the pure quantum gradient estimation algorithm to estimate the gradient of the objective function and then update the parameters classically. However, because classical computation is introduced, we need to perform measurements after each gradient estimation and update the parameters on a classical computer, which may affect the calculation speed to some extent. In the long run, we expect the emergence of large-scale fault-tolerant quantum computers, which will enable our quantum gradient descent algorithm to complete calculations without repeated measurements and improve its performance.
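Under these constraints, the hybrid loop is simple: a quantum-style gradient estimate followed by a classical multiply-and-update. The sketch below (our illustration) reuses the quantum_gradient_sketch helper defined earlier; the learning rate, objective, and step count are assumptions:

```python
def hybrid_gradient_descent(f, x, lr=0.1, steps=50):
    """Hybrid loop: quantum-style gradient estimate, classical parameter update."""
    for _ in range(steps):
        g = quantum_gradient_sketch(f, x)   # measure-and-read gradient estimate
        x = x - lr * g                      # classical multiplication and update
    return x

# Converges to the minimum at 0.5, up to the fixed-point readout resolution m/N.
print(hybrid_gradient_descent(lambda x: (x - 0.5) ** 2, x=1.5))
```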
Analyzing the results of numerical simulations in Section 4.3, we found that although the algorithm partially uses classical computers, it can still effectively find the optimal value of the objective function, which can be used in practical applications.
4 Results
We conducted numerical simulations of the pure quantum gradient estimation algorithm and the quantum gradient descent algorithm using Qiskit [41]. Our results show that these algorithms are effective in estimating the gradients of the objective function and in finding its optimal values.
4.1 Gradient estimation at point zero
In this study, we used Qiskit [41] to numerically simulate pure quantum gradient estimation at zero. We encoded the objective function using four qubits, with two qubits encoding the fractional part. We set the parameters $m$ and $l$ and conducted 1000 experiments on the simulator. From Fig.6, we observed that the final state had a probability of 85.2% of being in the basis state corresponding to an estimated gradient of 1.25 for the objective function. The small difference between the estimated and actual gradients indicates that the algorithm successfully estimated the objective function's gradient.
4.2 Gradient estimation at any point
In this section, we encoded the objective function using four qubits, with two qubits encoding the fractional part of each variable. The encoding transforms a binary string of length $n$ into an $n$-qubit computational basis state. We set the parameters $m$ and $l$ and conducted 1000 experiments on the Qiskit simulator [41]. The gradients of the objective function at four different points are shown in Fig.7. The results indicate that the final quantum states for the four points had probabilities of 100%, 76.1%, 76%, and 100%, respectively, of being in the expected basis states. Combined with the sign analysis, the estimated gradient values at the four points were obtained, and the small differences between the estimated and actual gradient values indicate that the algorithm successfully estimated the gradients of the objective function at arbitrary points.
4.3 Numerical results of gradient descent algorithm
In this study, we utilized the Qiskit simulator [41] to implement gradient descent. Our numerical experiments demonstrate that the quantum gradient descent algorithm based on pure quantum gradient estimation can effectively locate the extremum of the objective function. This provides a firm foundation for applying quantum gradient descent to optimization problems.
Fig.8 shows the iteration process of quantum gradient descent on the objective function from different starting points. All four gradient descent processes used four qubits (including two qubits encoding the fractional part) to encode the data. One step size was used in Fig.8(a) and (b) and a different step size in Fig.8(c) and (d); in each case the algorithm started from a distinct initial point and converged to an extremum at which the actual gradient was very small.
The actual gradients of the extremum points found by the quantum gradient descent algorithm based on pure quantum gradient estimation in all four processes were very small, indicating that the gradient at those points could be approximated as zero. Therefore, it can be concluded that this algorithm successfully found the extremum of the objective function from different starting points.
5 Full quantum variational eigensolver
The variational quantum eigensolver (VQE) is a quantum-classical hybrid algorithm with enormous potential for application on NISQ devices [13, 18, 42–44]. The objective of VQE is to find the ground state and ground-state energy of a given Hamiltonian $H$. In classical computational physics, the variational method is commonly used to estimate the ground-state energy: parameterize the wave function as $|\psi(\boldsymbol{\theta})\rangle$ and update $\boldsymbol{\theta}$ to minimize the expectation value $\langle\psi(\boldsymbol{\theta})|H|\psi(\boldsymbol{\theta})\rangle$ until convergence. On quantum computers, VQE parameterizes the wavefunction with a quantum circuit $U(\boldsymbol{\theta})$ applied to an initial state $|\psi_0\rangle$, and then optimizes $\boldsymbol{\theta}$ to minimize the expectation value $E(\boldsymbol{\theta}) = \langle\psi_0|U^\dagger(\boldsymbol{\theta})\,H\,U(\boldsymbol{\theta})|\psi_0\rangle$ until convergence [18].
Gradient-based gradient descent is a commonly used optimization method in VQE, where the gradient can be directly calculated using the parameter-shift rule [45, 46]:

$$\frac{\partial E(\boldsymbol{\theta})}{\partial\theta_i} = \frac{1}{2}\left[E\left(\boldsymbol{\theta}+\frac{\pi}{2}\boldsymbol{e}_i\right) - E\left(\boldsymbol{\theta}-\frac{\pi}{2}\boldsymbol{e}_i\right)\right],$$

where $E(\boldsymbol{\theta})$ is the expectation value of the Hamiltonian and $\boldsymbol{e}_i$ is the $i$th unit vector in the parameter space [18]. The hardware-efficient ansatz [43], the unitary coupled cluster ansatz [43, 47], and the Hamiltonian variational ansatz [48, 49] are common choices for $U(\boldsymbol{\theta})$.
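For concreteness, here is a minimal sketch of the parameter-shift rule (our illustration; the toy expectation function is an assumption). Note it still costs two circuit evaluations per parameter, i.e., $2d$ in total, which is the $O(d)$ scaling our quantum gradient estimation avoids:

```python
import numpy as np

def parameter_shift_gradient(energy, theta):
    """Parameter-shift gradient: two energy evaluations per parameter (2d total)."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = np.pi / 2
        grad[i] = 0.5 * (energy(theta + e) - energy(theta - e))
    return grad

# Toy check: for E(theta) = cos(theta) the rule is exact, dE/dtheta = -sin(theta).
print(parameter_shift_gradient(lambda t: np.cos(t[0]), np.array([0.3])))
```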
This article applies the quantum gradient descent algorithm to VQE to find the ground-state energy of the 2-qubit Heisenberg model and compares it with the traditional gradient-based gradient descent algorithm. The Hamiltonian can be written as

$$H = \sigma_x^1\sigma_x^2 + \sigma_y^1\sigma_y^2 + \sigma_z^1\sigma_z^2,$$

where $\sigma_x^i$, $\sigma_y^i$, $\sigma_z^i$ are the Pauli operators on the $i$th qubit. The ansatz is shown in Fig.9; the initial parameter is a random number within a fixed interval, and the learning rate is fixed.
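The target value both optimizers should reach can be computed by exact diagonalization. A minimal check (our illustration, assuming unit coupling $J = 1$, which the text does not state explicitly):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])

# 2-qubit Heisenberg Hamiltonian with assumed J = 1: H = XX + YY + ZZ.
H = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)
print(np.linalg.eigvalsh(H)[0])   # ground-state energy: -3 (singlet state)
```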
Specifically, we replace the traditional gradient-based gradient descent with the pure quantum gradient estimation algorithm and the quantum gradient descent algorithm to obtain the full quantum variational eigensolver (FQVE).
To implement the FQVE algorithm on a fully quantum circuit and introduce our pure quantum gradient estimation algorithm, as shown in Eq. (10), we need to construct an oracle whose phase contains the objective function, i.e., the expectation value of the Hamiltonian.
We note that this problem can be addressed using techniques such as block encoding [36, 50] and Hamiltonian simulation [36, 51], described in Appendix A. However, even for the small-scale model problems considered in this paper, the method in Appendix A requires a large number of qubits (more than 50), which makes real experiments challenging on current quantum computers or simulators. Therefore, similar to the general VQE, in this paper we use the conventional VQE method to obtain the expectation value of the Hamiltonian and then optimize it using our pure quantum gradient descent algorithm. Nevertheless, our FQVE algorithm performs similarly to the general VQE algorithm on this problem. Furthermore, we look forward to the emergence of large-scale quantum computers, on which our FQVE algorithm, running fully quantumly, would hold a theoretical advantage over the general VQE algorithm.
We encode the parameters using four qubits, with two qubits for the fractional part. We compare the performance of the method to the classical gradient descent algorithm and present the raw data in Fig.10. Our experiments demonstrate that, after sufficient iterations, both the quantum and classical gradient descent algorithms can find the ground-state energy of this model. Interestingly, we observe that the quantum gradient descent algorithm performs almost as well as the classical gradient descent algorithm for the small-scale parameters in VQE. However, for a large number of parameters $d$, the quantum gradient descent algorithm (complexity $O(1)$) has a significant advantage over the classical gradient descent algorithm (complexity $O(d)$).
6 The impact of numerical errors
In Section 4, we conducted numerical experiments and observed the gradient that our algorithm produced for the objective function when two qubits were used to encode the fractional part. As illustrated in Fig.11, increasing the number of qubits used for encoding the fractional part improved the precision of the algorithm's output for the same objective function. This is because the precision of numerical representation in a quantum state increases with the number of encoding qubits. Therefore, given sufficient computing resources, increasing the number of qubits used for encoding the fractional part improves the precision of the algorithm.
Furthermore, we note that the parameter $m$ also influences the gradient estimate. The parameter $m$ provides an order-of-magnitude estimate of the actual gradient and is typically taken to be larger than it. If $m$ is smaller than the actual gradient, the value encoded in the quantum state exceeds the range that can be represented by the qubits. As shown in Fig.12, for the same objective function, the estimate provided by the algorithm becomes more accurate as the value of $m$ approaches the actual gradient.
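Both observations follow from the readout granularity: one computational-basis step of the gradient register changes the decoded gradient by $m/N$. A small sketch (our arithmetic, with assumed sample values):

```python
# Decoded gradient = m * k / N, so one step of k changes it by m / N: the
# resolution improves with more qubits (larger N) and with m closer to the
# actual gradient magnitude.
for n in (4, 6, 8):
    N = 2 ** n
    for m in (8.0, 2.0):
        print(f"n = {n}, m = {m}: resolution m/N = {m / N}")
```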
7 Conclusion
In this work, we propose an effective pure quantum gradient estimation algorithm for estimating numerical gradients. The algorithm has a theoretical query complexity of $O(1)$ [34], which provides significant advantages over classical algorithms with a computational complexity of $O(d)$. Numerical simulation results using Qiskit [41] demonstrate that the algorithm can successfully estimate the gradient of a complex objective function. Our error analysis shows that increasing the number of encoding qubits can improve the accuracy of the algorithm to some extent. Building upon this algorithm, we implement a quantum gradient descent algorithm, which, despite being limited by hardware qubit counts, performs well in finding the extremum of a relatively complex objective function, as demonstrated by our numerical simulations. When solving for the ground-state energy of a small-scale Hamiltonian [14, 18], the performance of our full quantum variational eigensolver is comparable to that of the variational quantum eigensolver. However, for large-scale Hamiltonians, our algorithm has a significant theoretical advantage. These results suggest that our algorithm provides a more efficient quantum optimization method than classical optimization algorithms, one that can theoretically improve convergence speed and increase the precision of solutions to optimization problems.
References
[1] E. Farhi, J. Goldstone, A quantum approximate optimization algorithm, arXiv: 1411.4028 (2014)
[2] E. Farhi, J. Goldstone, S. Gutmann, M. Sipser, Quantum computation by adiabatic evolution, arXiv: quant-ph/0001106 (2000)
[3] G. G. Guerreschi, A. Y. Matsuura. QAOA for Max-Cut requires hundreds of qubits for quantum speed-up. Sci. Rep., 2019, 9(1): 6903
[4] M. A. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015
[5] M. Schuld, I. Sinayskiy, The quest for a quantum neural network, arXiv: 1408.7005 (2014)
[6] V. Dunjko, J. M. Taylor, H. J. Briegel. Quantum-enhanced machine learning. Phys. Rev. Lett., 2016, 117(13): 130501
[7] T. Sakuma, Application of deep quantum neural networks to finance, arXiv: 2011.07319 (2020)
[8] R. Orus, S. Mugel, E. Lizaso, Quantum computing for finance: Overview and prospects, arXiv: 1807.03890v2 (2018)
[9] D. J. Egger, C. Gambella, J. Marecek, S. McFaddin, M. Mevissen, R. Raymond, A. Simonetto, S. Woerner, E. Yndurain. Quantum computing for finance: State-of-the-art and future prospects. IEEE Trans. Quant. Eng., 2020, 1: 3101724
[10] R. P. Feynman. Forces in molecules. Phys. Rev., 1939, 56: 340
[11] D. A. Fedorov, M. J. Otten, S. K. Gray, Y. Alexeev. Ab initio molecular dynamics on quantum computers. J. Chem. Phys., 2021, 154(16): 164103
[12] V. Gandhi, G. Prasad, D. Coyle, L. Behera, T. M. McGinnity. Quantum neural network-based EEG filtering for a brain–computer interface. IEEE Trans. Neural Netw. Learn. Syst., 2014, 25(2): 278
[13] A. Peruzzo, J. McClean, P. Shadbolt, et al., A variational eigenvalue solver on a quantum processor, arXiv: 1304.3061 (2013)
[14] S. Wei, H. Li, G. Long. A full quantum eigensolver for quantum chemistry simulations. Research, 2020, 2020: 1486935
[15] S. Ruder, An overview of gradient descent optimization algorithms, arXiv: 1609.04747 (2016)
[16] J. R. McClean, J. Romero, R. Babbush, A. Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New J. Phys., 2016, 18(2): 023023
[17] S. Y. Hou, G. Feng, Z. Wu, H. Zou, W. Shi, J. Zeng, C. Cao, S. Yu, Z. Sheng, X. Rao, B. Ren, D. Lu, J. Zou, G. Miao, J. Xiang, B. Zeng. SpinQ Gemini: A desktop quantum computing platform for education and research. EPJ Quantum Technol., 2021, 8(1): 20
[19] G. Yuan, T. Li, W. Hu. A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration. J. Inequal. Appl., 2019, 2019(1): 247
[20] C. G. Broyden. The convergence of a class of double-rank minimization algorithms (1): General considerations. IMA J. Appl. Math., 1970, 6(1): 76
[21] R. Fletcher. A new approach to variable metric algorithms. Comput. J., 1970, 13(3): 317
[22] D. Goldfarb. A family of variable-metric methods derived by variational means. Math. Comput., 1970, 24(109): 23
[23] D. F. Shanno. Conditioning of quasi-Newton methods for function minimization. Math. Comput., 1970, 24(111): 647
[24] P. Gao, K. Li, S. Wei, J. Gao, G. Long. Quantum gradient algorithm for general polynomials. Phys. Rev. A, 2021, 103(4): 042403
[25] M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2010
[26] D. P. DiVincenzo. Quantum computation. Science, 1995, 270(5234): 255
[27] J. Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2018, 2: 79
[28] P. W. Shor, Algorithms for quantum computation: Discrete logarithms and factoring, in: Proceedings 35th Annual Symposium on Foundations of Computer Science, 1994, pp 124–134
[29] T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, R. Blatt. Realization of a scalable Shor algorithm. Science, 2016, 351(6277): 1068
[30] L. K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett., 1997, 79(2): 325
[31] G. L. Long. Grover algorithm with zero theoretical failure rate. Phys. Rev. A, 2001, 64(2): 022307
[32] A. W. Harrow, A. Hassidim, S. Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 2009, 103(15): 150502
[33] B. Duan, J. Yuan, C. H. Yu, J. Huang, C. Y. Hsieh. A survey on HHL algorithm: From theory to application in quantum machine learning. Phys. Lett. A, 2020, 384(24): 126595
[34] S. P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 2005, 95(5): 050501
[35] R. Wiersema, D. Lewis, D. Wierichs, J. Carrasquilla, N. Killoran, Here comes the SU(N): multivariate quantum gates and gradients, arXiv: 2303.11355 (2023)
[36] A. Gilyén, S. Arunachalam, N. Wiebe, Optimizing quantum optimization algorithms via faster quantum gradient computation, in: Proceedings of the 2019 Annual ACM-SIAM Symposium on Discrete Algorithms, 2019, pp 1425–1444
[37] J. Li. General explicit difference formulas for numerical differentiation. J. Comput. Appl. Math., 2005, 183(1): 29
[38] W. H. Press, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, 1992
[39] Y. Li, M. Dou, A quantum addition operation method, device, electronic device and storage medium, Origin Quantum (2021)
[40] C. Zhu, R. H. Byrd, P. Lu, J. Nocedal. Algorithm 778: L-BFGS-B. ACM Trans. Math. Softw., 1997, 23(4): 550
[41] G. Aleksandrowicz, et al., Qiskit: An open-source framework for quantum computing, 2019
[42] R. M. Parrish, E. G. Hohenstein, P. L. McMahon, T. J. Martínez. Quantum computation of electronic transitions using a variational quantum eigensolver. Phys. Rev. Lett., 2019, 122(23): 230401
[43] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, J. M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 2017, 549(7671): 242
[44] C. Hempel, C. Maier, J. Romero, J. McClean, T. Monz, H. Shen, P. Jurcevic, B. P. Lanyon, P. Love, R. Babbush, A. Aspuru-Guzik, R. Blatt, C. F. Roos. Quantum chemistry calculations on a trapped-ion quantum simulator. Phys. Rev. X, 2018, 8(3): 031022
[45] K. Mitarai, M. Negoro, M. Kitagawa, K. Fujii. Quantum circuit learning. Phys. Rev. A, 2018, 98(3): 032309
[46] M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, N. Killoran. Evaluating analytic gradients on quantum hardware. Phys. Rev. A, 2019, 99(3): 032331
[47] J. Lee, W. J. Huggins, M. Head-Gordon, K. B. Whaley. Generalized unitary coupled cluster wave functions for quantum computation. J. Chem. Theory Comput., 2019, 15(1): 311
[48] D. Wecker, M. B. Hastings, M. Troyer. Progress towards practical quantum variational algorithms. Phys. Rev. A, 2015, 92(4): 042303
[49] R. Wiersema, C. Zhou, Y. de Sereville, J. F. Carrasquilla, Y. B. Kim, H. Yuen. Exploring entanglement and optimization within the Hamiltonian variational ansatz. PRX Quantum, 2020, 1(2): 020319
[50] A. Gilyén, Y. Su, G. H. Low, N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics, arXiv: 1806.01838 (2018)
[51] G. H. Low, I. L. Chuang, Hamiltonian simulation by qubitization, arXiv: 1610.06546 (2016)
[52] A. M. Childs, N. Wiebe, Hamiltonian simulation using linear combinations of unitary operations, arXiv: 1202.5822 (2012)
[53] L. Bottou, Large-scale machine learning with stochastic gradient descent, in: Proceedings of COMPSTAT'2010, edited by Y. Lechevallier and G. Saporta, Physica-Verlag HD, Heidelberg, 2010, pp 177–186
RIGHTS & PERMISSIONS
Higher Education Press
AI Summary 中Eng×
Note: Please be aware that the following content is generated by artificial intelligence. This website is not responsible for any consequences arising from the use of this content.