Pure quantum gradient descent algorithm and full quantum variational eigensolver

Ronghang Chen , Zhou Guang , Cong Guo , Guanru Feng , Shi-Yao Hou

Front. Phys., 2024, Vol. 19, Issue 2: 21202. DOI: 10.1007/s11467-023-1346-7

RESEARCH ARTICLE
Abstract

Optimization problems are prevalent in various fields, and the gradient-based gradient descent algorithm is a widely adopted optimization method. However, in classical computing, computing the numerical gradient of a function with d variables requires at least d+1 function evaluations, resulting in a computational complexity of O(d). As the number of variables increases, classical gradient estimation methods require substantial resources, ultimately surpassing the capabilities of classical computers. Fortunately, leveraging the principles of superposition and entanglement in quantum mechanics, quantum computers can achieve genuine parallel computing, leading to exponential acceleration over classical algorithms in some cases. In this paper, we propose a novel quantum-based gradient calculation method that requires only a single oracle query to obtain the numerical gradient of a multivariate function. The complexity of this algorithm is just O(1). Building upon this approach, we implemented the quantum gradient descent algorithm and applied it to the variational quantum eigensolver (VQE), creating a pure quantum variational optimization algorithm. Compared with classical gradient-based optimization algorithms, this quantum optimization algorithm has remarkable complexity advantages, providing an efficient solution to optimization problems. The proposed quantum-based method shows promise in enhancing the performance of optimization algorithms, highlighting the potential of quantum computing in this field.

Graphical abstract

Keywords

quantum algorithm / gradient descent / variational quantum algorithm

Cite this article

Ronghang Chen, Zhou Guang, Cong Guo, Guanru Feng, Shi-Yao Hou. Pure quantum gradient descent algorithm and full quantum variational eigensolver. Front. Phys., 2024, 19(2): 21202 DOI:10.1007/s11467-023-1346-7


1 Introduction

Optimization problems are a crucial research area that involves finding parameter values that minimize or maximize an objective function subject to given constraints [1−3]. They have broad-ranging applications in diverse fields, including machine learning [4−6], finance [7−9], quantum chemical calculations [10, 11], and the pharmaceutical industry [12]. Gradient descent is one of the most widely used optimization methods [13, 14]. The basic idea of the gradient descent algorithm is to iteratively adjust the model parameters to minimize the loss function. Specifically, the algorithm computes the gradient of the loss function with respect to each parameter and then updates the parameters in the opposite direction of the gradient, so that the loss function decreases as rapidly as possible. This process is repeated until convergence [15].

With the rapid development of quantum computing, variational quantum algorithms have gained increasing attention as a means to solve optimization problems by adjusting parameters in a quantum circuit to minimize an objective function [16, 17]. In optimization problems, as illustrated in Fig.1(a), we first create a parameterized quantum circuit to calculate the objective function. Classical optimization algorithms, such as the gradient descent algorithm, are then utilized to optimize the objective function on a classical computer. The optimized parameters are then fed back into the quantum circuit, and the optimization process continues until convergence [13, 18]. Various classical optimization algorithms, such as the conjugate gradient method [19] and the Broyden−Fletcher−Goldfarb−Shanno (BFGS) algorithm [20−23], are widely used in variational quantum algorithms. It should be noted that during the optimization process, classical optimization algorithms continuously measure the quantum circuit and feed the results into the classical computer, which may affect the algorithm's convergence rate. Moreover, when employing the gradient-based gradient descent algorithm on a classical computer, the numerical gradient calculation is the core of the algorithm, with a complexity of O(d) [24]. As the number of variables d increases, the resource consumption grows rapidly, potentially impacting the optimization algorithm's convergence.

Quantum computers leverage the principles of superposition and entanglement to achieve remarkable acceleration in certain problems compared to classical computers [25−33]. In the context of gradient computation, several quantum gradient calculation methods have been proposed, but they suffer from limitations in terms of applicability or complexity [24, 34−37]. Therefore, in this paper, we propose a pure quantum gradient estimation method that does not require classical computation of the gradient. For a multivariate function, this algorithm can compute the numerical gradient with just one oracle query, and its complexity is only O(1); it does not consume additional computational resources as the number of variables increases. Based on this, we implemented a quantum gradient descent algorithm built on pure quantum gradient estimation. We conducted numerical experiments using this algorithm and applied it to calculate the ground state energy of a small-scale Hamiltonian in VQE [18], as shown in Fig.1(b). The optimization process of this algorithm does not depend on classical computation of the gradient. The numerical experiments demonstrate that our proposed pure quantum gradient estimation algorithm, the quantum gradient descent algorithm, and their application to the Heisenberg model produce satisfactory results. Furthermore, this pure quantum gradient algorithm has theoretical advantages in complexity, which can accelerate the convergence rate of large-scale optimization problems, providing a more efficient optimization algorithm.

The organizational structure of this paper is as follows: In Section 2, we present a pure quantum gradient estimation algorithm. In Section 3, we describe the quantum gradient descent algorithm implemented using the proposed pure quantum gradient estimation algorithm. In Section 4, we perform numerical experiments on the proposed pure quantum gradient estimation algorithm and quantum gradient descent algorithm. In Section 5, we combine the quantum gradient descent algorithm with the VQE algorithm to calculate the ground state energy of the Heisenberg model, implementing the full quantum variational eigensolver (FQVE). We analyze the errors of the algorithm in Section 6 and conclude with our findings in Section 7.

2 Pure quantum gradient estimation

The gradient of a function is widely used to locate extrema and offers an effective solution for certain optimization problems [24]. However, for a function f of d variables, calculating the gradient requires at least d+1 function evaluations using the equation:

$$\frac{\partial f}{\partial x_i} \approx \frac{f(x + l\,e_i) - f(x)}{l}.$$

Here, $e_i$ denotes the $i$th normalized basis vector and $l$ is a small step [34]. As the number of parameters increases, the resources required to compute the gradient grow, often surpassing the capabilities of classical computers. In optimization problems, the evaluation of the function is typically the most time-consuming task when calculating gradients. Hence, minimizing the number of function evaluations is critical to improving the efficiency of optimization algorithms [34, 38].
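The classical cost being minimized can be made concrete with a short sketch: a forward-difference estimator that spends one base evaluation plus one evaluation per variable, i.e. d+1 calls in total (function and parameter names here are our own, for illustration):

```python
import numpy as np

def classical_gradient(f, x, l=1e-6):
    """Forward-difference gradient: one base evaluation of f plus one
    per variable, i.e. d+1 evaluations and O(d) cost overall."""
    d = len(x)
    f0 = f(x)                        # 1 base evaluation
    grad = np.zeros(d)
    for i in range(d):               # d further evaluations
        e = np.zeros(d)
        e[i] = l
        grad[i] = (f(x + e) - f0) / l
    return grad

# Toy objective: f(x1, x2) = 1.3*x1 + 0.5*x2**2, gradient (1.3, x2)
f = lambda x: 1.3 * x[0] + 0.5 * x[1] ** 2
g = classical_gradient(f, np.array([0.0, 2.0]))   # close to (1.3, 2.0)
```

Each added variable costs one more evaluation of f, which is exactly the O(d) scaling the quantum algorithm below avoids.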

2.1 Quantum algorithm for gradient estimation at zero

We observe the availability of a fast quantum algorithm for computing gradients at zero, as proposed in reference [34]. For both classical and quantum cases, given a sufficiently small value x, it holds that

$$f(x) \approx f(0) + x \cdot \nabla f.$$

For a function of d variables, the classical computation requires evaluating the function d+1 times, whereas the utilization of quantum superposition allows obtaining the gradient values of all variables using just one function evaluation. In this quantum algorithm, each of the d variables is encoded using n qubits, and a superposition state over the $n \times d$ qubits is created as

$$|0\rangle \xrightarrow{H^{\otimes nd}} \frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta\rangle.$$

Then, an oracle $O_f = e^{2\pi i (N/ml) f(x)}$ is constructed and applied to the superposition state, producing a phase that contains the function:

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta\rangle \xrightarrow{O_f} \frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{2\pi i \frac{N}{ml} f(x)} |\delta\rangle.$$

Let $x = (l/N)\delta$ (where $l$ is a sufficiently small value), apply the approximation in Eq. (2), and, ignoring the global phase, obtain a phase that contains the gradient:

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{\frac{2\pi i}{m}\left[\delta_1 \nabla f_1 + \delta_2 \nabla f_2 + \cdots + \delta_d \nabla f_d\right]} |\delta\rangle.$$

After applying the Inverse Quantum Fourier Transform, a state containing the gradient can be obtained as

$$\left|\frac{N}{m}(\nabla f)_1\right\rangle \left|\frac{N}{m}(\nabla f)_2\right\rangle \cdots \left|\frac{N}{m}(\nabla f)_d\right\rangle.$$

Finally, measurement in the computational basis yields the gradient $\nabla f$ of the objective function. In the above equation, d represents the number of variables in the function, n represents the number of qubits per variable, $N = 2^n$, m is an order-of-magnitude estimate of the actual gradient, the $\delta$ are n-bit integers (0 to $N-1$), and l is a sufficiently small parameter [34].

Importantly, this algorithm is limited to computing the gradient of the objective function at zero, and its efficiency depends on the value of the parameter m. Therefore, it may not be suitable for practical applications, such as optimization problems.
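The mechanism can be mimicked classically with a statevector (an illustration, not the paper's circuit): write the oracle's phases onto an equal superposition and apply the inverse QFT, which `np.fft.fft` reproduces up to normalization. All names and defaults below are our own; with n = 4 and m = 4 the read-out resolution is m/N = 0.25, so the true gradient 1.3 appears as 1.25:

```python
import numpy as np

def gradient_at_zero(f, n=4, m=4, l=1e-3):
    """Classical statevector sketch of the zero-point algorithm
    (single variable; names and defaults are ours, not the paper's)."""
    N = 2 ** n
    delta = np.arange(N)
    # Oracle: write f into the phase of an equal superposition over |delta>
    state = np.exp(2j * np.pi * (N / (m * l)) * f(delta * l / N)) / np.sqrt(N)
    # Inverse QFT (np.fft.fft matches it up to normalization), then "measure"
    k = np.argmax(np.abs(np.fft.fft(state)))
    return k * m / N   # rescale the register value |(N/m) grad f>

f = lambda x: 1.3 * x
est = gradient_at_zero(f)   # 1.25: true gradient 1.3 at resolution m/N
```

A single evaluation of f over the superposed arguments carries the whole gradient into the phases, which is the source of the O(1) query count.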

2.2 Pure quantum gradient estimation algorithm

We propose a pure quantum gradient estimation algorithm, as shown in Fig.2, that can efficiently estimate gradients at arbitrary points of the objective function. Our algorithm provides an effective quantum gradient estimation method for optimization problems.

When approximating the objective function, the Taylor series for an infinitesimal quantity $(x - x_0)$ can be expressed as $f(x) \approx f(x_0) + \nabla f(x_0)(x - x_0)$. In our algorithm, we set $x_0 = \Delta x$ (the point at which the gradient is computed), $x_1 = (l/N)\delta$ (where $l$ is a sufficiently small parameter, $x_1$ is a sufficiently small quantity, $\delta = 0, 1, \dots, 2^{n+1}-1$, and $N = 2^n$), and $x = x_1 + \Delta x$. Therefore, we obtain

$$f(x) \approx f(\Delta x) + x_1 \nabla f(\Delta x).$$

Consequently, we only need to construct an oracle containing f(x).

For an objective function with d variables, we first create a superposition state (using quantum superposition, the gradient can be computed for d variables simultaneously):

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta\rangle.$$

We then use a separable variable quantum adder [39] to add $\Delta x$ to the superposition state, resulting in

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta + \Delta x\rangle.$$

The purpose of this operation is to shift the quantum state by $\Delta x$ so that the gradient is computed at this point. The separable variable quantum adder adds $\Delta x$ to the superposition state while ensuring that the qubits do not become entangled with each other. For example, the CNOT gate can be regarded as an entangling adder: when the input of qubit $q_0$ is $\frac{\sqrt{2}}{2}(|0\rangle + |1\rangle)$ and the input of qubit $q_1$ is $|0\rangle$, after passing through the CNOT gate, $q_0$ and $q_1$ become entangled, and the output of $q_1$ is influenced by $q_0$, making it unsuitable for subsequent calculations (shown in Fig.4). Therefore, we use a quantum adder without entanglement (QADD), shown in Fig.3, in this case.
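The entanglement problem with a naive adder can be checked in a two-qubit toy calculation (a sketch; the actual QADD circuit of Fig.3 is not reproduced here, and addition of a classical constant is modeled as a relabeling of basis states):

```python
import numpy as np

# A CNOT acting as an "adder mod 2" entangles its qubits:
plus = np.array([1.0, 1.0]) / np.sqrt(2)        # q0 = (|0> + |1>)/sqrt(2)
zero = np.array([1.0, 0.0])                     # q1 = |0>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # control q0 (most significant)
out = CNOT @ np.kron(plus, zero)                # (|00> + |11>)/sqrt(2): Bell state

# A product state has a rank-1 amplitude matrix; rank 2 means entangled
rank = np.linalg.matrix_rank(out.reshape(2, 2))

# Adding a classical constant, by contrast, only permutes the basis labels
# of one register and cannot entangle it with anything else:
state = np.ones(4) / 2                          # |0>+|1>+|2>+|3>, normalized
shifted = np.roll(state, 1)                     # |delta> -> |delta + 1 mod 4>
```

The rank test shows the CNOT output is entangled, while the constant shift leaves an unentangled, equal-weight superposition with shifted labels.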

In order to compute gradients at arbitrary points $\Delta x$, we employ the quantum adder without entanglement (QADD) to shift the quantum state by $\Delta x$ and calculate the gradient at the desired point. To estimate gradients at arbitrary locations, taking into account the carry after the quantum adder, we construct an oracle $O_f = e^{2\pi i (N/ml) f(x)}$, where $x = x_1 + \Delta x$, $x_1 = (l/N)\delta$, $\delta = 0, 1, \dots, 2^{n+1}-1$, $N = 2^n$, acting on the state $|\delta + \Delta x\rangle$. Taking the example of a 2-qubit system and calculating the derivative at $\Delta x = 1$, the diagonal elements of this oracle can be expressed as

$$\mathrm{diag}\left(e^{2\pi i \frac{N}{ml} f\left(0\frac{l}{N}+\Delta x\right)},\; e^{2\pi i \frac{N}{ml} f\left(1\frac{l}{N}+\Delta x\right)},\; \dots,\; e^{2\pi i \frac{N}{ml} f\left(7\frac{l}{N}+\Delta x\right)}\right).$$

To achieve the desired state for the constructed oracle, we utilize the quantum adder to transform the previous equal-weight superposition state from $|0\rangle + |1\rangle + |2\rangle + |3\rangle$ to $|1\rangle + |2\rangle + |3\rangle + |4\rangle$.

$$O_f \frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta + \Delta x\rangle = \frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{2\pi i \frac{N}{ml}\,\nabla f(\Delta x)\frac{l}{N}\delta}\,|\delta + \Delta x\rangle,$$

can be written as

$$e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 0}|1\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 1}|2\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 2}|3\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 3}|4\rangle.$$

Through the QADD, we attain the target quantum state required for applying the oracle, and the subsequent operations then realize gradient estimation at arbitrary points of the objective function.

Next, we construct an oracle containing f(x) in phase and apply it to the shifted superposition state above, obtaining

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta + \Delta x\rangle \xrightarrow{O_f} \frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{2\pi i \frac{N}{ml} f(x)} |\delta + \Delta x\rangle.$$

Using the above formula and ignoring the global phase, we obtain the quantum state containing the gradient information in phase:

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{\frac{2\pi i}{m}\left[\delta_1 (\nabla f)_1 + \delta_2 (\nabla f)_2 + \cdots + \delta_d (\nabla f)_d\right]} |\delta + \Delta x\rangle.$$

To extract the phase information, we need to shift the superposition state back to the initial equal-weight position, so we need a separable quantum subtractor (shown in Fig.5), which gives

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{\frac{2\pi i}{m}\left[\delta_1 (\nabla f)_1 + \delta_2 (\nabla f)_2 + \cdots + \delta_d (\nabla f)_d\right]} |\delta\rangle.$$

Taking the example of a 2-qubit system and calculating the derivative at Δx=1, the quantum state

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{2\pi i \frac{N}{ml}\,\nabla f(\Delta x)\frac{l}{N}\delta}\,|\delta + \Delta x\rangle,$$

after passing through the quantum subtractor can be expressed as

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{2\pi i \frac{N}{ml}\,\nabla f(\Delta x)\frac{l}{N}\delta}\,|\delta\rangle,$$

in other words,

$$e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 0}|0\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 1}|1\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 2}|2\rangle + e^{\frac{2\pi i}{m}\nabla f(\Delta x)\cdot 3}|3\rangle.$$

By utilizing the quantum subtractor, we successfully transform the quantum state into a form suitable for performing Quantum Fourier Transform.

Finally, applying the Inverse Quantum Fourier Transform yields the quantum state containing the gradient information:

$$\left|\frac{N}{m}(\nabla f(\Delta x))_1\right\rangle \otimes \cdots \otimes \left|\frac{N}{m}(\nabla f(\Delta x))_i\right\rangle \otimes \cdots \otimes \left|\frac{N}{m}(\nabla f(\Delta x))_d\right\rangle,$$

where the subscript $i$ refers to the $i$-th component of the gradient, which is a vector. In the above formula, d represents the number of variables in the function, n represents the number of qubits per variable, $N = 2^n$, m is an order-of-magnitude estimate of the actual gradient and can be taken as $2^n$, and l is an extremely small parameter.
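Putting the steps together, the shift, oracle, unshift, and inverse QFT can be mimicked in a single-variable statevector sketch, with the QADD and subtractor modeled classically as relabeling of basis states (names and parameters are illustrative, not the paper's implementation):

```python
import numpy as np

def gradient_at(f, dx, n=4, m=4, l=1e-3):
    """Single-variable sketch of shift -> oracle -> unshift -> inverse QFT.
    The QADD/subtractor pair is modeled as relabeling basis states."""
    N = 2 ** n
    delta = np.arange(N)
    x = dx + delta * l / N                        # QADD: |delta> -> |delta + dx>
    phases = np.exp(2j * np.pi * (N / (m * l)) * f(x))   # oracle on shifted state
    state = phases / np.sqrt(N)                   # subtractor restores |delta>
    # the factor e^{2 pi i (N/ml) f(dx)} is a global phase and drops out
    k = np.argmax(np.abs(np.fft.fft(state)))      # inverse QFT + measurement
    return k * m / N

f = lambda x: 0.3 * x ** 2                        # f'(x) = 0.6 x
est = gradient_at(f, dx=2.0)                      # near f'(2) = 1.2, within m/N
```

Because the shift only relabels the basis, the oracle evaluates f at points centered on dx, and the read-out recovers the local slope to within the register resolution m/N.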

In the above process, the gradient estimation is for the case where $\Delta x$ is positive. For the case where $\Delta x$ is negative, we can simply subtract a positive value $\Delta x^+$ (by adding the two's complement of $\Delta x$ using a quantum adder) from the initial superposition state, resulting in

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} |\delta - \Delta x^+\rangle,$$

apply the oracle to add the gradient information to the phase:

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{\frac{2\pi i}{m}\left[\delta_1 (\nabla f)_1 + \delta_2 (\nabla f)_2 + \cdots + \delta_d (\nabla f)_d\right]} |\delta - \Delta x^+\rangle,$$

and then add this positive value $\Delta x^+$ back:

$$\frac{1}{\sqrt{N^d}} \sum_{\delta=0}^{N-1} e^{\frac{2\pi i}{m}\left[\delta_1 (\nabla f)_1 + \delta_2 (\nabla f)_2 + \cdots + \delta_d (\nabla f)_d\right]} |\delta\rangle.$$

Finally, the Inverse Quantum Fourier Transform can be applied to obtain a quantum state that contains the gradient information at the negative point $-\Delta x$:

$$\left|\frac{N}{m}(\nabla f(-\Delta x))_1\right\rangle \otimes \cdots \otimes \left|\frac{N}{m}(\nabla f(-\Delta x))_i\right\rangle \otimes \cdots \otimes \left|\frac{N}{m}(\nabla f(-\Delta x))_d\right\rangle.$$

During the derivation presented above, we note that the evolved gradient should have the same sign as the parameter m. When m is fixed to $2^n$, the gradient obtained from the algorithm is positive. However, when the actual gradient is negative, due to the periodicity of the exponential function in the Inverse Quantum Fourier Transform, for a negative integer $k$ [34],

$$\frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} e^{-2\pi i j |k|/N} |j\rangle \to \big|N - |k|\big\rangle,$$

therefore,

$$\frac{1}{\sqrt{N}} \sum_{\delta=0}^{N-1} e^{-2\pi i \left|\frac{\nabla f}{m}\right| \delta} |\delta\rangle \to \left|N - \left|\frac{N}{m}\nabla f\right|\right\rangle.$$

Thus, the algorithm can also obtain a quantum state related to the negative gradient, which, at $m = 2^n$, can be understood as the two's complement of the negative gradient without the sign qubit.
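The two's-complement behavior is easy to check numerically: feeding the inverse QFT a negative "frequency" $-|k|$ makes the measurement peak land at $N - |k|$ (a toy sketch with parameters of our own choosing):

```python
import numpy as np

n = 4
N = 2 ** n
k = 3                                   # suppose (N/m) * grad f = -3
delta = np.arange(N)
state = np.exp(-2j * np.pi * delta * k / N) / np.sqrt(N)   # negative frequency
peak = np.argmax(np.abs(np.fft.fft(state)))                # inverse QFT + measure
# peak == N - |k| == 13: the two's-complement pattern of -3 on 4 bits
```

The binary pattern 1101 of 13 is exactly how -3 is stored in 4-bit two's complement, which is why the descent step in the next section can use the register value directly.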

3 Quantum gradient descent algorithm

We have implemented quantum gradient descent using the pure quantum gradient estimation algorithm. In the classical gradient descent process [19, 24, 40], $x_i^{k+1} = x_i^k - \alpha \nabla f(x_i^k)$, where $\alpha$ is the learning rate and $x_i^k$ is the $k$th iterate of the $i$th variable. In our quantum implementation, for a positive component $\nabla f(x_i^k)$ of the gradient, we obtain $|x_i^{k+1}\rangle = |x_i^k - \alpha (N/m)\nabla f\rangle$, which can be understood as ordinary gradient descent: we subtract a positive number from $x_i^k$. For a negative component $\nabla f(x_i^k)$, we obtain $|x_i^{k+1}\rangle = |x_i^k - \alpha (N - |(N/m)\nabla f|)\rangle$. This can be interpreted as subtracting the two's-complement representation of the negative gradient from $x_i^k$, in other words, adding the absolute value of the negative number. Thus, we have $|x_i^{k+1}\rangle = |x_i^k + \alpha |\nabla f(x_i^k)|\rangle$, which is no different from ordinary gradient descent. Therefore, whether our algorithm obtains the true gradient or its two's-complement representation does not affect the calculation process of gradient descent.

In the gradient descent formula xik+ 1=xikαf(xik), quantum multiplication is required, which can be challenging due to the large number of qubits needed for constructing a quantum multiplier. Therefore, in the case of limited qubits, we propose a hybrid approach that combines classical multiplication with pure quantum gradient estimation in the gradient descent process. Specifically, we use the pure quantum gradient estimation algorithm to estimate the gradient value of the objective function, and then update the parameters using classical methods. However, due to the introduction of classical computation, we need to perform measurements after each gradient estimation and update the parameters using classical computers, which may affect the calculation speed to some extent. In the long run, we expect the emergence of large-scale fault tolerant quantum computers, which will enable our quantum gradient descent algorithm to complete calculations without repeated measurements and improve its performance.
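A minimal version of this hybrid loop, with the quantum gradient estimate simulated as a statevector and the parameter update done classically (all names are ours; the unwrapping step undoes the two's-complement encoding of negative gradients described above):

```python
import numpy as np

def qgrad(f, x0, n=6, m=8, l=1e-3):
    """Single-variable quantum gradient estimate (statevector sketch)."""
    N = 2 ** n
    d = np.arange(N)
    state = np.exp(2j * np.pi * (N / (m * l)) * f(x0 + d * l / N)) / np.sqrt(N)
    g = np.argmax(np.abs(np.fft.fft(state))) * m / N
    return g if g <= m / 2 else g - m    # undo two's-complement wraparound

def hybrid_descent(f, x, alpha=0.1, steps=60):
    """Hybrid loop: quantum gradient estimate, classical parameter update."""
    for _ in range(steps):
        x = x - alpha * qgrad(f, x)      # x^{k+1} = x^k - alpha * grad f(x^k)
    return x

f = lambda x: (x - 1.0) ** 2             # minimum at x = 1
x_star = hybrid_descent(f, x=3.0)        # lands near 1, up to resolution m/N
```

Each iteration needs one "oracle" evaluation plus a measurement, matching the paper's point that the classical multiplication and update are the only non-quantum steps in this regime.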

Analyzing the results of numerical simulations in Section 4.3, we found that although the algorithm partially uses classical computers, it can still effectively find the optimal value of the objective function, which can be used in practical applications.

4 Results

We conducted numerical simulations of the pure quantum gradient estimation algorithm and quantum gradient descent algorithm using Qiskit [41]. Our results show that these algorithms effectively estimate the gradients of the objective function and find its optimal values.

4.1 Gradient estimation at point zero

In this study, we used Qiskit [41] to numerically simulate pure quantum gradient estimation at zero. We encoded the objective function $f(x) = 1.3x$ in the ground state using four qubits, with two qubits encoding the decimal values. We set the parameters N=4 and m=4 and conducted 1000 experiments using the qasm_simulator. From Fig.6, we observed that the state $|(N/m)\nabla f\rangle$ had a probability of 85.2% of being in the state $|0101\rangle$, which corresponds to an estimated gradient of 1.25 for the objective function. The small difference between the estimated and actual gradients suggests that the algorithm successfully estimated the objective function's gradient.

4.2 Gradient estimation at any point

In this section, we encoded the objective function $f(x_1, x_2) = 0.1(x_1 - x_2^2)^2 + 0.1(1 - x_2^2)^2$ in the ground state using four qubits per variable, with two qubits encoding the decimal values for each variable. The encoding transforms a binary string x of length n into a quantum state $|x\rangle = |i_x\rangle$ with n qubits, where $|i_x\rangle$ is the computational basis state. The parameters N=4 and m=4 were used, and we conducted 1000 experiments using the qasm_simulator [41]. The gradients of the objective function at (1, 1.5), (2, 1.25), (2, 1.25), and (1, 1.5) are shown in Fig.7. The results indicate that the final quantum states for each point had a probability of 100% of being in the state $|00010110\rangle$, 76.1% of being in the state $|00111000\rangle$, 76% of being in the state $|00111000\rangle$, and 100% of being in the state $|00010110\rangle$, respectively. Combining with symbolic analysis, the estimated gradient values for each point were (0.25, 1.5), (0.75, 2), (0.75, 2), and (0.25, 1.5). The small differences between the estimated and actual gradient values suggest that the quantum gradient descent algorithm successfully estimated the gradients of the objective function.

4.3 Numerical results of gradient descent algorithm

In this study, we utilized the qasm_simulator [41] to implement gradient descent. Our numerical experiments demonstrate that the quantum gradient descent algorithm based on pure quantum gradient estimation can effectively locate the extremum of the objective function. This provides a firm foundation for applying quantum gradient descent to optimization problems.

For the objective function $f(x_1, x_2) = 0.1(x_1 - x_2^2)^2 + 0.1(1 - x_2^2)^2$, Fig.8 shows the iteration process of quantum gradient descent from different starting points. All four gradient descent processes used four qubits (including two qubits encoding decimals) to encode data. The step size $\alpha = 0.05$ was used in Fig.8(a) and (b), where the starting point of Fig.8(a) was $(x_1, x_2) = (0.7, 1.6)$ and the extremum found by the algorithm was $(x_1, x_2) = (0.825, 0.994)$, with an actual gradient of (0.033, 0.06). The starting point of Fig.8(b) was $(x_1, x_2) = (1, 1.6)$, and the extremum found by the algorithm was $(x_1, x_2) = (1.09, 1.05)$, with an actual gradient of (0.0025, 0.048).

The step size $\alpha = 0.3$ was used in Fig.8(c) and (d), where the starting point of Fig.8(c) was $(x_1, x_2) = (2, 1)$, and the extremum found by the algorithm was $(x_1, x_2) = (0.14, 0.625)$, with an actual gradient of (0.05, 0.09). The starting point of Fig.8(d) was $(x_1, x_2) = (2, 1.5)$, and the extremum found by the algorithm was $(x_1, x_2) = (0.2875, 0.75)$, with an actual gradient of (0.055, 0.04875).

The actual gradients of the extremum points found by the quantum gradient descent algorithm based on pure quantum gradient estimation in all four processes were very small, indicating that the gradient at those points could be approximated as zero. Therefore, it can be concluded that this algorithm successfully found the extremum of the objective function from different starting points.

5 Full quantum variational eigensolver

Variational quantum eigensolver (VQE) is a quantum-classical hybrid algorithm with enormous potential for application on NISQ quantum devices [13, 18, 42−44]. The objective of VQE is to find the ground state and ground state energy of a given Hamiltonian H. In classical computational physics, the variational method is commonly used to estimate the ground state energy of a given Hamiltonian H: parameterize the wave function as $|\psi\rangle = |\psi(\theta)\rangle$ and update $\theta$ to minimize the expectation value $\langle\psi(\theta)|H|\psi(\theta)\rangle$ until convergence. On quantum computers, VQE parameterizes the wavefunction with a quantum circuit $U(\theta)$ applied to the initial state $|0\rangle = |0\rangle^{\otimes n}$, and then optimizes $\theta$ to minimize the expectation value $E(\theta) = \langle 0|U^\dagger(\theta) H U(\theta)|0\rangle$ until convergence [18].

Gradient-based gradient descent is a commonly used optimization method in VQE. In VQE, the gradient can be directly calculated using the parameter-shift rule [45, 46]:

$$\frac{\partial E(\theta)}{\partial \theta_i} = \frac{\langle H\rangle_{\theta_i^+} - \langle H\rangle_{\theta_i^-}}{2},$$

where $\theta_i^\pm = \theta \pm \frac{\pi}{2} e_i$ and $e_i$ is the $i$th unit vector in the parameter space [18]. Hardware-efficient ansatz [43], unitary coupled cluster ansatz [43, 47], and Hamiltonian variational ansatz [48, 49] are common choices for $U(\theta)$.
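For a single Pauli rotation the rule can be checked exactly; here $E(\theta) = \langle 0|R_Y^\dagger(\theta)\, Z\, R_Y(\theta)|0\rangle = \cos\theta$, assuming the standard $\pi/2$ shift for Pauli-rotation gates (a toy check, not the paper's ansatz):

```python
import numpy as np

def E(theta):
    """<0| RY(theta)^dag Z RY(theta) |0> = cos(theta) for one qubit."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    psi = np.array([c, s])               # RY(theta)|0>
    return psi @ np.diag([1.0, -1.0]) @ psi

theta = 0.7
shift = (E(theta + np.pi / 2) - E(theta - np.pi / 2)) / 2   # parameter-shift
exact = -np.sin(theta)                                      # analytic derivative
# shift and exact agree to machine precision
```

Unlike finite differences, the shift rule is exact for such gates, but it still costs two circuit evaluations per parameter, i.e. O(d) overall, which is the cost the quantum gradient estimate replaces.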

This article applies the quantum gradient descent algorithm to VQE to find the ground state energy of the 2-qubit Heisenberg model and compares it with the gradient-based traditional gradient descent algorithm. The Hamiltonian can be written as

$$H_h = X_1 X_2 + Y_1 Y_2 + Z_1 Z_2,$$

where $X_i, Y_i, Z_i$ are the Pauli operators on the $i$th qubit. The ansatz is shown in Fig.9; the initial parameter $\theta$ is a random number between 0 and $\pi$, and the learning rate $\alpha = 0.25$.
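The model is small enough to diagonalize directly, which gives the reference value the optimizers should reach (a numpy check of our own, not part of the paper's workflow):

```python
import numpy as np

# Pauli matrices and the 2-qubit Heisenberg Hamiltonian
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]])
Hh = np.kron(X, X) + np.kron(Y, Y) + np.kron(Z, Z)   # X1X2 + Y1Y2 + Z1Z2
E0 = np.linalg.eigvalsh(Hh).min()                    # close to -3 (singlet)
```

The spectrum consists of a triplet at +1 and the singlet ground state $(|01\rangle - |10\rangle)/\sqrt{2}$ at energy -3.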

Specifically, we replace the traditional gradient-based gradient descent algorithm with the pure quantum gradient estimation algorithm and quantum gradient descent algorithm to obtain the full quantum variational eigensolver (FQVE).

If we want to implement the FQVE algorithm on a fully quantum circuit and introduce our pure quantum gradient estimation algorithm, as shown in Eq. (10), we need to construct an oracle on the phase that contains the objective function, that is, we need to construct such an oracle:

$$O_f:\; e^{2\pi i \frac{N}{ml}\,\langle\psi_\theta|H_h|\psi_\theta\rangle}.$$

We notice that the problem can be addressed using techniques such as block encoding [36, 50] and Hamiltonian simulation [36, 51], described in Appendix A. However, even for the small-scale model problems mentioned in this paper, the method in Appendix A requires a large number of qubits (more than 50), which makes it challenging to perform real experiments on current quantum computers or simulators. Therefore, similar to the general VQE, in this paper we use the conventional VQE method to obtain the expected value of the Hamiltonian and then optimize it using our pure quantum gradient descent algorithm. Nevertheless, our FQVE algorithm performs similarly to the general VQE algorithm on this problem. Furthermore, we look forward to the emergence of large-scale quantum computers, on which our FQVE algorithm, running fully quantumly, has a theoretical advantage over the general VQE algorithm.

We encode the parameters using four qubits, with two qubits for decimal representation. We compare the performance of the method to the classical gradient descent algorithm, and present the raw data results in Fig.10. Our experiments demonstrate that, after sufficient iterations, both the quantum and classical gradient descent algorithms can find the ground state energy of this model. Interestingly, we observe that the quantum gradient descent algorithm performs almost as well as the classical gradient descent algorithm for small-scale parameters in VQE. However, for a large number of parameters N, the quantum gradient descent algorithm (complexity O(1)) has a significant complexity advantage over the classical gradient descent algorithm (complexity O(N)).
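To make the FQVE loop concrete, here is a sketch with a hypothetical one-parameter ansatz (the paper's Fig.9 circuit is not reproduced): $|\psi(\theta)\rangle = \cos(\theta/2)|01\rangle + \sin(\theta/2)|10\rangle$, for which the Heisenberg energy works out to $E(\theta) = 2\sin\theta - 1$; the parameter-shift gradient stands in for the quantum gradient estimate:

```python
import numpy as np

# Hypothetical ansatz |psi(t)> = cos(t/2)|01> + sin(t/2)|10>.
# For Hh = X1X2 + Y1Y2 + Z1Z2 this gives E(t) = 2*sin(t) - 1,
# minimized at t = -pi/2, where the state is the singlet with E = -3.
def E(t):
    return 2 * np.sin(t) - 1

t, alpha = 0.5, 0.25
for _ in range(100):
    grad = (E(t + np.pi / 2) - E(t - np.pi / 2)) / 2   # stand-in gradient
    t -= alpha * grad                                  # descent update
# E(t) converges to the ground-state energy -3
```

With the learning rate 0.25 from the text, the iteration contracts toward $\theta = -\pi/2$ and the energy approaches the exact singlet value.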

6 The impact of numerical errors

In Section 4, we conducted numerical experiments and observed that our algorithm produced a gradient of 1.25 for the objective function f=1.3x, encoded with two qubits to represent the decimal number. As illustrated in Fig.11, increasing the number of qubits used for encoding the decimal numbers improved the precision of the algorithm’s output for the same objective function. This is because the precision of numerical representation in a quantum state increases with the number of qubits used for encoding the decimal numbers. Therefore, increasing the number of qubits used for encoding decimal numbers could improve the precision of the algorithm, given sufficient computing resources.

Furthermore, we noted that the parameter m is another factor that influences the estimation of the gradient. The parameter m provides an approximate estimate of the magnitude of the actual gradient and should typically be larger than the actual gradient. If m is smaller than the actual gradient, the quantum state $|(N/m)\nabla f(\Delta x)\rangle$ obtained by the algorithm will exceed the value N representable with n qubits. As shown in Fig.12, for the objective function f=1.3x, the estimate provided by the algorithm becomes more accurate as the value of m approaches the actual gradient.
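Both error sources reduce to the read-out resolution m/N: adding qubits enlarges N, and choosing m closer to (but above) the actual gradient tightens the grid. A toy sketch for f(x) = 1.3x, with parameters of our own choosing:

```python
import numpy as np

def estimate(n, m, grad=1.3):
    """Read-out of a linear phase e^{2 pi i delta grad/m} on n qubits."""
    N = 2 ** n
    d = np.arange(N)
    state = np.exp(2j * np.pi * d * grad / m) / np.sqrt(N)
    return np.argmax(np.abs(np.fft.fft(state))) * m / N

# Resolution is m/N, so adding fractional qubits tightens the estimate of 1.3:
for n in (4, 6, 8):
    print(n, estimate(n, m=4))   # 1.25, then 1.3125, then 1.296875
```

The successive estimates 1.25, 1.3125, 1.296875 bracket the true value 1.3 ever more closely as n grows, mirroring the trend in Fig.11.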

7 Conclusion

In this work, we propose an effective pure quantum gradient estimation algorithm for estimating numerical gradients. The algorithm has a theoretical query complexity of O(1) [34], which provides significant advantages over classical algorithms with a computational complexity of O(d). Numerical simulation results using Qiskit [41] demonstrate that the algorithm can successfully estimate the gradient of a complex objective function. Our error analysis shows that increasing the number of encoding qubits can improve the accuracy of the algorithm to some extent. Building upon this algorithm, we implement a quantum gradient descent algorithm, which, despite being limited by hardware qubit counts, performs well in finding the extremum of a relatively complex objective function, as demonstrated by our numerical simulations. When solving for the ground state energy of a small-scale Hamiltonian [14, 18], the performance of our full quantum variational eigensolver is comparable to that of the variational quantum eigensolver. However, for large-scale Hamiltonians, our algorithm has a significant theoretical advantage. These results suggest that our algorithm provides a more efficient quantum optimization method than classical optimization algorithms, which can theoretically improve convergence speed and increase the precision of optimization problems.

References

[1]

E. Farhi, J. Goldstone, A quantum approximate optimization algorithm, arXiv: 1411.4028 (2014)

[2]

E. Farhi, J. Goldstone, S. Gutmann, M. Sipser, Quantum computation by adiabatic evolution, arXiv: quant-ph/0001106 (2000)

[3]

G. G. Guerreschi, A. Y. Matsuura. QAOA for Max-Cut requires hundreds of qubits for quantum speed-up. Sci. Rep., 2019, 9: 6903

[4]

M. A. Nielsen, Neural Networks and Deep Learning, Determination Press, 2015

[5]

M. Schuld, I. Sinayskiy, The quest for a quantum neural network, arXiv: 1408.7005 (2014)

[6]

V. Dunjko, J. M. Taylor, H. J. Briegel. Quantum-enhanced machine learning. Phys. Rev. Lett., 2016, 117(13): 130501

[7]

T. Sakuma, Application of deep quantum neural networks to finance, arXiv: 2011.07319 (2020)

[8]

R. Orus, S. Mugel, E. Lizaso, Quantum computing for finance: Overview and prospects, arXiv: 1807.03890v2 (2018)

[9]

D. J. Egger, C. Gambella, J. Marecek, S. McFaddin, M. Mevissen, R. Raymond, A. Simonetto, S. Woerner, E. Yndurain. Quantum computing for finance: State-of-the-art and future prospects. IEEE Trans. Quant. Eng., 2020, 1: 3101724

[10]

R. P. Feynman. Forces in molecules. Phys. Rev., 1939, 56: 340

[11]

D. A. Fedorov, M. J. Otten, S. K. Gray, Y. Alexeev. Ab initio molecular dynamics on quantum computers. J. Chem. Phys., 2021, 154(16): 164103

[12]

V. Gandhi, G. Prasad, D. Coyle, L. Behera, T. M. McGinnity. Quantum neural network-based EEG filtering for a brain–computer interface. IEEE Trans. Neural Netw. Learn. Syst., 2014, 25(2): 278

[13]

A. Peruzzo, J. McClean, P. Shadbolt, et al., A variational eigenvalue solver on a quantum processor, arXiv: 1304.3061 (2013)

[14]

S. Wei, H. Li, G. Long. A full quantum eigensolver for quantum chemistry simulations. Research, 2020, 2020: 1486935

[15]

S. Ruder, An overview of gradient descent optimization algorithms, arXiv: 1609.04747 (2016)

[16]

J. R. McClean, J. Romero, R. Babbush, A. Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New J. Phys., 2016, 18(2): 023023

[17]

M. Cerezo, A. Arrasmith, R. Babbush, S. C. Benjamin, S. Endo, K. Fujii, J. R. McClean, K. Mitarai, X. Yuan, L. Cincio, P. J. Coles, Variational quantum algorithms, arXiv: 2012.09265 (2020)

[18]

S. Y. Hou, G. Feng, Z. Wu, H. Zou, W. Shi, J. Zeng, C. Cao, S. Yu, Z. Sheng, X. Rao, B. Ren, D. Lu, J. Zou, G. Miao, J. Xiang, B. Zeng. Spinq gemini: A desktop quantum computing platform for education and research. EPJ Quantum Technol., 2021, 8(1): 20

[19]

G. Yuan, T. Li, W. Hu. A conjugate gradient algorithm and its application in large-scale optimization problems and image restoration. J. Inequal. Appl., 2019, 2019(1): 247

[20]

C. G. Broyden. The convergence of a class of double-rank minimization algorithms (1): General considerations. IMA J. Appl. Math., 1970, 6(1): 76

[21]

R. Fletcher. A new approach to variable metric algorithms. Comput. J., 1970, 13(3): 317

[22]

D. Goldfarb. A family of variable-metric methods derived by variational means. Math. Comput., 1970, 24(109): 23

[23]

D. F. Shanno. Conditioning of quasi-Newton methods for function minimization. Math. Comput., 1970, 24(111): 647

[24]

P. Gao, K. Li, S. Wei, J. Gao, G. Long. Quantum gradient algorithm for general polynomials. Phys. Rev. A, 2021, 103(4): 042403

[25]

M. Nielsen, I. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2010

[26]

D. P. DiVincenzo. Quantum computation. Science, 1995, 270(5234): 255

[27]

J. Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2018, 2: 79

[28]

P. W. Shor, Algorithms for quantum computation: Discrete logarithms and factoring, in: Proceedings 35th Annual Symposium on Foundations of Computer Science, 1994, pp 124–134

[29]

T. Monz, D. Nigg, E. A. Martinez, M. F. Brandl, P. Schindler, R. Rines, S. X. Wang, I. L. Chuang, R. Blatt. Realization of a scalable Shor algorithm. Science, 2016, 351(6277): 1068

[30]

L. K. Grover. Quantum mechanics helps in searching for a needle in a haystack. Phys. Rev. Lett., 1997, 79(2): 325

[31]

G. L. Long. Grover algorithm with zero theoretical failure rate. Phys. Rev. A, 2001, 64(2): 022307

[32]

A. W. Harrow, A. Hassidim, S. Lloyd. Quantum algorithm for linear systems of equations. Phys. Rev. Lett., 2009, 103(15): 150502

[33]

B. Duan, J. Yuan, C. H. Yu, J. Huang, C. Y. Hsieh. A survey on HHL algorithm: From theory to application in quantum machine learning. Phys. Lett. A, 2020, 384(24): 126595

[34]

S. P. Jordan. Fast quantum algorithm for numerical gradient estimation. Phys. Rev. Lett., 2005, 95(5): 050501

[35]

R. Wiersema, D. Lewis, D. Wierichs, J. Carrasquilla, N. Killoran, Here comes the SU(n): multivariate quantum gates and gradients, arXiv: 2303.11355 (2023)

[36]

A. Gilyén, S. Arunachalam, N. Wiebe, Optimizing quantum optimization algorithms via faster quantum gradient computation, in: Proceedings of the 2019 Annual ACM-SIAM Symposium on Discrete Algorithms, 2019, pp 1425–1444

[37]

J. Li. General explicit difference formulas for numerical differentiation. J. Comput. Appl. Math., 2005, 183(1): 29

[38]

W. H. Press, S. A. Teukolsky, W. T. Vetterling, Numerical Recipes in C, Cambridge University Press, 1992

[39]

Y. Li, M. Dou, A quantum addition operation method, device, electronic device and storage medium, Origin Quantum (2021)

[40]

C. Zhu, R. H. Byrd, P. Lu, J. Nocedal. Algorithm 778: L-BFGS-B. ACM Trans. Math. Softw., 1997, 23(4): 550

[41]

G. Aleksandrowicz, et al., Qiskit: An open-source framework for quantum computing, 2019

[42]

R. M. Parrish, E. G. Hohenstein, P. L. McMahon, T. J. Martínez. Quantum computation of electronic transitions using a variational quantum eigensolver. Phys. Rev. Lett., 2019, 122(23): 230401

[43]

A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J. M. Chow, J. M. Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. Nature, 2017, 549(7671): 242

[44]

C. Hempel, C. Maier, J. Romero, J. McClean, T. Monz, H. Shen, P. Jurcevic, B. P. Lanyon, P. Love, R. Babbush, A. Aspuru-Guzik, R. Blatt, C. F. Roos. Quantum chemistry calculations on a trapped-ion quantum simulator. Phys. Rev. X, 2018, 8(3): 031022

[45]

K. Mitarai, M. Negoro, M. Kitagawa, K. Fujii. Quantum circuit learning. Phys. Rev. A, 2018, 98(3): 032309

[46]

M. Schuld, V. Bergholm, C. Gogolin, J. Izaac, N. Killoran. Evaluating analytic gradients on quantum hardware. Phys. Rev. A, 2019, 99(3): 032331

[47]

J. Lee, W. J. Huggins, M. Head-Gordon, K. B. Whaley. Generalized unitary coupled cluster wave functions for quantum computation. J. Chem. Theory Comput., 2019, 15(1): 311

[48]

D. Wecker, M. B. Hastings, M. Troyer. Progress towards practical quantum variational algorithms. Phys. Rev. A, 2015, 92(4): 042303

[49]

R. Wiersema, C. Zhou, Y. de Sereville, J. F. Carrasquilla, Y. B. Kim, H. Yuen. Exploring entanglement and optimization within the Hamiltonian variational ansatz. PRX Quantum, 2020, 1(2): 020319

[50]

A. Gilyén, Y. Su, G. H. Low, N. Wiebe, Quantum singular value transformation and beyond: exponential improvements for quantum matrix arithmetics, arXiv: 1806.01838 (2018)

[51]

G. H. Low, I. L. Chuang, Hamiltonian simulation by qubitization, arXiv: 1610.06546 (2016)

[52]

A. M. Childs, N. Wiebe, Hamiltonian simulation using linear combinations of unitary operations, arXiv: 1202.5822 (2012)

[53]

L. Bottou, Large-scale machine learning with stochastic gradient descent, in: Proceedings of COMPSTAT’2010, edited by Y. Lechevallier and G. Saporta, Physica-Verlag HD, Heidelberg, 2010, pp 177–186

RIGHTS & PERMISSIONS

Higher Education Press
