A Unified Neural Kernel for Neural Network Learning
Shao-Qun Zhang, Yong-Ming Tian, Zong-Yi Chen, Xun Lu
In recent decades, there has been considerable interest in the distinction and connection between neural network learning and kernel learning. Recent advances have made theoretical progress in connecting infinite-width neural networks to Gaussian processes. Two predominant approaches have emerged: the Neural Network Gaussian Process (NNGP) and the Neural Tangent Kernel (NTK). The former, rooted in Bayesian inference, is a zero-order kernel, while the latter, grounded in the tangent space of gradient descent, is a first-order kernel. In this paper, we present the Unified Neural Kernel (UNK), which is induced by the inner product of post-activated variables and characterizes the learning dynamics of neural networks trained with gradient descent from a given parameter initialization. The proposed UNK kernel maintains the limiting properties of both NNGP and NTK, exhibiting behavior akin to the NTK at a finite learning step and converging to the NNGP as the learning step tends to infinity. Moreover, we theoretically characterize the uniform tightness and learning convergence of the UNK kernel, providing comprehensive insights into this unified kernel. Experimental results underscore the effectiveness of the proposed method.
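To make the zero-order versus first-order distinction concrete, the following is a minimal illustrative sketch (not the authors' UNK construction) that empirically estimates both limiting kernels for a one-hidden-layer ReLU network under the NTK parameterization: the NNGP kernel as the output covariance over random initializations, and the NTK as the inner product of parameter gradients. The width, sample count, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4096  # hidden width (large, to approximate the infinite-width limit)
d = 3     # input dimension

def relu(z):
    return np.maximum(z, 0.0)

def init_params():
    # NTK parameterization: standard Gaussian weights, output scaled by 1/sqrt(m).
    W = rng.standard_normal((m, d))
    a = rng.standard_normal(m)
    return W, a

def f(x, W, a):
    # One-hidden-layer network: f(x) = a . relu(W x) / sqrt(m).
    return a @ relu(W @ x) / np.sqrt(m)

def grad_f(x, W, a):
    # Gradient of f with respect to all parameters (W and a), flattened.
    h = W @ x
    s = (h > 0).astype(float)             # ReLU derivative
    gW = np.outer(a * s, x) / np.sqrt(m)  # df/dW
    ga = relu(h) / np.sqrt(m)             # df/da
    return np.concatenate([gW.ravel(), ga])

x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)

# Zero-order (NNGP): covariance of outputs at random initialization,
# estimated by averaging f(x1) * f(x2) over fresh parameter draws.
samples = []
for _ in range(200):
    W, a = init_params()
    samples.append((f(x1, W, a), f(x2, W, a)))
samples = np.array(samples)
nngp = (samples[:, 0] * samples[:, 1]).mean()

# First-order (NTK): inner product of parameter gradients at one initialization.
W, a = init_params()
ntk = grad_f(x1, W, a) @ grad_f(x2, W, a)

print(f"empirical NNGP(x1, x2) ~ {nngp:.3f}")
print(f"empirical  NTK(x1, x2) ~ {ntk:.3f}")
```

In this sketch the NNGP depends only on the distribution of the network outputs at initialization, whereas the NTK depends on the gradient geometry around a particular initialization; the UNK kernel described above is designed to interpolate between these two regimes as training proceeds.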
Neural Network Learning / Unified Neural Kernel / Neural Network Gaussian Process / Neural Tangent Kernel / Gradient Descent / Parameter Initialization / Uniform Tightness / Convergence
© Higher Education Press 2026