On the learning dynamics of two-layer quadratic neural networks for understanding deep learning
Zhenghao TAN, Songcan CHEN
Front. Comput. Sci., 2022, 16(3): 163313
Deep learning is a powerful paradigm in many real-world applications; however, its underlying mechanisms remain largely a mystery. To gain insight into nonlinear hierarchical deep networks, we theoretically describe the coupled nonlinear learning dynamics of two-layer neural networks with quadratic activations, extending existing results for the linear case. The quadratic activation, although rarely used in practice, shares convexity with the widely used ReLU activation and thus produces similar dynamics. We focus on a canonical regression problem with inputs drawn from the standard normal distribution, model gradient descent as a coupled dynamical system in the continuous-time limit, and then use the higher-order moment tensors of the normal distribution to simplify the resulting ordinary differential equations. The simplified system exhibits unexpected fixed points; the existence of these non-globally-optimal stable points implies the existence of saddle points on the loss surface of quadratic networks. Our analysis further shows that certain quantities are conserved during training; these conserved quantities can cause learning to fail when the network is initialized improperly. Finally, we compare the numerical learning curves with the theoretical ones, revealing the two alternately appearing stages of the learning process.
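To make the moment-based simplification concrete, the following is a minimal sketch of the kind of computation the abstract alludes to, under the assumptions of a population squared loss, inputs x ~ N(0, I_d), and a noiseless quadratic teacher; the matrices A, M, and G are our own notation, not necessarily the paper's.

```latex
% A hedged reconstruction, not the paper's exact derivation.
% Student: f(x) = \sum_i a_i (w_i^\top x)^2 = x^\top A x, with A = \sum_i a_i w_i w_i^\top;
% teacher: y = x^\top A_\ast x; residual matrix M = A - A_\ast.
% For x \sim \mathcal{N}(0, I_d) and symmetric M, Gaussian fourth moments give
\[
  L = \tfrac{1}{2}\,\mathbb{E}\!\left[(x^\top M x)^2\right]
    = \tfrac{1}{2}\,(\operatorname{tr} M)^2 + \operatorname{tr}(M^2),
\]
% so gradient flow on the parameters closes into the coupled ODE system
\[
  \dot{w}_i = -2\,a_i\,G\,w_i, \qquad
  \dot{a}_i = -\,w_i^\top G\,w_i, \qquad
  G = (\operatorname{tr} M)\,I + 2M.
\]
```

One conserved quantity can be read off directly: because the quadratic activation is 2-homogeneous, these ODEs give d‖w_i‖²/dt = 2 d(a_i²)/dt, so ‖w_i‖² − 2a_i² is constant along the flow; whether this matches the paper's invariants exactly is an assumption on our part.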
Keywords: learning dynamics / quadratic network / ordinary differential equations
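As a numerical companion, here is a minimal, self-contained sketch (our own illustration, not the authors' code) of a student-teacher quadratic network trained by small-step gradient descent on Gaussian inputs, which approximates the gradient flow above. All names, dimensions, and hyperparameters are illustrative choices; the printed learning curve typically shows the plateau-then-drop pattern of the two alternating stages, and the drift of the candidate invariant ‖w_i‖² − 2a_i² stays near zero.

```python
import numpy as np

# Illustrative student-teacher setup; all sizes/hyperparameters are our choices.
rng = np.random.default_rng(0)
d, k = 8, 4                                    # input dimension, hidden width
W_star = rng.normal(size=(k, d)) / np.sqrt(d)  # teacher weights (normalized rows)
a_star = rng.normal(size=k)                    # teacher output weights

def forward(W, a, X):
    """Two-layer quadratic network: f(x) = sum_i a_i (w_i . x)^2."""
    return (X @ W.T) ** 2 @ a

def grads(W, a, X, y):
    """Gradients of the 0.5 * mean squared error."""
    h = X @ W.T                  # pre-activations, shape (n, k)
    e = (h ** 2) @ a - y         # residuals, shape (n,)
    n = len(y)
    gW = 2.0 * (e[:, None] * h * a).T @ X / n  # dL/dW, shape (k, d)
    ga = (h ** 2).T @ e / n                    # dL/da, shape (k,)
    return gW, ga

# Small "balanced" initialization; the invariant then starts near zero.
W = 0.1 * rng.normal(size=(k, d))
a = 0.1 * rng.normal(size=k)
inv0 = np.sum(W ** 2, axis=1) - 2.0 * a ** 2   # candidate conserved quantity

eta, n_batch = 1e-3, 4096                      # small step ~ continuous-time limit
for t in range(20001):
    X = rng.normal(size=(n_batch, d))          # x ~ N(0, I_d)
    y = forward(W_star, a_star, X)             # noiseless teacher labels
    gW, ga = grads(W, a, X, y)
    W -= eta * gW
    a -= eta * ga
    if t % 2000 == 0:
        loss = 0.5 * np.mean((forward(W, a, X) - y) ** 2)
        # Exactly conserved only for gradient flow; the drift measures the
        # discretization and minibatch error of finite-step SGD.
        drift = np.max(np.abs(np.sum(W ** 2, axis=1) - 2.0 * a ** 2 - inv0))
        print(f"t={t:6d}  loss={loss:.6f}  invariant drift={drift:.2e}")
```

Initializing so that the invariant is badly unbalanced (for example, large |a_i| with near-zero w_i, which makes the w-gradient vanish) is one plausible way to reproduce the initialization-dependent failure mode the abstract mentions, since gradient flow cannot change ‖w_i‖² − 2a_i².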