Error Analysis on Hermite Learning with Gradient Data

Baohuai Sheng, Jianli Wang, Daohong Xiang

Chinese Annals of Mathematics, Series B, 2018, Vol. 39, Issue 4: 705-720. DOI: 10.1007/s11401-018-0091-7

Abstract

This paper deals with Hermite learning, which aims at recovering the target function from samples of both function values and gradient values. Error analysis is conducted for these algorithms by means of approaches from convex analysis in the framework of vector-valued multi-task learning, and improved learning rates are derived.
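
As a brief illustration of the setting (a minimal sketch of the usual formulation of Hermite learning with gradient data, in the spirit of [2], and not necessarily the exact scheme analyzed in this paper), the estimator is obtained from a regularized least-squares problem in a differentiable reproducing kernel Hilbert space $\mathcal{H}_K$ on $X \subset \mathbb{R}^n$:

$$
f_{\mathbf{z}} \;=\; \arg\min_{f \in \mathcal{H}_K} \;\frac{1}{m}\sum_{i=1}^{m}\Big[\big(f(x_i)-y_i\big)^2 \;+\; \big\|\nabla f(x_i)-\mathbf{y}_i'\big\|_2^2\Big] \;+\; \lambda\,\|f\|_K^2,
$$

where the sample $\mathbf{z}=\{(x_i,y_i,\mathbf{y}_i')\}_{i=1}^{m}$ consists of inputs, function values and gradient values, and $\lambda>0$ is the regularization parameter. The learning rates then quantify how fast $f_{\mathbf{z}}$ converges to the target function as the sample size $m$ grows and $\lambda$ is chosen appropriately.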

Keywords

Hermite learning / Gradient learning / Learning rate / Convex analysis / Multitask learning / Differentiable reproducing kernel Hilbert space

Cite this article

Baohuai Sheng, Jianli Wang, Daohong Xiang. Error Analysis on Hermite Learning with Gradient Data. Chinese Annals of Mathematics, Series B, 2018, 39(4): 705-720. DOI: 10.1007/s11401-018-0091-7

References

[1]

Zhou D. X.. Derivative reproducing properties for kernel methods in learning theory. Journal of Computational and Applied Mathematics, 2008, 220: 456-463

[2]

Shi L., Guo X., Zhou D. X.. Hermite learning with gradient data. Journal of Computational and Applied Mathematics, 2010, 233: 3046-3056

[3]

Mukherjee S., Wu Q., Zhou D. X.. Learning gradients on manifolds. Bernoulli, 2010, 16(1): 181-207

[4]

Ying Y. M., Wu Q., Campbell C.. Learning the coordinate gradients. Advances in Computational Mathematics, 2012, 37(3): 355-378

[5]

Cucker F., Smale S.. On the mathematical foundations of learning. Bulletin of the American Mathematical Society, 2002, 39(1): 1-49

[6]

Cucker F., Zhou D. X.. Learning Theory: An Approximation Theory Viewpoint, 2007

[7]

Aronszajn N.. Theory of reproducing kernels. Transactions of the American Mathematical Society, 1950, 68: 337-404

[8]

Belkin M., Niyogi P., Sindhwani V.. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 2006, 7: 2399-2434

[9]

Smale S., Zhou D. X.. Estimating the approximation error in learning theory. Analysis and Applications, 2003, 1(1): 17-41

[10]

Solnon M., Arlot S., Bach F.. Multi-task regression using minimal penalties. Journal of Machine Learning Research, 2012, 13: 2773-2812

[11]

Caponnetto A., Micchelli C. A., Pontil M., Ying Y. M.. Universal multi-task kernels. Journal of Machine Learning Research, 2008, 9: 1615-1646

[12]

Wang J. Y., Bensmail H., Gao X.. Feature selection and multi-kernel learning for sparse representation on a manifold. Neural Networks, 2014, 51: 9-16

[13]

Evgeniou T., Micchelli C. A., Pontil M.. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 2005, 6: 615-637

[14]

Zhang H. Z., Zhang J.. Vector-valued reproducing kernel Banach spaces with applications to multitask learning. Journal of Complexity, 2013, 29(2): 195-215

[15]

Jordão T., Menegatto V. A.. Reproducing properties of differentiable Mercer-like kernels on the sphere. Numerical Functional Analysis and Optimization, 2012, 33(10): 1221-1243

[16]

Ferreira J. C., Menegatto V. A.. Reproducing properties of differentiable Mercer-like kernels. Mathematische Nachrichten, 2012, 285(8-9): 959-973

[17]

Jordão T., Menegatto V. A.. Weighted Fourier-Laplace transforms in reproducing kernel Hilbert spaces on the sphere. Journal of Mathematical Analysis and Applications, 2014, 411: 732-741

[18]

Sun H. W., Wu Q.. A note on application of integral operator in learning. Applied and Computational Harmonic Analysis, 2009, 26: 416-421

[19]

Smale S., Zhou D. X.. Learning theory estimates via integral operators and their applications. Constructive Approximation, 2007, 26: 153-172

[20]

Sheng B. H., Ye P. X.. The learning rates of regularized regression based on reproducing kernel Banach spaces, 2013

[21]

Sheng B. H.. The convergence rates of Shannon sampling learning algorithms. Science China Mathematics, 2012, 55(6): 1243-1256

[22]

Christmann A., Steinwart I.. Consistency and robustness of kernel-based regression in convex risk minimization. Bernoulli, 2007, 13(3): 799-819

[23]

Steinwart I.. Sparseness of support vector machines. Journal of Machine Learning Research, 2003, 4: 1071-1105

[24]

Multiresolution and Information Processing, 2015, 13(4)

[25]

Bauschke H. H., Combettes P. L.. Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2010

[26]

Cucker F., Smale S.. Best choices for regularization parameters in learning theory: On the bias-variance problem. Foundations of Computational Mathematics, 2002, 2: 413-428
