Adaptive learning with guaranteed stability for discrete-time recurrent neural networks

Hua Deng , Yi-hu Wu , Ji-an Duan

Journal of Central South University ›› 2007, Vol. 14 ›› Issue (5) : 685-689. DOI: 10.1007/s11771-007-0131-z


Abstract

To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike dynamic gradient methods such as backpropagation through time and real-time recurrent learning, the weights of the recurrent neural network were updated online on the basis of Lyapunov stability theory, so learning stability was guaranteed. By inverting the activation function of the recurrent neural network, the proposed algorithm can be easily implemented to solve varying nonlinear adaptive learning problems, and fast convergence of the adaptive learning process can be achieved. Simulation experiments in pattern recognition show that only 5 iterations are needed to store a 15 × 15 binary image pattern and only 9 iterations are needed to realize an analog vector perfectly as an equilibrium state with the proposed learning algorithm.
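The abstract combines two ideas: inverting the activation function turns the equilibrium-storage requirement into a linear condition on the weights, and a normalized online update keeps the weight-error norm non-increasing (Lyapunov-stable learning). The sketch below illustrates this general idea only; it is not the authors' exact algorithm, and the network form x_{k+1} = tanh(W x_k + b), the pattern, and the normalized update rule are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact method): store a pattern
# x_star as an equilibrium of a discrete-time RNN x_{k+1} = tanh(W x_k + b).
# Inverting the activation turns the fixed-point condition into the
# linear constraint  W x_star + b = atanh(x_star),  which a normalized
# (projection-style) correction enforces with a non-increasing error
# norm -- the Lyapunov-stable flavor of update the abstract describes.

rng = np.random.default_rng(0)
n = 8
x_star = 0.9 * np.sign(rng.standard_normal(n))  # bipolar pattern inside (-1, 1)

W = 0.1 * rng.standard_normal((n, n))
b = np.zeros(n)
target = np.arctanh(x_star)  # inverse activation at the desired equilibrium

for it in range(20):
    e = target - (W @ x_star + b)        # equilibrium (linear-constraint) error
    if np.linalg.norm(e) < 1e-10:
        break
    denom = 1.0 + x_star @ x_star        # normalization bounds the step size
    W += np.outer(e, x_star) / denom     # rank-one correction toward the constraint
    b += e / denom

# x_star is now (numerically) a fixed point of the network map:
assert np.allclose(np.tanh(W @ x_star + b), x_star, atol=1e-6)
```

For a single stored pattern this normalized step drives the equilibrium error to zero almost immediately, which is consistent in spirit with the small iteration counts (5 and 9) reported in the abstract, though the paper's own update rule and convergence figures apply to its specific algorithm.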

Keywords

recurrent neural networks / adaptive learning / nonlinear discrete-time systems / pattern recognition

Cite this article

Hua Deng, Yi-hu Wu, Ji-an Duan. Adaptive learning with guaranteed stability for discrete-time recurrent neural networks. Journal of Central South University, 2007, 14(5): 685-689. DOI: 10.1007/s11771-007-0131-z
