An extended SHESN with leaky integrator neuron and inhibitory connection for Mackey-Glass prediction

Bo YANG, Zhidong DENG

Front. Electr. Electron. Eng. ›› 2012, Vol. 7 ›› Issue (2) : 200-207. DOI: 10.1007/s11460-011-0176-5
RESEARCH ARTICLE


Abstract

Echo state networks (ESNs), proposed by Jaeger in 2001, have a remarkable capability to approximate the dynamics of complex systems such as the Mackey-Glass system. Compared with the ESN, the scale-free highly-clustered ESN (SHESN), whose state reservoir exhibits both the small-world phenomenon and the scale-free feature, offers even stronger dynamics-approximation capability and a better echo state property. In this paper, we extend the state reservoir of the SHESN with leaky integrator neurons and inhibitory connections, inspired by advances in neurophysiology. We apply the extended SHESN, called e-SHESN, to the Mackey-Glass prediction problem. The experimental results show that the e-SHESN considerably outperforms the SHESN in predicting the Mackey-Glass chaotic time series, while the complex-network characteristics of the state reservoir, including the small-world property and the scale-free feature, remain unchanged. In addition, we show that the original SHESN may be unstable in some cases, a flaw that the proposed e-SHESN addresses by enhancing network stability. In particular, replacing linear regression with ridge regression in the readout substantially improves the stability of the e-SHESN.
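The ingredients named in the abstract (a leaky-integrator reservoir, mixed excitatory/inhibitory recurrent weights, and a ridge-regression readout trained for one-step-ahead Mackey-Glass prediction) can be illustrated with a minimal sketch. This is not the paper's e-SHESN: it uses a plain sparse random reservoir rather than the scale-free highly-clustered topology, and all sizes and constants (reservoir size 200, leak rate 0.3, spectral radius 0.9, ridge parameter 1e-6, tau = 17) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Mackey-Glass series (tau = 17), integrated with simple Euler steps ---
def mackey_glass(n, tau=17, dt=1.0, beta=0.2, gamma=0.1, p=10):
    hist = 1.2 * np.ones(tau + 1)   # constant initial history
    out = []
    for _ in range(n):
        x_tau, x = hist[0], hist[-1]
        x_new = x + dt * (beta * x_tau / (1.0 + x_tau**p) - gamma * x)
        hist = np.append(hist[1:], x_new)
        out.append(x_new)
    return np.array(out)

series = mackey_glass(2000)

# --- leaky-integrator reservoir; signed weights give both excitatory and
#     inhibitory connections (random topology here, not scale-free) ---
N, leak, rho = 200, 0.3, 0.9
W = rng.standard_normal((N, N)) * (rng.random((N, N)) < 0.05)
W *= rho / max(abs(np.linalg.eigvals(W)))        # rescale spectral radius
W_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, ut in enumerate(u):
        pre = np.tanh(W @ x + W_in * ut)
        x = (1 - leak) * x + leak * pre          # leaky integration
        states[t] = x
    return states

washout = 200                                    # discard transient states
X = run_reservoir(series[:-1])[washout:]
y = series[1:][washout:]                         # one-step-ahead target

# --- ridge-regression readout: the regularized alternative to plain
#     linear regression that the abstract credits with improved stability ---
lam = 1e-6
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
pred = X @ W_out
nrmse = np.sqrt(np.mean((pred - y) ** 2)) / np.std(y)
print("train NRMSE:", nrmse)
```

The ridge term `lam * np.eye(N)` bounds the magnitude of the readout weights, which is what keeps output feedback from amplifying reservoir noise; setting `lam = 0` recovers ordinary least squares.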

Keywords

echo state network (ESN) / e-SHESN / Mackey-Glass problem / small-world phenomenon / scale-free distribution / ridge regression

Cite this article

Bo YANG, Zhidong DENG. An extended SHESN with leaky integrator neuron and inhibitory connection for Mackey-Glass prediction. Front Elect Electr Eng, 2012, 7(2): 200‒207 https://doi.org/10.1007/s11460-011-0176-5


Acknowledgements

The authors would like to thank Zhenbo CHENG for helpful discussion. This work was supported in part by the National Natural Science Foundation of China (Grant Nos. 90820305 and 60775040).

RIGHTS & PERMISSIONS

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg