The modified natural power method for principal component computation

Tian-ping Chen, Shi-zhao Ma

Front. Math. China, 2006, Vol. 1, Issue 2: 234-251. DOI: 10.1007/s11464-006-0005-y
Research Article

Abstract

A modified version of the natural power method (NP) for fast estimation and tracking of the principal eigenvectors of a vector sequence is presented. It extends the natural power method in that it computes the principal eigenvectors themselves, not only a basis of the principal subspace. Compared with power-based methods such as the Oja method, the projection approximation subspace tracking (PAST) method, and the novel information criterion (NIC) method, the modified natural power method (MNP) has the fastest convergence rate and can be easily implemented with only O(np) flops of computation per iteration, where n is the dimension of the vector sequence and p is the dimension of the principal subspace, i.e., the number of principal eigenvectors. Furthermore, it is guaranteed to be globally and exponentially convergent, in contrast with non-power-based methods such as MALASE and OPERA.
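To illustrate the general idea the abstract refers to, the following is a minimal sketch of a plain power-based principal subspace iteration (a power step followed by re-orthonormalization). It is not the authors' MNP algorithm, whose exact update rule and O(np) cost are established in the paper; this sketch uses a QR factorization, which costs O(np^2) per iteration, and the function name and parameters are illustrative.

```python
import numpy as np

def principal_subspace(C, p, iters=200, seed=0):
    """Estimate an orthonormal basis of the p-dimensional principal
    subspace of a symmetric positive semidefinite matrix C by
    orthonormalized power iteration (illustrative, not the MNP update)."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    W = rng.standard_normal((n, p))   # random initial basis
    for _ in range(iters):
        Z = C @ W                     # power step: push W toward top eigenspace
        W, _ = np.linalg.qr(Z)        # re-orthonormalize the basis
    return W

# Example: C with eigenvalues 4, 3, 1, 0.5 along the standard axes;
# the top-2 principal subspace is span(e1, e2).
C = np.diag([4.0, 3.0, 1.0, 0.5])
W = principal_subspace(C, p=2)
# The columns of W approximately span e1 and e2.
```

Convergence of such iterations is governed by the gap between the p-th and (p+1)-th eigenvalues; the paper's contribution is an update that keeps the power-method convergence behavior while reducing the per-iteration cost to O(np) and recovering the individual eigenvectors, not just the subspace.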

Keywords

PCA / MCA / power-based / exponentially convergent / 93B40 / 93C05

Cite this article

Tian-ping Chen, Shi-zhao Ma. The modified natural power method for principal component computation. Front. Math. China, 2006, 1(2): 234‒251 https://doi.org/10.1007/s11464-006-0005-y

References

[1.]
Comon P., Golub G.H. Tracking a few extreme singular values and vectors in signal processing. Proc of the IEEE, 1990, 78(8): 1327-1343.
[2.]
Oja E. A simplified neuron model as a principal component analyzer. J. Math. Biology, 1982, 15: 267-273.
[3.]
Kung S. Y., Diamantaras K. I., Taur J. S. Adaptive principal component extraction (APEX) and applications. IEEE Trans on Signal Processing, 1994, 42(5): 1202-1217.
[4.]
Yan W. Y., Helmke U., Moore J. B. Global analysis of Oja’s flow for neural networks. IEEE Trans on Neural Networks, 1994, 5(5): 674-683.
[5.]
Chen T., Hua Y., Yan W. Global convergence of Oja’s subspace algorithm for principal component extraction. IEEE Trans on Neural Networks, 1998, 9(1): 58-67.
[6.]
Xu L. Least mean square error reconstruction principle for self-organizing neural nets. Neural Networks, 1993, 6: 627-648.
[7.]
Yang B. Projection approximation subspace tracking. IEEE Trans on Signal Processing, 1995, 43(1): 95-107.
[8.]
Miao Y., Hua Y. Fast subspace tracking and neural network learning by a novel information criterion. IEEE Trans on Signal Processing, 1998, 46(7): 1967-1979.
[9.]
Golub G. H., Van Loan C. F. Matrix Computations, 1989, Baltimore, MD: Johns Hopkins Univ Press.
[10.]
Hua Y., Xiang Y., Chen T., Abed-Meraim K., Miao Y. A new look at the power method for fast subspace tracking. Digital Signal Processing, Oct 1999 (short version in Proc of IEEE Workshop on Neural Networks for Signal Processing, April 1999).
[11.]
Riou C., Chonavel T. Fast adaptive eigenvalue decomposition: a maximum likelihood approach. Proc of IEEE ICASSP'97, 1997: 3565-3568.
[12.]
MacInnes C. S. Fast accurate subspace tracking using operator restriction analysis. Proc of IEEE ICASSP'98, 1998: 1357-1360.
[13.]
Chen T. P., Amari S., Lin Q. A unified approach for principal and minor components extraction. Neural Networks, 1998, 11: 385-390.