Relative manifold based semi-supervised dimensionality reduction

Xianfa CAI, Guihua WEN, Jia WEI, Zhiwen YU

Front. Comput. Sci., 2014, 8(6): 923-932. DOI: 10.1007/s11704-014-3193-8
RESEARCH ARTICLE


Abstract

A well-designed graph plays a fundamental role in graph-based semi-supervised learning. However, in most current approaches the topological structure of the constructed neighborhood graph is unstable, because it is highly sensitive to high-dimensional, sparse, and noisy data; this generally leads to dramatic performance degradation. To address this issue, we develop a relative manifold based semi-supervised dimensionality reduction (RMSSDR) approach that uses the relative manifold to construct a better neighborhood graph with fewer short-circuit edges. Based on the relative cognitive law and manifold distance, a relative transformation is applied to construct the relative space and the relative manifold. The relative transformation improves the ability to distinguish between data points and reduces the impact of noise, making the representation more intuitive, while the relative manifold more faithfully reflects the manifold structure, since real data sets commonly have a nonlinear structure. Specifically, RMSSDR makes full use of pairwise constraints, defines the edge weights of the neighborhood graph by minimizing the local reconstruction error, and preserves both the global and local geometric structures of the data set. Experimental results on face data sets demonstrate that RMSSDR outperforms current state-of-the-art methods in both classification performance and robustness.
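The abstract's two main ingredients, the relative transformation and the local-reconstruction edge weights, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names and the regularization constant `reg` are ours, plain Euclidean distance stands in for the manifold distance the paper uses to build the relative space, and the weight solve follows the standard LLE-style least-squares formulation that the abstract's "minimizing the local reconstruction error" refers to.

```python
import numpy as np

def relative_transform(X):
    """Map each row of X to the vector of its Euclidean distances to
    every point in the data set -- the n x n "relative space" coordinates.
    (The paper builds this from manifold distance; Euclidean is a stand-in.)"""
    diffs = X[:, None, :] - X[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

def relative_knn(X, k):
    """Pick k nearest neighbors measured in the relative space rather than
    the input space, which tends to produce fewer short-circuit edges."""
    R = relative_transform(X)   # n x n relative coordinates
    D = relative_transform(R)   # pairwise distances between relative rows
    # argsort puts each point itself (distance 0) first; skip it.
    return np.argsort(D, axis=1)[:, 1:k + 1]

def reconstruction_weights(X, neighbors, reg=1e-3):
    """LLE-style edge weights: for each point, the affine combination of its
    neighbors that best reconstructs it (weights sum to 1)."""
    n, k = neighbors.shape
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[neighbors[i]] - X[i]      # center the neighbors on x_i
        G = Z @ Z.T + reg * np.eye(k)   # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        W[i, neighbors[i]] = w / w.sum()
    return W
```

In a full pipeline these weights, combined with the pairwise (must-link / cannot-link) constraints, would define the graph whose embedding preserves the local and global geometry; that constrained embedding step is omitted here.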

Keywords

cognitive law / relative transformation / relative manifold / local reconstruction / semi-supervised learning

Cite this article

Xianfa CAI, Guihua WEN, Jia WEI, Zhiwen YU. Relative manifold based semi-supervised dimensionality reduction. Front. Comput. Sci., 2014, 8(6): 923‒932 https://doi.org/10.1007/s11704-014-3193-8


RIGHTS & PERMISSIONS

2014 Higher Education Press and Springer-Verlag Berlin Heidelberg