Dimensionality reduction via kernel sparse representation

Zhisong PAN, Zhantao DENG, Yibing WANG, Yanyan ZHANG

Front. Comput. Sci., 2014, 8(5): 807-815. DOI: 10.1007/s11704-014-3317-1
RESEARCH ARTICLE


Abstract

Dimensionality reduction (DR) methods based on sparse representation, one of the most active research topics of recent years, have achieved remarkable performance in many applications. However, existing sparse representation based methods struggle with nonlinear problems because they seek a sparse representation of the data in the original input space. Motivated by the kernel trick, we propose a new framework called empirical kernel sparse representation (EKSR) to handle nonlinear problems. In this framework, nonlinearly separable data are mapped into a kernel space in which their nonlinear similarity can be captured; the data in kernel space are then reconstructed by sparse representation, obtained by minimizing an ℓ1-regularization-related objective function, so that the sparse structure is preserved. EKSR provides new insight into dimensionality reduction and yields two models: 1) empirical kernel sparsity preserving projection (EKSPP), a feature extraction method based on sparsity preserving projection (SPP); and 2) empirical kernel sparsity score (EKSS), a feature selection method based on sparsity score (SS). Thanks to the natural discriminative power of sparse representation, both methods choose neighborhoods automatically. Compared with several existing approaches, the proposed framework reduces computational complexity and is more convenient in practice.
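
The abstract describes a two-stage pipeline: map the data into an empirical kernel space, then compute sparse reconstruction coefficients there by ℓ1-regularized minimization. The sketch below illustrates that pipeline under stated assumptions: an RBF kernel, scikit-learn's Lasso as the ℓ1 solver, and illustrative function names (`empirical_kernel_map`, `sparse_reconstruction_weights`) that are not from the paper. It is a minimal sketch of the idea, not the authors' implementation.

```python
# A minimal sketch of the EKSR idea from the abstract (illustrative only):
# 1) map samples into an empirical kernel space via the kernel matrix's
#    eigendecomposition; 2) reconstruct each mapped sample from the others
#    with l1-regularized least squares so the representation is sparse.
import numpy as np
from sklearn.linear_model import Lasso  # assumed l1 solver, not the paper's

def empirical_kernel_map(K, tol=1e-10):
    """Embed the n training samples using K = P diag(lam) P^T.

    The mapped training data are the rows of P diag(sqrt(lam)), the standard
    empirical kernel map restricted to the numerically nonzero spectrum.
    """
    lam, P = np.linalg.eigh(K)
    keep = lam > tol
    return P[:, keep] * np.sqrt(lam[keep])

def sparse_reconstruction_weights(Y, alpha=0.05):
    """Sparse coefficients reconstructing each row of Y from all other rows,
    found by an l1-penalized least-squares fit (scikit-learn's Lasso)."""
    n = Y.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.delete(np.arange(n), i)           # exclude sample i itself
        lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
        lasso.fit(Y[idx].T, Y[i])                  # columns = other samples
        W[i, idx] = lasso.coef_
    return W

# Toy usage: RBF kernel on random data (bandwidth by the median heuristic).
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * np.median(sq)))
W = sparse_reconstruction_weights(empirical_kernel_map(K))
print("average neighbors selected per sample:", (np.abs(W) > 1e-6).sum(1).mean())
```

The resulting weight matrix W is the kind of sparse ℓ1 graph that an SPP-style projection or a sparsity-score-style feature selector would consume; the penalty `alpha` controls how many "neighbors" each sample selects automatically, which is the neighborhood-free behavior the abstract highlights.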

Keywords

feature extraction / feature selection / sparse representation / kernel trick

Cite this article

Zhisong PAN, Zhantao DENG, Yibing WANG, Yanyan ZHANG. Dimensionality reduction via kernel sparse representation. Front. Comput. Sci., 2014, 8(5): 807-815. https://doi.org/10.1007/s11704-014-3317-1


RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg