Feature selection on probabilistic symbolic objects

Djamal ZIANI

Front. Comput. Sci., 2014, 8(6): 933–947. DOI: 10.1007/s11704-014-3359-4
RESEARCH ARTICLE


Abstract

In data analysis tasks, we are often confronted with very high-dimensional data. Given the purpose of a data analysis study, feature selection finds and selects the relevant subset of features from the original ones. Many feature selection algorithms have been proposed in classical data analysis, but very few in symbolic data analysis (SDA), an extension of classical data analysis that works on rich objects rather than simple matrices. Unlike the data used in classical data analysis, a symbolic object can describe not only an individual but, most of the time, a cluster of individuals. In this paper we present an unsupervised feature selection algorithm on probabilistic symbolic objects (PSOs), with discrimination as its purpose. A PSO is a symbolic object that describes a cluster of individuals through modal variables, where each value carries a relative frequency. The paper presents new dissimilarity measures between PSOs, which are used as feature selection criteria, and explains how the discrimination matrix reduces the complexity of the algorithm.
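The pipeline the abstract describes (frequency-valued modal variables, a dissimilarity between PSOs, a discrimination-driven search for a feature subset) can be sketched in a few lines. The sketch below is illustrative only: the symmetrized Kullback-Leibler (Jeffreys) divergence and the greedy max-min criterion are stand-in assumptions, not the measures defined in the paper, and all names (PSO, sym_kl, select_features) are hypothetical.

```python
# Illustrative sketch of unsupervised feature selection on probabilistic
# symbolic objects (PSOs). The dissimilarity used here (a symmetrized
# Kullback-Leibler divergence) and the greedy max-min selection rule are
# assumptions, not the paper's actual measures.
import math

# A PSO maps each modal variable to a relative frequency distribution
# over that variable's categories, e.g. {"color": {"red": 0.7, "blue": 0.3}}.
PSO = dict[str, dict[str, float]]

def sym_kl(p: dict[str, float], q: dict[str, float], eps: float = 1e-9) -> float:
    """Symmetrized KL (Jeffreys) divergence between two frequency distributions."""
    total = 0.0
    for c in set(p) | set(q):
        pi = max(p.get(c, 0.0), eps)  # clamp zeros to keep log finite
        qi = max(q.get(c, 0.0), eps)
        total += (pi - qi) * math.log(pi / qi)
    return total

def dissimilarity(a: PSO, b: PSO, features: list[str]) -> float:
    """Dissimilarity between two PSOs restricted to a feature subset."""
    return sum(sym_kl(a[f], b[f]) for f in features)

def select_features(psos: list[PSO], k: int) -> list[str]:
    """Greedy forward selection over at least two PSOs: at each step, add
    the feature that most improves the worst-case (minimum) pairwise
    discrimination between the described clusters."""
    candidates = set(psos[0])
    selected: list[str] = []
    pairs = [(a, b) for i, a in enumerate(psos) for b in psos[i + 1:]]
    while candidates and len(selected) < k:
        best = max(candidates,
                   key=lambda f: min(dissimilarity(a, b, selected + [f])
                                     for a, b in pairs))
        selected.append(best)
        candidates.remove(best)
    return selected

# Example: "petal" separates the two clusters, "color" does not.
a = {"petal": {"short": 0.8, "long": 0.2}, "color": {"red": 0.5, "blue": 0.5}}
b = {"petal": {"short": 0.1, "long": 0.9}, "color": {"red": 0.5, "blue": 0.5}}
print(select_features([a, b], k=1))  # -> ['petal']
```

Because the subset dissimilarity here is additive over features, the per-feature pairwise values could be precomputed once into a matrix and reused at every greedy step, in the spirit of the discrimination matrix that the abstract credits with reducing the algorithm's complexity.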

Keywords

symbolic data analysis / feature selection / probabilistic symbolic object / discrimination criteria / data and knowledge visualization

Cite this article

Djamal ZIANI. Feature selection on probabilistic symbolic objects. Front. Comput. Sci., 2014, 8(6): 933–947. https://doi.org/10.1007/s11704-014-3359-4


RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg