Improving bagging performance through multi-algorithm ensembles

Kuo-Wei HSU, Jaideep SRIVASTAVA

Front. Comput. Sci., 2012, 6(5): 498–512. DOI: 10.1007/s11704-012-1163-6
RESEARCH ARTICLE


Abstract

Bagging is an ensemble method that establishes a committee of classifiers and then aggregates their outcomes through majority voting; it has attracted considerable research interest and been applied in various application domains. It offers several advantages, but in its present form bagging has been found to be less accurate than some other ensemble methods. To unlock its power and expand its user base, we propose an approach that improves bagging through the use of multi-algorithm ensembles, in which multiple classification algorithms are employed. Starting from a study of the nature of diversity, we show that using heterogeneous algorithms together with different training sets increases diversity in ensembles more than using different training sets alone, and hence we provide a fundamental explanation for research utilizing heterogeneous algorithms. In addition, we partially address the open problem of relating diversity to accuracy by providing a non-linear function that describes the relationship between diversity and correlation. Furthermore, observing that the bootstrap procedure is the exclusive source of diversity in bagging, we use heterogeneity as an additional source of diversity and propose an approach that employs heterogeneous algorithms within bagging. For evaluation, we consider several benchmark data sets from various application domains. The results indicate that, in terms of F1-measure, our approach outperforms most of the other state-of-the-art ensemble methods considered in the experiments and, in terms of mean margin, it is superior to all of them.
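As a concrete illustration of the idea, bagging with heterogeneous algorithms can be sketched as follows: each committee member is trained on its own bootstrap sample while the base algorithm rotates across members, and predictions are aggregated by unweighted majority voting. The minimal Python sketch below uses scikit-learn; the committee size and the particular base learners (decision tree, naive Bayes, k-nearest neighbors) are illustrative assumptions, not necessarily the configuration evaluated in the paper.

import numpy as np
from collections import Counter
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

def multi_algorithm_bagging(X, y, n_members=9, random_state=0):
    # Train a committee: one bootstrap sample per member, with the base
    # classification algorithm rotated across members, so heterogeneity
    # adds a second source of diversity on top of the bootstrap.
    # X and y are assumed to be numpy arrays.
    rng = np.random.RandomState(random_state)
    base_learners = [DecisionTreeClassifier(random_state=random_state),
                     GaussianNB(),
                     KNeighborsClassifier()]
    committee = []
    n = len(X)
    for i in range(n_members):
        idx = rng.randint(0, n, size=n)  # bootstrap: draw n points with replacement
        member = clone(base_learners[i % len(base_learners)])
        member.fit(X[idx], y[idx])
        committee.append(member)
    return committee

def predict_majority(committee, X):
    # Aggregate member outcomes by unweighted majority voting.
    votes = np.stack([m.predict(X) for m in committee])  # shape: (n_members, n_samples)
    return np.array([Counter(col).most_common(1)[0][0] for col in votes.T])

With a single entry in base_learners (e.g., only a decision tree), the sketch reduces to standard bagging, which makes the added source of diversity explicit.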

Keywords

bagging / classification / diversity / ensemble

Cite this article

Kuo-Wei HSU, Jaideep SRIVASTAVA. Improving bagging performance through multi-algorithm ensembles. Front. Comput. Sci., 2012, 6(5): 498–512. https://doi.org/10.1007/s11704-012-1163-6


RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg