Combined classifier for cross-project defect prediction: an extended empirical study

Yun ZHANG , David LO , Xin XIA , Jianling SUN

Front. Comput. Sci., 2018, 12(2): 280–296. DOI: 10.1007/s11704-017-6015-y
RESEARCH ARTICLE


Abstract

To facilitate developers in effectively allocating their testing and debugging efforts, many software defect prediction techniques have been proposed in the literature. These techniques can be used to predict classes that are more likely to be buggy based on the past history of classes, methods, or certain other code elements. They are effective provided that a sufficient amount of data is available to train a prediction model; however, sufficient training data are rarely available for new software projects. To resolve this problem, cross-project defect prediction, which transfers a prediction model trained using data from one project to another, was proposed and is regarded as a new challenge in the area of defect prediction. Thus far, only a few cross-project defect prediction techniques have been proposed. To advance the state of the art, in this study we investigated seven composite algorithms that integrate multiple machine learning classifiers to improve cross-project defect prediction. To evaluate the performance of the composite algorithms, we performed experiments on 10 open-source software systems from the PROMISE repository, which contain a total of 5,305 instances labeled as defective or clean. Using two standard evaluation metrics, cost effectiveness and F-measure, we compared the composite algorithms with the combined defect predictor that uses logistic regression as the meta classification algorithm (CODEP_Logistic), the most recent cross-project defect prediction algorithm. Our experimental results show that several composite algorithms outperform CODEP_Logistic: maximum voting shows the best performance in terms of F-measure, and its average F-measure is superior to that of CODEP_Logistic by 36.88%; bootstrap aggregation with the J48 decision tree (Bagging_J48) shows the best performance in terms of cost effectiveness, and its average cost effectiveness is superior to that of CODEP_Logistic by 15.34%.
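As an illustration of two of the composite techniques evaluated in the abstract, the sketch below combines base classifiers by maximum (majority) voting and by bootstrap aggregation, using scikit-learn on synthetic data. The dataset, train/test split, and base-classifier choices here are placeholders for demonstration, not the paper's actual experimental setup.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for defect data: each row is a class/module
# described by code metrics, with label 1 = defective, 0 = clean.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Rough analogue of the cross-project setting: train on one split
# (the "source project") and test on the other (the "target project").
X_src, X_tgt, y_src, y_tgt = train_test_split(X, y, test_size=0.5,
                                              random_state=0)

# Maximum (majority) voting over heterogeneous base classifiers:
# each classifier casts one vote and the majority label wins.
voting = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)

# Bootstrap aggregation (bagging) over decision trees; scikit-learn's
# CART tree stands in for Weka's J48 (C4.5) used in the paper.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

for name, model in [("max voting", voting), ("bagging", bagging)]:
    model.fit(X_src, y_src)
    f1 = f1_score(y_tgt, model.predict(X_tgt))
    print(f"{name}: F-measure = {f1:.3f}")
```

The F-measure printed here is the harmonic mean of precision and recall on the defective class, one of the two evaluation metrics the study uses.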

Keywords

defect prediction / cross-project / classifier combination

Cite this article

Yun ZHANG, David LO, Xin XIA, Jianling SUN. Combined classifier for cross-project defect prediction: an extended empirical study. Front. Comput. Sci., 2018, 12(2): 280–296. DOI: 10.1007/s11704-017-6015-y



RIGHTS & PERMISSIONS

Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature
