Learning random forests for ranking

Liangxiao JIANG

Front. Comput. Sci., 2011, Vol. 5, Issue 1: 79-86. DOI: 10.1007/s11704-010-0388-5
RESEARCH ARTICLE


Abstract

The random forests (RF) algorithm, which combines the predictions from an ensemble of random trees, has achieved significant improvements in terms of classification accuracy. In many real-world applications, however, ranking is often required in order to make optimal decisions. Thus, we focus our attention on the ranking performance of RF in this paper. Our experimental results, based on all 36 UC Irvine Machine Learning Repository (UCI) data sets published on the main website of the Weka platform, show that RF does not perform well in ranking, performing about the same as a single C4.4 tree. This raises the question of whether improvements to RF can scale up its ranking performance. To answer this question, we propose an improved random forests (IRF) algorithm. Instead of the information gain measure and the maximum-likelihood estimate, the average gain measure and the similarity-weighted estimate are used in IRF. Our experiments show that IRF significantly outperforms all the other algorithms used for comparison in terms of ranking, while maintaining the high classification accuracy that characterizes RF.
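
The abstract names the two substitutions but does not spell out their formulas here. Below is a minimal Python sketch of one plausible reading: `average_gain` divides information gain by the number of values of the splitting attribute, and `similarity_weighted_estimate` weights each training instance at a leaf by its attribute-value overlap with the test instance rather than using raw class frequencies. The function names, the per-value averaging, and the overlap similarity are illustrative assumptions, not the paper's exact definitions.

```python
# Illustrative sketch only: the precise definitions of "average gain" and
# "similarity-weighted estimate" are assumed, not taken from the paper.
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def average_gain(instances, labels, attr):
    """Information gain of `attr`, divided by its number of distinct values
    (assumed reading of the "average gain" measure)."""
    values = {x[attr] for x in instances}
    cond = 0.0
    for v in values:
        sub = [labels[i] for i, x in enumerate(instances) if x[attr] == v]
        cond += len(sub) / len(labels) * entropy(sub)
    return (entropy(labels) - cond) / len(values)

def similarity_weighted_estimate(leaf_instances, leaf_labels, test, classes):
    """Class probabilities at a leaf: each training instance votes with a
    weight equal to the fraction of attribute values it shares with the
    test instance (assumed reading of the "similarity-weighted estimate")."""
    weights = Counter()
    for x, y in zip(leaf_instances, leaf_labels):
        weights[y] += sum(x[a] == test[a] for a in test) / len(test)
    total = sum(weights.values()) or 1.0
    return {c: weights[c] / total for c in classes}
```

Averaging across the per-tree estimates produced this way, instead of across maximum-likelihood leaf frequencies, is what would give the forest smoother class probabilities and hence better rankings.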

Keywords

random forests (RF) / decision tree / random selection / class probability estimation / ranking / area under the receiver operating characteristic curve (AUC)
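
Since the keywords refer to AUC, a short note on what it measures for ranking: for two classes it equals the probability that a randomly chosen positive instance is scored above a randomly chosen negative one. A minimal sketch of that binary-class computation follows (the standard Mann-Whitney form, not code from the paper):

```python
def auc(pos_scores, neg_scores):
    """Binary-class AUC via pairwise comparisons (Mann-Whitney statistic):
    the fraction of positive/negative pairs ranked correctly; ties count half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Example: a ranker that scores every positive above every negative gets AUC 1.0.
print(auc([0.9, 0.8], [0.3, 0.1]))  # -> 1.0
```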

Cite this article

Liangxiao JIANG. Learning random forests for ranking. Front Comput Sci Chin, 2011, 5(1): 79‒86. https://doi.org/10.1007/s11704-010-0388-5


Acknowledgements

We thank the anonymous reviewers for their valuable comments and suggestions. This work was supported by the National Natural Science Foundation of China (Grant No. 60905033), the National High Technology Research and Development Program of China (863 Program) (2009AA12Z117), the Provincial Natural Science Foundation of Hubei (2009CDB139), and the Fundamental Research Funds for the Central Universities (CUG090109).

RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg