Learning multiple metrics for ranking

Xiubo GENG, Xue-Qi CHENG

Front. Comput. Sci., 2011, 5(3): 259–267. DOI: 10.1007/s11704-011-0152-5
RESEARCH ARTICLE


Abstract

Directly optimizing an information retrieval (IR) metric has become a hot topic in the field of learning to rank. Conventional wisdom holds that it is best to train with the same loss function that will be used for evaluation, but in practice we often observe the opposite. For example, directly optimizing average precision can achieve higher performance than directly optimizing precision@3, even when the ranking results are evaluated in terms of precision@3. This motivates us to combine multiple metrics when optimizing IR metrics. For simplicity, we study learning with two metrics. Since the learning process is usually conducted in a restricted hypothesis space, e.g., a linear hypothesis space, it is generally difficult to maximize both metrics at the same time. To tackle this problem, we propose a relaxed approach: we incorporate one metric as a constraint while maximizing the other. By restricting the feasible hypothesis space in this way, we obtain a more robust ranking model. Empirical results on the LETOR benchmark dataset show that the relaxed approach is superior to a direct linear combination of the two metrics, and also outperforms other baselines.
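
To make the two metrics and the relaxed formulation concrete, the sketch below implements precision@k and average precision, and then a deliberately naive random search over linear scoring models that maximizes mean AP subject to a lower bound on mean P@3. This is a minimal illustration of the constrained formulation described in the abstract, not the authors' algorithm; the function names, the random-search strategy, and the threshold tau are all assumptions made for illustration.

```python
# Minimal sketch of the relaxed two-metric idea: maximize one IR metric
# (average precision) over a linear hypothesis space, subject to a lower
# bound on a second metric (precision@3). Illustrative only; not the
# optimization procedure used in the paper.
import numpy as np

def precision_at_k(labels_sorted, k=3):
    # Fraction of relevant documents among the top k ranked results.
    return float(np.mean(labels_sorted[:k]))

def average_precision(labels_sorted):
    # AP: precision@i averaged over the ranks i of the relevant documents.
    rel = np.asarray(labels_sorted, dtype=float)
    if rel.sum() == 0:
        return 0.0
    prec = np.cumsum(rel) / np.arange(1, len(rel) + 1)
    return float((prec * rel).sum() / rel.sum())

def rank_labels(w, X, y):
    # Sort the relevance labels y by descending linear score X @ w.
    return y[np.argsort(-(X @ w))]

def relaxed_search(queries, tau=0.5, n_trials=5000, seed=0):
    # Random search over linear models w: maximize mean AP subject to
    # mean P@3 >= tau, i.e. the second metric enters as a constraint.
    rng = np.random.default_rng(seed)
    dim = queries[0][0].shape[1]
    best_w, best_ap = None, -1.0
    for _ in range(n_trials):
        w = rng.normal(size=dim)
        ranked = [rank_labels(w, X, y) for X, y in queries]
        if np.mean([precision_at_k(r) for r in ranked]) < tau:
            continue  # infeasible: the P@3 constraint is violated
        ap = np.mean([average_precision(r) for r in ranked])
        if ap > best_ap:
            best_w, best_ap = w, ap
    return best_w, best_ap
```

For example, with queries = [(X1, y1), (X2, y2)], where each X is an n_docs-by-n_features matrix and each y a 0/1 relevance vector, relaxed_search returns the best feasible weight vector found. The search strategy is intentionally simple; the point is only that the constraint shrinks the feasible hypothesis space before the second metric is maximized.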

Keywords

learning to rank / multiple measures / direct optimization

Cite this article

Xiubo GENG, Xue-Qi CHENG. Learning multiple metrics for ranking. Front Comput Sci Chin, 2011, 5(3): 259–267. https://doi.org/10.1007/s11704-011-0152-5

References

[1]
Burges C, Shaked T, Renshaw E, Lazier A, Deeds M, Hamilton N, Hullender G. Learning to rank using gradient descent. In: Proceedings of 22nd International Conference on Machine Learning. 2005, 89–96
[2]
Freund Y, Iyer R, Schapire R E, Singer Y. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 2003, 4: 933–969
[3]
Joachims T. Optimizing search engines using clickthrough data. In: Proceedings of 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2002, 133–142
[4]
Cao Z, Qin T, Liu T Y, Tsai M F, Li H. Learning to rank: from pairwise approach to listwise approach. In: Proceedings of 24th International Conference on Machine Learning. 2007, 129–136
[5]
Xia F, Liu T Y, Wang J, Zhang W, Li H. Listwise approach to learning to rank: theory and algorithm. In: Proceedings of 25th International Conference on Machine Learning. 2008, 1192–1199
[6]
Robertson S. On the optimisation of evaluation metrics. In: Proceedings of SIGIR 2008 Workshop on Learning to Rank. 2008
[7]
Chakrabarti S, Khanna R, Sawant U, Bhattacharyya C. Structured learning for non-smooth ranking losses. In: Proceedings of 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2008, 88–96
[8]
Caruana R. Multitask learning. Machine Learning, 1997, 28(1): 41–75
[9]
Baxter J. A model of inductive bias learning. Journal of Artificial Intelligence Research, 2000, 12: 149–198
[10]
Xu J, Liu T Y, Lu M, Li H, Ma W Y. Directly optimizing evaluation measures in learning to rank. In: Proceedings of 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2008, 107–114
[11]
Taylor M, Guiver J, Robertson S, Minka T. SoftRank: optimizing non-smooth rank metrics. In: Proceedings of 1st International Conference on Web Search and Web Data Mining. 2008, 77–86
[12]
Qin T, Liu T Y, Li H. A general approximation framework for direct optimization of information retrieval measures. Technical Report MSR-TR-2008-164, Microsoft Corporation, 2008
[13]
Yue Y, Finley T, Radlinski F, Joachims T. A support vector method for optimizing average precision. In: Proceedings of 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007, 271–278
[14]
Xu J, Li H. AdaRank: a boosting algorithm for information retrieval. In: Proceedings of 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007, 391–398
[15]
Liu T Y, Xu J, Qin T, Xiong W, Li H. LETOR: benchmark dataset for research on learning to rank for information retrieval. In: Proceedings of the Learning to Rank Workshop in the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007

RIGHTS & PERMISSIONS

© 2014 Higher Education Press and Springer-Verlag Berlin Heidelberg