Towards making co-training suffer less from insufficient views

Xiangyu GUO, Wei WANG

Front. Comput. Sci., 2019, Vol. 13, Issue 1: 99–105. DOI: 10.1007/s11704-018-7138-5
RESEARCH ARTICLE

Abstract

Co-training is a well-known semi-supervised learning algorithm that exploits unlabeled data to improve learning performance. It generally works in a two-view setting, where the input examples naturally have two disjoint feature sets, under the assumption that each view is sufficient to predict the label. In real-world applications, however, feature corruption or feature noise may make both views insufficient, and co-training then suffers from these insufficient views. In this paper, we propose a novel algorithm named Weighted Co-training to deal with this problem. It identifies the newly labeled examples that are likely to be harmful to the other view and decreases their weights in the training set to reduce that risk. Experimental results show that Weighted Co-training outperforms state-of-the-art co-training algorithms on several benchmark datasets.
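To make the idea concrete, below is a minimal Python sketch of a weighted co-training loop. It assumes scikit-learn-style base learners whose fit() accepts sample_weight and that expose predict_proba(); the function name weighted_co_training, its parameters, and the disagreement-based down-weighting heuristic are illustrative placeholders rather than the exact criterion used in the paper.

```python
# Illustrative weighted co-training loop (a sketch, not the paper's exact
# algorithm). Assumes scikit-learn-style base learners whose fit() accepts
# sample_weight and which expose predict_proba(). The rule used below to
# spot probably-harmful pseudo-labels (down-weighting examples on which the
# two views disagree) is a placeholder heuristic.
import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB


def weighted_co_training(X1, X2, y, U1, U2, rounds=10, per_round=5,
                         low_weight=0.2, base=GaussianNB()):
    """X1/X2: labeled examples in view 1/2, y: their labels,
    U1/U2: the same unlabeled pool seen through view 1/2."""
    L1, L2, y1, y2 = X1.copy(), X2.copy(), y.copy(), y.copy()
    w1, w2 = np.ones(len(y)), np.ones(len(y))   # per-example weights
    U1, U2 = U1.copy(), U2.copy()

    for _ in range(rounds):
        if len(U1) == 0:
            break
        h1 = clone(base).fit(L1, y1, sample_weight=w1)
        h2 = clone(base).fit(L2, y2, sample_weight=w2)

        p1, p2 = h1.predict_proba(U1), h2.predict_proba(U2)
        lab1, lab2 = h1.classes_[p1.argmax(1)], h2.classes_[p2.argmax(1)]
        conf1, conf2 = p1.max(1), p2.max(1)

        # Each classifier teaches the other: pick the unlabeled examples
        # that at least one view labels most confidently.
        idx = np.argsort(-np.maximum(conf1, conf2))[:per_round]

        for i in idx:
            # Risk check (placeholder): if the two views disagree, the
            # pseudo-label may hurt the other view, so it enters the
            # training sets with a reduced weight instead of being dropped.
            weight = 1.0 if lab1[i] == lab2[i] else low_weight
            L1 = np.vstack([L1, U1[i]]); y1 = np.append(y1, lab2[i])
            L2 = np.vstack([L2, U2[i]]); y2 = np.append(y2, lab1[i])
            w1 = np.append(w1, weight);  w2 = np.append(w2, weight)

        keep = np.setdiff1d(np.arange(len(U1)), idx)
        U1, U2 = U1[keep], U2[keep]

    # Final per-view classifiers trained on the weighted, augmented sets.
    return (clone(base).fit(L1, y1, sample_weight=w1),
            clone(base).fit(L2, y2, sample_weight=w2))
```

The design choice this sketch illustrates is that suspect pseudo-labeled examples are kept but discounted rather than discarded, which is the core intuition behind weighting the training set.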

Keywords

semi-supervised learning / co-training / insufficient views

Cite this article

Xiangyu GUO, Wei WANG. Towards making co-training suffer less from insufficient views. Front. Comput. Sci., 2019, 13(1): 99–105. https://doi.org/10.1007/s11704-018-7138-5


RIGHTS & PERMISSIONS

© 2018 Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature