Model Change Active Learning in Graph-Based Semi-supervised Learning

Kevin S. Miller, Andrea L. Bertozzi

Communications on Applied Mathematics and Computation, 2024, Vol. 6, Issue 2: 1270-1298. DOI: 10.1007/s42967-023-00328-z
Original Paper


Abstract

Active learning in semi-supervised classification involves introducing additional labels for unlabelled data to improve the accuracy of the underlying classifier. A challenge is to identify which points to label so as to best improve performance while limiting the number of new labels. “Model Change” active learning quantifies the change incurred in the classifier by introducing the additional label(s). We pair this idea with graph-based semi-supervised learning (SSL) methods that use the spectrum of the graph Laplacian matrix, which can be truncated to avoid prohibitively large computational and storage costs. We consider a family of convex loss functions for which the acquisition function can be efficiently approximated using the Laplace approximation of the posterior distribution. We show a variety of multiclass examples that illustrate improved performance over the prior state of the art.
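To make the pipeline concrete, the following is a minimal sketch, not the authors' exact algorithm or code: a Gaussian-field SSL model with squared loss fit in a truncated Laplacian spectral basis, where the model-change score of a candidate node follows in closed form from a Sherman-Morrison rank-one update with the current prediction plugged in as a pseudo-label. The toy data, parameter values (M, tau, gamma2), and all variable names are illustrative choices of ours.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)

# Toy data: two Gaussian clusters in the plane, binary labels.
N = 200
X = np.vstack([rng.normal(-1.5, 0.6, (N // 2, 2)),
               rng.normal(+1.5, 0.6, (N // 2, 2))])
labels = np.repeat([0, 1], N // 2)

# Similarity graph and its (unnormalized) graph Laplacian.
W = np.exp(-cdist(X, X) ** 2 / (2 * 0.5 ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Truncated spectrum: keep only the M smallest eigenpairs of L.
M = 20
eigvals, eigvecs = np.linalg.eigh(L)
lam, V = eigvals[:M], eigvecs[:, :M]          # V is N x M

# Gaussian-field SSL with squared loss in spectral coordinates:
# minimize (1/2) a^T (Lam + tau I) a + (1/(2 gamma2)) ||V_L a - Y||^2.
gamma2, tau = 0.1, 0.1
labeled = np.array([3, N - 4])                # one seed label per cluster
Y = np.eye(2)[labels[labeled]]                # one-hot labels
A = np.diag(lam + tau) + V[labeled].T @ V[labeled] / gamma2
C = np.linalg.inv(A)                          # M x M, cheap for small M
alpha = C @ (V[labeled].T @ Y) / gamma2
U = V @ alpha                                 # node-wise class scores, N x C

# Model-change acquisition: adding a label at node k is a rank-one
# update of A, so Sherman-Morrison gives ||alpha_new - alpha|| exactly:
#   ||C v_k|| * ||y_hat - U[k]|| / (gamma2 + v_k^T C v_k).
unlabeled = np.setdiff1d(np.arange(N), labeled)
scores = np.empty(len(unlabeled))
for i, k in enumerate(unlabeled):
    v = V[k]
    Cv = C @ v
    pseudo = np.eye(2)[U[k].argmax()]         # current prediction as pseudo-label
    scores[i] = (np.linalg.norm(Cv) / (gamma2 + v @ Cv)
                 * np.linalg.norm(pseudo - U[k]))

query = unlabeled[scores.argmax()]
print(f"query node {query}, model-change score {scores.max():.4f}")
```

For the non-quadratic convex losses the abstract mentions, this exact rank-one update is unavailable; there, the acquisition function is instead approximated via the Laplace approximation of the posterior distribution.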

Keywords

Active learning / Graph-based methods / Semi-supervised learning (SSL) / Graph Laplacian

Cite this article

Kevin S. Miller, Andrea L. Bertozzi. Model Change Active Learning in Graph-Based Semi-supervised Learning. Communications on Applied Mathematics and Computation, 2024, 6(2): 1270-1298. https://doi.org/10.1007/s42967-023-00328-z

Funding
U.S. Department of Defense (National Defense Science and Engineering Graduate (NDSEG) Research Fellowship); National Geospatial-Intelligence Agency (HM04762110003)
