Machine learning model based on non-convex penalized huberized-SVM

Wang Peng, Guo Ji, Li Lin-Feng

Journal of Electronic Science and Technology ›› 2024, Vol. 22 ›› Issue (1): 100246. DOI: 10.1016/j.jnlest.2024.100246
Original article


Abstract

The support vector machine (SVM) is a classical machine learning method. Traditional SVMs usually employ the hinge loss together with the least absolute shrinkage and selection operator (LASSO) penalty. However, the hinge loss is not differentiable, and the LASSO penalty lacks the Oracle property. In this paper, the huberized loss is combined with non-convex penalties to obtain a model that enjoys both computational simplicity and the Oracle property, yielding higher accuracy than traditional SVMs. Experiments demonstrate that the two non-convex huberized-SVM methods, the smoothly clipped absolute deviation huberized-SVM (SCAD-HSVM) and the minimax concave penalty huberized-SVM (MCP-HSVM), outperform the traditional SVM in both prediction accuracy and classifier performance. They are also superior in variable selection, especially when the variables are highly linearly correlated. When applied to the prediction of financial distress in listed companies, they accurately filter out the variables that affect and predict financial distress. Among all the indicators, per-share indicators have the greatest influence, while solvency indicators have the weakest. With the indicators screened by our algorithm, listed companies can assess their financial situation and give early warning of possible financial distress with higher precision.
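To make the construction in the abstract concrete, the following minimal Python sketch spells out the standard definitions it builds on: the huberized hinge loss (quadratic near the hinge point, so it is differentiable everywhere) and the SCAD penalty of Fan and Li (2001) and the MCP of Zhang (2010). This is not the authors' implementation; the function names, the combined objective, and the parameter defaults (delta=2.0, a=3.7, gamma=3.0) are illustrative assumptions, not values taken from the paper.

import numpy as np

def huberized_hinge(t, delta=2.0):
    """Huberized hinge loss of the margin t = y * f(x).

    Zero for t > 1, quadratic on (1 - delta, 1], and linear below,
    so it is differentiable everywhere, unlike the hinge loss.
    """
    t = np.asarray(t, dtype=float)
    return np.where(
        t > 1.0, 0.0,
        np.where(t > 1.0 - delta,
                 (1.0 - t) ** 2 / (2.0 * delta),
                 1.0 - t - delta / 2.0))

def scad_penalty(beta, lam=1.0, a=3.7):
    """SCAD penalty (Fan and Li, 2001), elementwise; requires a > 2."""
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(
        b <= lam, lam * b,
        np.where(b <= a * lam,
                 (2.0 * a * lam * b - b ** 2 - lam ** 2) / (2.0 * (a - 1.0)),
                 lam ** 2 * (a + 1.0) / 2.0))

def mcp_penalty(beta, lam=1.0, gamma=3.0):
    """Minimax concave penalty (Zhang, 2010), elementwise; gamma > 1."""
    b = np.abs(np.asarray(beta, dtype=float))
    return np.where(b <= gamma * lam,
                    lam * b - b ** 2 / (2.0 * gamma),
                    gamma * lam ** 2 / 2.0)

def penalized_objective(beta, X, y, penalty, delta=2.0):
    """Empirical huberized-hinge risk plus a non-convex penalty on beta."""
    margins = y * (X @ beta)  # y in {-1, +1}, X of shape (n, p)
    return huberized_hinge(margins, delta).mean() + penalty(beta).sum()

For example, penalized_objective(beta, X, y, lambda b: mcp_penalty(b, lam=0.1)) evaluates an MCP-HSVM-style objective at beta. Note that both penalties agree with the LASSO near zero but level off for large coefficients; this flattening is what avoids the LASSO's over-shrinkage of large coefficients and underlies the Oracle property, while the differentiability of the huberized loss is what keeps the optimization computationally simple.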

Keywords

Huberized loss / Machine learning / Non-convex penalties / Support vector machine (SVM)

Cite this article

Wang Peng, Guo Ji, Li Lin-Feng. Machine learning model based on non-convex penalized huberized-SVM. Journal of Electronic Science and Technology, 2024, 22(1): 100246. https://doi.org/10.1016/j.jnlest.2024.100246

