Support vector machine incremental learning triggered by wrongly predicted samples

Ting-long Tang, Qiu Guan, Yi-rong Wu

Optoelectronics Letters: 232-235. DOI: 10.1007/s11801-018-7254-3

Abstract

According to the classic Karush-Kuhn-Tucker (KKT) theorem, at every step of incremental support vector machine (SVM) learning, a newly added sample that violates the KKT conditions becomes a new support vector (SV) and causes old samples to migrate between the SV set and the non-support-vector (NSV) set, after which the learning model must be updated based on the SVs. However, it is not clear at that moment which of the old samples will move between the SV and NSV sets. Moreover, the learning model may be updated unnecessarily, which does little to improve its accuracy while slowing down training. Therefore, how to choose new SVs from the old sets during the incremental stages, and when to carry out the incremental steps, strongly influences the accuracy and efficiency of incremental SVM learning. In this work, a new algorithm is proposed that selects candidate SVs and, at the same time, uses wrongly predicted samples to trigger the incremental processing. Experimental results show that the proposed algorithm achieves good performance, with high efficiency, high speed and good accuracy.
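For illustration, the trigger idea described in the abstract can be sketched as follows. This is a minimal sketch built on scikit-learn's SVC, not the authors' implementation: the margin-band rule for picking candidate SVs, the band width, and the function name are assumptions introduced here for clarity.

```python
import numpy as np
from sklearn.svm import SVC

def error_triggered_incremental_svm(stream, X_init, y_init,
                                    margin_band=1.0, C=1.0, kernel="rbf"):
    """Sketch of error-triggered incremental SVM learning (binary case).

    A newly arriving sample triggers retraining only when the current
    model predicts it wrongly (such a sample necessarily violates the
    KKT conditions); correctly predicted samples leave the model
    untouched.  Retraining uses the current support vectors, old
    samples lying close to the margin (an assumed candidate-SV rule),
    and the new sample, instead of the whole sample history.
    """
    X_keep, y_keep = np.asarray(X_init), np.asarray(y_init)
    model = SVC(C=C, kernel=kernel).fit(X_keep, y_keep)

    for x_new, y_new in stream:
        x_new = np.asarray(x_new).reshape(1, -1)
        if model.predict(x_new)[0] == y_new:
            continue  # correct prediction: skip the costly model update

        # Candidate SVs: current SVs plus old samples inside the margin
        # band |f(x)| <= margin_band (the band width is a heuristic).
        keep = np.abs(model.decision_function(X_keep)) <= margin_band
        keep[model.support_] = True

        X_keep = np.vstack([X_keep[keep], x_new])
        y_keep = np.concatenate([y_keep[keep], [y_new]])
        model = SVC(C=C, kernel=kernel).fit(X_keep, y_keep)

    return model
```

Under these assumptions, the model is refit only when a streaming sample is misclassified and only on a reduced candidate set, so the number of updates and the size of each update stay small compared with retraining on every new sample, which is the accuracy-versus-speed trade-off the abstract discusses.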

Cite this article

Ting-long Tang, Qiu Guan, Yi-rong Wu. Support vector machine incremental learning triggered by wrongly predicted samples. Optoelectronics Letters: 232-235. DOI: 10.1007/s11801-018-7254-3
