Adversarial image detection based on the maximum channel of saliency maps

Haoran Fu, Chundong Wang, Hao Lin, Qingbo Hao

Optoelectronics Letters ›› 2022, Vol. 18 ›› Issue (5): 307-312. DOI: 10.1007/s11801-022-1157-z

Abstract

Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples (AEs) that induce incorrect behaviors. Various detection techniques have been developed to defend against these AEs, but most are effective only against specific AEs and generalize poorly to others. We propose a new detection method based on the maximum channel of saliency maps (MCSM). The proposed method alters the structure of adversarial perturbations while preserving the statistical properties of images. We conduct a complete evaluation on AEs generated by 6 prominent adversarial attacks on the ImageNet large scale visual recognition challenge (ILSVRC) 2012 validation set. The experimental results show that our method performs well at detecting various AEs.
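
Although the full text is not available here, the abstract suggests a pipeline of roughly this shape: compute a gradient-based saliency map of the input image, reduce it to its channel-wise maximum, and examine the result for traces of adversarial perturbation. Below is a minimal sketch under those assumptions, using PyTorch and a standard ResNet-50; the function names are illustrative, and the detection criterion in the final comment is an assumption rather than the paper's exact rule.

import torch
import torchvision.models as models

def saliency_map(model, image, label):
    # Gradient-based saliency (Simonyan et al.): the absolute gradient of
    # the predicted-class score with respect to the input pixels.
    image = image.clone().detach().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, label]
    score.backward()
    return image.grad.detach().abs()          # shape (3, H, W)

def max_channel(saliency):
    # Collapse the per-channel saliency to its channel-wise maximum,
    # i.e. the "maximum channel" reduction named in the title.
    return saliency.max(dim=0).values         # shape (H, W)

# Illustrative usage on a random placeholder image.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
image = torch.rand(3, 224, 224)
label = model(image.unsqueeze(0)).argmax().item()
mcsm = max_channel(saliency_map(model, image, label))
# A detector could then threshold statistics of, or be trained on, the
# MCSM representation to separate clean images from AEs (assumed).
print(mcsm.shape)                             # torch.Size([224, 224])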

Cite this article

Haoran Fu, Chundong Wang, Hao Lin, Qingbo Hao. Adversarial image detection based on the maximum channel of saliency maps. Optoelectronics Letters, 2022, 18(5): 307-312. DOI: 10.1007/s11801-022-1157-z


