Non-Lipschitz Attack: A More Sparse Adversarial Attack via Non-Lipschitz $\ell_{p}$ Regularization
Xuan Lin, Haidong Xie, Chunlin Wu, Xueshuang Xiang
CSIAM Trans. Appl. Math., 2023, Vol. 4, Issue 4: 797–819.
Deep neural networks are considerably vulnerable to adversarial attacks. Among these, sparse attacks mislead image classifiers with pixel-level perturbations that alter only a few pixels, and they have much potential in physical-world applications. The existing sparse attacks are mostly based on
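A minimal numeric sketch (not the authors' code, and independent of any particular attack) of why $\ell_{p}$ regularization with $0<p<1$ favors sparse perturbations: for perturbations of comparable magnitude, the $\ell_{p}^{p}$ quasi-norm penalizes many small pixel changes far more than one large change. The function name and example vectors below are illustrative assumptions.

```python
import numpy as np

def lp_quasi_norm(delta, p=0.5):
    """Compute the l_p^p quasi-norm sum(|delta_i|^p) for 0 < p < 1.

    Non-Lipschitz at zero, which drives many components exactly to 0
    and thereby promotes sparsity in the perturbation delta.
    """
    return np.sum(np.abs(delta) ** p)

# Two hypothetical perturbations: one changes a single pixel strongly,
# the other spreads smaller changes over four pixels.
sparse = np.array([2.0, 0.0, 0.0, 0.0])
dense = np.array([1.0, 1.0, 1.0, 1.0])

print(lp_quasi_norm(sparse))  # 2**0.5 ≈ 1.414
print(lp_quasi_norm(dense))   # 4 * 1**0.5 = 4.0
```

The regularizer thus assigns a much smaller penalty to the single-pixel change, so minimizing a loss plus this term steers the attack toward perturbations with small support.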
Keywords: Sparse adversarial attack / $\ell_{p}\ (0<p<1)$ regularization / lower bound theory / support shrinkage / ADMM