Adversarial attacks against dynamic graph neural networks via node injection

Yanan Jiang, Hui Xia

High-Confidence Computing ›› 2024, Vol. 4 ›› Issue (1) : 100185

DOI: 10.1016/j.hcc.2023.100185

Research Article

Abstract

Dynamic graph neural networks (DGNNs) have demonstrated extraordinary value in many practical applications. Nevertheless, the vulnerability of DGNNs is a serious hidden danger, as a small perturbation added to the model can markedly reduce its performance. Moreover, existing adversarial attack schemes are designed for static graphs, and the variability of the attacked models prevents these schemes from transferring to dynamic graphs. In this paper, we attack DGNNs via node injection and propose the first node injection attack against DGNNs based on structural fragility, named the Structural Fragility-based dynamic graph node Injection Attack (SFIA). SFIA first determines the target time based on the period weight. It then introduces a structurally fragile edge selection strategy to build the target node set and links the target nodes to a malicious node through serial injection. Finally, an optimization function is designed to generate adversarial features for the malicious nodes. Experiments on datasets from four different fields show that SFIA is significantly superior to many comparative approaches: when nodes amounting to 1% of the original node count are injected via SFIA, the Recall and MRR of the target DGNN on link prediction decrease by 17.4% and 14.3% respectively, and node classification accuracy decreases by 8.7%.
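The three-stage pipeline described in the abstract (target-time selection, fragile target selection, serial injection) can be sketched on a toy discrete-time dynamic graph. This is a minimal illustration with assumed proxies, not the paper's actual method: snapshot edge count stands in for the period weight, node degree stands in for structural fragility, and the adversarial-feature optimization step is omitted.

```python
import numpy as np

def pick_target_time(snapshots):
    # Proxy for "period weight": weight each snapshot by its edge count
    # and attack the period that contributes most edges.
    weights = [int(adj.sum()) // 2 for adj in snapshots]
    return int(np.argmax(weights))

def fragile_targets(adj, k):
    # Proxy for "structural fragility": pick the k lowest-degree nodes,
    # whose representations one injected edge can shift the most.
    degrees = adj.sum(axis=1)
    return [int(i) for i in np.argsort(degrees)[:k]]

def inject_node(adj, targets):
    # Serially link one malicious node to every selected target node.
    n = adj.shape[0]
    poisoned = np.zeros((n + 1, n + 1), dtype=adj.dtype)
    poisoned[:n, :n] = adj
    for t in targets:
        poisoned[n, t] = poisoned[t, n] = 1
    return poisoned

# Toy dynamic graph: two adjacency snapshots over 4 nodes.
snap0 = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
snap1 = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]])
snapshots = [snap0, snap1]

t = pick_target_time(snapshots)           # snapshot 1 has more edges
targets = fragile_targets(snapshots[t], k=2)
poisoned = inject_node(snapshots[t], targets)
print(t, targets, poisoned.shape)
```

In the full attack, the injected node's feature vector would then be optimized against the DGNN's loss rather than left at zero as here.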

Keywords

Dynamic graph neural network / Adversarial attack / Malicious node / Vulnerability

Cite this article

Download citation ▾
Yanan Jiang, Hui Xia. Adversarial attacks against dynamic graph neural networks via node injection. High-Confidence Computing, 2024, 4(1): 100185 DOI:10.1016/j.hcc.2023.100185


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (NSFC) (62172377, 61872205), the Shandong Provincial Natural Science Foundation, China (ZR2019MF018), and the Startup Research Foundation for Distinguished Scholars (202112016).

