Journal of Tsinghua University (Science and Technology), 2024, 64(1): 13-24. DOI: 10.16511/j.cnki.qhdxxb.2024.21.002
BIG DATA
Robust verification of graph contrastive learning based on node feature adversarial attacks
XING Yujie1, WANG Xiao2, SHI Chuan1, HUANG Hai1, CUI Peng3
1. School of Computer, Beijing University of Posts and Telecommunications, Beijing 100876, China;
2. School of Software, Beihang University, Beijing 102206, China;
3. Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
Abstract

[Objective] Many recent studies have shown that graph neural networks lack robustness against adversarial attacks that perturb graph structures and node features, and their predictions may become unreliable under such perturbations. Graph contrastive learning methods are affected in the same way. However, existing robustness evaluation methods are usually entangled with specific attack algorithms, data labels, and downstream tasks, dependencies that are best avoided, especially in the self-supervised setting of graph contrastive learning. This paper therefore introduces a robustness verification algorithm for graph contrastive learning that assesses the robustness of graph convolutional networks against node feature adversarial attacks.

[Methods] First, based on the positive and negative pairs used in graph contrastive learning models, the robustness verification problem of graph contrastive learning is defined as a similarity comparison between the adversarial samples of a target node and the target node itself together with its negative samples. The problem is then expressed as a dynamic programming problem, which removes the dependency on attack algorithms, data labels, and downstream tasks. To solve this problem, a series of novel and effective methods are proposed. For the binary attributes commonly used in graph data, corresponding perturbation spaces are constructed. Because the negative sample space in graph contrastive learning is large, a negative sample sampling strategy is designed to improve solving efficiency. Where binary discrete attributes and nonlinear activation functions make the problem difficult to solve exactly, relaxation techniques and dual problem optimization methods are employed to further improve efficiency.
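As an illustration of the similarity-comparison criterion described in [Methods] (the notation here is assumed for exposition and is not taken verbatim from the paper): let $f$ be the trained contrastive encoder, $f_t(X, A)$ the embedding of target node $t$ given feature matrix $X$ and adjacency matrix $A$, $\mathcal{N}_t$ a sampled set of negative nodes, $\mathcal{B}_\Delta(X)$ the set of binary feature matrices differing from $X$ in at most $\Delta$ entries, and $\mathrm{sim}$ the similarity used by the contrastive loss (e.g., cosine similarity in GRACE). A worst-case margin of the following form captures the comparison between the adversarial sample, the target node, and its negative samples:

$$
m(t) \;=\; \min_{\tilde{X} \in \mathcal{B}_\Delta(X)} \; \min_{v \in \mathcal{N}_t} \Big[ \mathrm{sim}\big(f_t(\tilde{X}, A),\, f_t(X, A)\big) \;-\; \mathrm{sim}\big(f_t(\tilde{X}, A),\, f_v(X, A)\big) \Big].
$$

Under this reading, a positive lower bound on $m(t)$ (for instance, one obtained through relaxation and the dual problem) would verify node $t$ as robust, a negative upper bound would verify it as non-robust, and the remaining cases would correspond to the unverifiable nodes reported in [Results].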
[Results] To assess the effectiveness of the proposed robustness verification algorithm, experiments were conducted with the classic GRACE model on the Cora and CiteSeer datasets. As the perturbation intensity increased, the proportion of nodes verified as robust decreased rapidly, the proportion of nodes verified as non-robust increased significantly, and the proportion of unverifiable nodes remained low. These observations demonstrate the effectiveness of the proposed framework for verifying the robustness of graph contrastive learning. The experiments also revealed that the robustness of ARIEL, a contrastive learning model designed against specific attack algorithms, does not generalize: it exhibits poor verifiable robustness, suggesting vulnerability to other attack algorithms. Ablation experiments identified the adversarial attack components of ARIEL as the main cause of its diminished verifiable robustness. Finally, parameter experiments demonstrated the reasonableness of the proposed negative sample sampling strategy: sampling 20 negative samples is sufficient for the verification algorithm to perform well with high efficiency.

[Conclusions] The analysis and experimental results show that the proposed graph contrastive learning robustness verification algorithm not only removes the dependency on attack algorithms, data labels, and downstream tasks but also provides a more comprehensive measurement than traditional robustness metrics. It can verify robustness in multiple directions, thereby supporting the development of comprehensively robust graph contrastive learning algorithms.
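For concreteness, the sketch below estimates such a margin empirically with randomly sampled binary feature flips and a toy one-layer encoder. All names (encoder, empirical_margin, the random graph) are hypothetical, and the random search only yields an upper bound on the true worst-case margin, unlike the relaxation-and-dual procedure described above: a negative value exhibits a concrete adversarial perturbation, while a positive value does not by itself certify robustness.

```python
import numpy as np

rng = np.random.default_rng(0)

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def encoder(X, A, W):
    """Toy one-layer graph encoder: row-normalized propagation + linear map + tanh.
    Stands in for a trained contrastive encoder such as GRACE (hypothetical)."""
    A_hat = A + np.eye(A.shape[0])
    D_inv = 1.0 / A_hat.sum(axis=1, keepdims=True)
    return np.tanh(D_inv * (A_hat @ X) @ W)

def empirical_margin(X, A, W, target, num_negatives=20, budget=2, trials=200):
    """Estimate the worst-case similarity margin of `target` under random
    binary feature flips (at most `budget` flipped entries per trial).
    This is an empirical check, not the certified bound from the paper."""
    n, d = X.shape
    z_clean = encoder(X, A, W)
    negatives = rng.choice([i for i in range(n) if i != target],
                           size=min(num_negatives, n - 1), replace=False)
    worst = np.inf
    for _ in range(trials):
        X_adv = X.copy()
        rows = rng.integers(0, n, size=budget)
        cols = rng.integers(0, d, size=budget)
        X_adv[rows, cols] = 1.0 - X_adv[rows, cols]   # flip binary attributes
        z_adv = encoder(X_adv, A, W)[target]
        margin = min(cosine_sim(z_adv, z_clean[target]) -
                     cosine_sim(z_adv, z_clean[v]) for v in negatives)
        worst = min(worst, margin)
    return worst

# Tiny random graph just to exercise the functions.
n, d, h = 8, 16, 4
X = (rng.random((n, d)) < 0.3).astype(float)          # binary node features
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                                           # symmetric adjacency
W = rng.normal(size=(d, h))

print("empirical worst-case margin for node 0:", empirical_margin(X, A, W, target=0))
```

Sampling about 20 negatives mirrors the parameter experiment reported in [Results]; replacing the random search with the relaxation and dual optimization described in [Methods] is what turns this empirical check into a verification procedure.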
Keywords: graph contrastive learning; graph convolutional network; robustness verification; adversarial attack; adversarial training
Issue Date: 30 November 2023
Cite this article:   
XING Yujie, WANG Xiao, SHI Chuan, et al. Robust verification of graph contrastive learning based on node feature adversarial attacks[J]. Journal of Tsinghua University (Science and Technology), 2024, 64(1): 13-24.
URL:  
http://jst.tsinghuajournals.com/EN/10.16511/j.cnki.qhdxxb.2024.21.002     OR     http://jst.tsinghuajournals.com/EN/Y2024/V64/I1/13
[1] ABU-EL-HAIJA S, PEROZZI B, KAPOOR A, et al. Mixhop:Higher-order graph convolutional architectures via sparsified neighborhood mixing[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, CA, USA:PMLR, 2019:21-29.
[2] WU J, HE J R, XU J J. DEMO-Net:Degree-specific graph neural networks for node and graph classification[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage, USA:ACM, 2019:406-415.
[3] KIPF T N, WELLING M. Variational graph auto-encoders[EB/OL]. (2016-11-21). https://arxiv.org/abs/1611.07308.
[4] YOU J X, YING R, LESKOVEC J. Position-aware graph neural networks[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, CA, USA:PMLR, 2019:7134-7143.
[5] GAO H Y, JI S W. Graph u-nets[C]//Proceedings of the 36th International Conference on Machine Learning. Long Beach, CA, USA:PMLR, 2019:2083-2092.
[6] ZHANG M H, CUI Z C, NEUMANN M, et al. An end-to-end deep learning architecture for graph classification[C]//Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence. New Orleans, USA:AAAI, 2018:4438-4445.
[7] VELIČKOVIĆ P, FEDUS W, HAMILTON W L, et al. Deep graph infomax[C]//7th International Conference on Learning Representations. New Orleans, LA, USA:OpenReview.net, 2019.
[8] HASSANI K, KHASAHMADI A H. Contrastive multi-view representation learning on graphs[C]//37th International Conference on Machine Learning. Vienna, Austria:PMLR, 2020:4116-4126.
[9] ZHU Y Q, XU Y C, YU F, et al. Deep graph contrastive representation learning[EB/OL]. (2020-07-13). https://arxiv.org/abs/2006.04131.
[10] ZÜGNER D, AKBARNEJAD A, GÜNNEMANN S. Adversarial attacks on neural networks for graph data[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. London, UK:ACM, 2018:2847-2856.
[11] ZÜGNER D, GÜNNEMANN S. Adversarial attacks on graph neural networks via meta learning[C]//7th International Conference on Learning Representations. New Orleans, USA:OpenReview.net, 2019.
[12] PEROZZI B, AL-RFOU R, SKIENA S. DeepWalk:Online learning of social representations[C]//Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, USA:ACM, 2014:701-710.
[13] GROVER A, LESKOVEC J. Node2vec:Scalable feature learning for networks[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. San Francisco, USA:ACM, 2016:855-864.
[14] HJELM R D, FEDOROV A, LAVOIE-MARCHILDON S, et al. Learning deep representations by mutual information estimation and maximization[C]//7th International Conference on Learning Representations. New Orleans, LA, USA:OpenReview.net, 2019.
[15] PENG Z, HUANG W B, LUO M N, et al. Graph representation learning via graphical mutual information maximization[C]//Proceedings of The Web Conference. Taipei, China:ACM, 2020:259-270.
[16] CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[C]//Proceedings of the 37th International Conference on Machine Learning. Vienna, Austria:PMLR, 2020:1597-1607.
[17] HE K M, FAN H Q, WU Y X, et al. Momentum contrast for unsupervised visual representation learning[C]//2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, USA:IEEE, 2020:9729-9738.
[18] ZHU Y Q, XU Y C, YU F, et al. Graph contrastive learning with adaptive augmentation[C]//Proceedings of the Web Conference. Ljubljana, Slovenia:ACM, 2021:2069-2080.
[19] LIU N, WANG X, BO D Y, et al. Revisiting graph contrastive learning from the perspective of graph spectrum[C]//Proceedings of the 36th International Conference on Neural Information Processing Systems. New Orleans, Louisiana, USA:MIT Press, 2022.
[20] LIN L, CHEN J H, WANG H N. Spectral augmentation for self-supervised learning on graphs[C]//The Eleventh International Conference on Learning Representations. Kigali, Rwanda:OpenReview.net, 2023.
[21] HE D X, ZHAO J T, GUO R, et al. Contrastive learning meets homophily:Two birds with one stone[C]//Proceedings of the 40th International Conference on Machine Learning. Honolulu, Hawaii, USA:PMLR, 2023.
[22] ZÜGNER D, GÜNNEMANN S. Certifiable robustness and robust training for graph convolutional networks[C]//Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. Anchorage, USA:ACM, 2019:246-256.
[23] BOJCHEVSKI A, GÜNNEMANN S. Certifiable robustness to graph perturbations[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. New York, USA:ACM, 2019:8319-8330.
[24] JIN H W, SHI Z, PERURI A, et al. Certified robustness of graph convolution networks for graph classification under topological attacks[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems. Vancouver, Canada:ACM, 2020:8463-8474.
[25] WANG B H, JIA J Y, CAO X Y, et al. Certified robustness of graph neural networks against adversarial structural perturbation[C]//Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. New York, USA:ACM, 2021:1645-1653.
[26] OSSELIN P, KENLAY H, DONG X W. Structure-aware robustness certificates for graph classification[C]//Uncertainty in Artificial Intelligence. Pittsburgh, PA, USA:PMLR, 2023.
[27] FENG F L, HE X N, TANG J, et al. Graph adversarial training:Dynamically regularizing based on graph structure[J]. IEEE Transactions on Knowledge and Data Engineering, 2021, 33(6):2493-2504.
[28] WU C, WANG C K, XU J C, et al. Graph contrastive learning with generative adversarial network[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Long Beach, USA:ACM, 2023:2721-2730.
[29] FENG S Y, JING B Y, ZHU Y D, et al. Adversarial graph contrastive learning with information regularization[C]//Proceedings of the ACM Web Conference 2022. Lyon, France:ACM, 2022:1362-1371.
[30] MADRY A, MAKELOV A, SCHMIDT L, et al. Towards deep learning models resistant to adversarial attacks[C]//6th International Conference on Learning Representations. Vancouver, BC, Canada:OpenReview.net, 2018.
[31] WANG Z K, LIU W W. Robustness verification for contrastive learning[C]//Proceedings of the 39th International Conference on Machine Learning. Baltimore, Maryland, USA:PMLR, 2022:22865-22883.
[32] WONG E, KOLTER Z. Provable defenses against adversarial examples via the convex outer adversarial polytope[C]//Proceedings of the 35th International Conference on Machine Learning. Stockholmsmässan, Stockholm, Sweden:PMLR, 2018:5283-5292.
[33] MCCALLUM A K, NIGAM K, RENNIE J, et al. Automating the construction of internet portals with machine learning[J]. Information Retrieval, 2000, 3(2):127-163.
[34] SEN P, NAMATA G, BILGIC M, et al. Collective classification in network data[J]. AI Magazine, 2008, 29(3):93-106.