Journal of Tsinghua University(Science and Technology)    2024, Vol. 64 Issue (1) : 1-12     DOI: 10.16511/j.cnki.qhdxxb.2024.21.001
BIG DATA
Two-stage fusion multiview graph clustering based on the attention mechanism
ZHAO Xingwang1,2, HOU Zhedong1,2, YAO Kaixuan1,2, LIANG Jiye1,2
1. School of Computer and Information Technology, Shanxi University, Taiyuan 030006, China;
2. Key Laboratory of Computational Intelligence and Chinese Information Processing, Ministry of Education (Shanxi University), Taiyuan 030006, China
Abstract  [Objective] Multiview graph clustering aims to uncover the inherent cluster structure in multiview graph data and has received extensive research attention in recent years. However, views differ in quality, yet existing methods treat all views equally during fusion instead of weighting each view according to its quality. This can discard complementary information across views and ultimately degrade clustering quality. In addition, the topological structure and the node attribute information in multiview graph data differ substantially in content and form, making them difficult to integrate effectively. To address these problems, this paper proposes a two-stage fusion multiview graph clustering algorithm based on an attention mechanism. [Methods] The algorithm consists of three stages: feature filtering based on graph filtering, feature fusion based on the attention mechanism, and topology fusion based on the attention mechanism. In the first stage, graph filters combine the attribute information with the topological structure of each view, filtering out high-frequency noise to produce a smoother embedding representation. In the second stage, the smooth representations of the individual views are fused through an attention mechanism into a consensus smooth representation that incorporates information from all views. A consensus Laplacian matrix is also obtained by combining the Laplacian matrices of the views with learnable weights. The consensus Laplacian matrix and the consensus smooth representation are then fed into an encoder to obtain the final embedded representation, and the similarity matrix of this representation is computed. Training samples (positive and negative node pairs) are selected from the similarity matrix, and the embedded representation and the learnable Laplacian weights are optimized iteratively to obtain a more compact embedding. Finally, spectral clustering on the embedding yields the clustering results. Performance is evaluated with widely used clustering metrics, namely accuracy, normalized mutual information, adjusted Rand index, and F1-score, on three datasets: Association for Computing Machinery (ACM), Digital Bibliography & Library Project (DBLP), and Internet Movie Database (IMDB). [Results] 1) The proposed algorithm handles multiview graph data more effectively than existing methods, particularly on the ACM and DBLP datasets, although it does not outperform LMEGC and MCGC on the IMDB dataset. 2) By examining view quality, the algorithm learns view-specific weights that reflect the quality of each view. 3) Compared with the best-performing single view on each dataset, the algorithm achieves average performance improvements of 2.4% on ACM, 2.9% on DBLP, and 2.1% on IMDB after fusing all views. 4) An analysis of the number of graph filter layers and of the sampling ratios of positive and negative node pairs shows that a small number of filter layers performs best, and the optimal ratios for positive and negative node pairs are approximately 0.01 and 0.5, respectively. [Conclusions] The algorithm combines attribute information with topological information through graph filtering, producing smoother representations that are better suited to clustering. The attention mechanisms learn weights from both the topological and the attribute perspectives according to view quality, so the final representation incorporates information from every view while limiting the influence of poor-quality views. The proposed method achieves the expected results and substantially improves clustering performance.
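A minimal sketch may help make the three stages above concrete. The Python code below is an illustrative reconstruction rather than the authors' implementation: the function names, the low-pass filter form (I - L/2)^t, the cosine-similarity stand-in for the learnable attention weights, and the omission of the encoder and positive/negative-pair training are all simplifying assumptions.

import numpy as np
from scipy.sparse import csr_matrix, diags, identity
from sklearn.cluster import SpectralClustering

def sym_laplacian(adj):
    # Symmetrically normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
    adj = csr_matrix(adj)
    deg = np.asarray(adj.sum(axis=1)).ravel()
    d_inv_sqrt = diags(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return identity(adj.shape[0]) - d_inv_sqrt @ adj @ d_inv_sqrt

def graph_filter(adj, features, layers=2):
    # Stage 1: low-pass filtering H = (I - L/2)^t X suppresses high-frequency
    # noise and yields a smoother per-view representation.
    flt = identity(adj.shape[0]) - 0.5 * sym_laplacian(adj)
    smooth = np.asarray(features, dtype=float)
    for _ in range(layers):
        smooth = flt @ smooth
    return smooth

def attention_weights(view_reps):
    # Softmax weights from each view's agreement with the mean representation,
    # a simple stand-in for the learnable attention described in the paper.
    mean_rep = np.mean(view_reps, axis=0)
    scores = np.array([
        np.sum(r * mean_rep) / (np.linalg.norm(r) * np.linalg.norm(mean_rep) + 1e-12)
        for r in view_reps
    ])
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def two_stage_fusion_clustering(adjs, features, n_clusters, layers=2):
    # Stage 1: per-view smoothing (the attribute matrix is shared across views).
    view_reps = [graph_filter(a, features, layers) for a in adjs]
    # Stage 2: attention-weighted fusion into a consensus smooth representation.
    w = attention_weights(view_reps)
    consensus = sum(wi * rep for wi, rep in zip(w, view_reps))
    # The full method also fuses the view Laplacians with learnable weights and
    # trains an encoder on positive/negative node pairs drawn from the similarity
    # matrix; this sketch clusters the consensus representation directly.
    norms = np.linalg.norm(consensus, axis=1, keepdims=True)
    sim = np.clip((consensus @ consensus.T) / (norms @ norms.T + 1e-12), 0.0, None)
    labels = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                                random_state=0).fit_predict(sim)
    return labels, w

For a dataset with two relation graphs sharing one attribute matrix, a call such as two_stage_fusion_clustering([adj1, adj2], X, n_clusters=3) would return cluster assignments together with the learned view weights; under the weighting scheme sketched here, higher-quality views should receive larger weights.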
Keywords: multiview learning; graph clustering; attention mechanism; graph learning; embedded representation
Issue Date: 30 November 2023
Cite this article:   
ZHAO Xingwang, HOU Zhedong, YAO Kaixuan, et al. Two-stage fusion multiview graph clustering based on the attention mechanism[J]. Journal of Tsinghua University(Science and Technology), 2024, 64(1): 1-12.
URL:  
http://jst.tsinghuajournals.com/EN/10.16511/j.cnki.qhdxxb.2024.21.001     OR     http://jst.tsinghuajournals.com/EN/Y2024/V64/I1/1
[1] LIU J H, WANG Y, QIAN Y H. Multi-view clustering with spectral structure fusion[J]. Journal of Computer Research and Development, 2022, 59(4):922-935. (in Chinese)
[2] LIU X L, BAI L, ZHAO X W, et al. Incomplete multi-view clustering algorithm based on multi-order neighborhood fusion[J]. Journal of Software, 2022, 33(4):1354-1372. (in Chinese)
[3] LIN Z P, KANG Z. Graph filter-based multi-view attributed graph clustering[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence. Montreal, Canada:IJCAI.org, 2021:2723-2729.
[4] PAN E L, KANG Z. Multi-view contrastive graph clustering[C]//Proceedings of the 35th Conference on Neural Information Processing Systems. Cambridge, USA:MIT Press, 2021:2148-2159.
[5] LIN Z P, KANG Z, ZHANG L Z, et al. Multi-view attributed graph clustering[J]. IEEE Transactions on Knowledge and Data Engineering, 2023, 35(2):1872-1880.
[6] FAN S H, WANG X, SHI C, et al. One2Multi graph autoencoder for multi-view graph clustering[C]//Proceedings of the Web Conference 2020. Taipei, China:ACM, 2020:3070-3076.
[7] CAI E C, HUANG J, HUANG B S, et al. GRAE:Graph recurrent autoencoder for multi-view graph clustering[C]//Proceedings of the 4th International Conference on Algorithms, Computing and Artificial Intelligence. Sanya, China:ACM, 2021:72.
[8] LIANG J Y, LIU X L, BAI L, et al. Incomplete multi-view clustering via local and global co-regularization[J]. Science China Information Sciences, 2022, 65(5):152105.
[9] CHUNG F R K. Spectral graph theory[M]. Providence:American Mathematical Society, 1997.
[10] WU D Y, XU J, DONG X, et al. GSPL:A succinct kernel model for group-sparse projections learning of multiview data[C]//Proceedings of the 30th International Joint Conference on Artificial Intelligence. San Francisco, USA:Morgan Kaufmann, 2021:3185-3191.
[11] LI R H, ZHANG C Q, HU Q H, et al. Flexible multi-view representation learning for subspace clustering[C]//Proceedings of the 28th International Joint Conference on Artificial Intelligence. Macao, China:AAAI Press, 2019:2916-2922.
[12] NIE F P, LI J, LI X L. Self-weighted multiview clustering with multiple graphs[C]//Proceedings of the 26th International Joint Conference on Artificial Intelligence. Melbourne, Australia:AAAI Press, 2017:2564-2570.
[13] XIA W, WANG S, YANG M, et al. Multi-view graph embedding clustering network:Joint self-supervision and block diagonal representation[J]. Neural Networks, 2022, 145:1-9.
[14] CHENG J F, WANG Q Q, TAO Z Q, et al. Multi-view attribute graph convolution networks for clustering[C]//Proceedings of the 29th International Joint Conference on Artificial Intelligence. Yokohama, Japan:IJCAI.org, 2021:411.
[15] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems. Long Beach, USA:Curran Associates Inc., 2017:6000-6010.
[16] DEVLIN J, CHANG M W, LEE K, et al. BERT:Pre-training of deep bidirectional transformers for language understanding[C]//Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies, Volume 1(Long and Short Papers). Minneapolis, USA:ACL, 2019.
[17] KITAEV N, KAISER L, LEVSKAYA A. Reformer:The efficient transformer[C]//Proceedings of the 8th International Conference on Learning Representations. Addis Ababa, Ethiopia:OpenReview.net, 2020.
[18] SHUMAN D I, NARANG S K, FROSSARD P, et al. The emerging field of signal processing on graphs:Extending high-dimensional data analysis to networks and other irregular domains[J]. IEEE Signal Processing Magazine, 2013, 30(3):83-98.
[19] WANG J, LIANG J Y, YAO K X, et al. Graph convolutional autoencoders with co-learning of graph structure and node attributes[J]. Pattern Recognition, 2022, 121:108215.
[20] KIPF T N, WELLING M. Variational graph auto-encoders[EB/OL].[2016-01-01]. https://arxiv.org/abs/1611.07308.
[21] TANG J, QU M, WANG M Z, et al. LINE:Large-scale information network embedding[C]//Proceedings of the 24th International Conference on World Wide Web. Florence, Italy:International World Wide Web Conferences Steering Committee, 2015:1067-1077.
[22] LIU W Y, CHEN P Y, YEUNG S, et al. Principled multilayer network embedding[C]//Proceedings of the 2017 International Conference on Data Mining Workshops. New Orleans, USA:IEEE Press, 2017:134-141.
[23] XIA R K, PAN Y, DU L, et al. Robust multi-view spectral clustering via low-rank and sparse decomposition[C]//Proceedings of the 28th AAAI Conference on Artificial Intelligence. Québec City, Canada:AAAI Press, 2014:2149-2155.
[24] FETTAL C, LABIOD L, NADIF M. Simultaneous linear multi-view attributed graph representation learning and clustering[C]//Proceedings of the 16th ACM International Conference on Web Search and Data Mining. Singapore, Singapore:ACM, 2023:303-311.