Journal of Tsinghua University(Science and Technology)    2024, Vol. 64 Issue (4) : 700-711     DOI: 10.16511/j.cnki.qhdxxb.2024.22.002
INFORMATION SCIENCE AND TECHNOLOGY
Adaptive damping for a generalized unitary approximate message passing algorithm
LEI Xupeng1, YANG Jian2, XU Menghuai1, ZHU Jiang1, GONG Min2
1. Ocean College, Zhejiang University, Zhoushan 316021, China;
2. China Academy of Launch Vehicle Technology, Beijing 100076, China
Abstract  [Objective] A model in which an unknown signal/parameter undergoes a linear transformation followed by a componentwise nonlinear transformation is known as a generalized linear model (GLM). Estimating an unknown signal/parameter from nonlinear measurements is a fundamental problem in the radar and communication fields, with applications such as one-bit radar, one-bit multiple-input multiple-output communication, and phase retrieval. The generalized approximate message passing (GAMP) algorithm is an efficient Bayesian inference technique for GLMs. GAMP has low computational complexity, excellent reconstruction performance, and the ability to automatically estimate the noise variance and nuisance parameters. However, when the elements of the measurement matrix deviate from a sub-Gaussian distribution, the performance of GAMP degrades considerably. To address this issue, the generalized vector approximate message passing (GVAMP) algorithm was proposed; it employs a vector factor graph representation and expectation propagation to achieve good performance over a broader ensemble of measurement matrices. Moreover, the generalized unitary approximate message passing (GUAMP) algorithm, which applies the singular value decomposition to decorrelate the measurement matrix, was introduced. GUAMP is more robust than GAMP and GVAMP, particularly for correlated measurement matrices. However, the signal estimation error of GUAMP may fluctuate even after a sufficient number of iterations. In addition, once the correlation of the measurement matrix exceeds a threshold, GUAMP performs worse than the adaptive GAMP (AD-GAMP) algorithm. Therefore, a method that further enhances the robustness and performance of GUAMP is needed. [Methods] This paper proposes an adaptive GUAMP (AD-GUAMP) algorithm.
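As a concrete illustration, the GLM described above (linear mixing followed by a componentwise nonlinearity) can be sketched as follows. The one-bit quantizer, dimensions, sparsity level, and noise level are illustrative assumptions for the one-bit radar example, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: signal length, number of measurements, nonzeros.
n, m, sparsity = 256, 128, 16

# Sparse unknown signal x.
x = np.zeros(n)
support = rng.choice(n, sparsity, replace=False)
x[support] = rng.standard_normal(sparsity)

# i.i.d. Gaussian (sub-Gaussian) measurement matrix: the regime where GAMP works well.
A = rng.standard_normal((m, n)) / np.sqrt(m)

z = A @ x                            # linear transformation
noise = 0.01 * rng.standard_normal(m)
y = np.sign(z + noise)               # componentwise nonlinearity (one-bit quantization)

print(y.shape)
```

Recovering x from the one-bit measurements y and the matrix A is exactly the GLM inference problem that GAMP, GVAMP, GUAMP, and the proposed AD-GUAMP address.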
AD-GUAMP incorporates stepsize selection rules into the approximate message passing (AMP) and GAMP modules of GUAMP, enabling both modules to converge to their stationary points and achieve improved performance. The details of AD-GUAMP are as follows. An objective function is designed for each of the two modules. The stepsize is increased as long as the objective function value keeps increasing, which indicates that the AMP and GAMP modules are performing well; a larger stepsize accelerates convergence. Otherwise, the stepsize is decreased, slowing the GUAMP iteration to restore stability. [Results] Extensive numerical experiments demonstrate the effectiveness of AD-GUAMP. The results reveal that the performance of AD-GUAMP is comparable to that of GVAMP and better than that of AD-GAMP and GUAMP for low-rank or ill-conditioned measurement matrices. For correlated measurement matrices, AD-GUAMP outperforms AD-GAMP, GUAMP, and GVAMP. [Conclusions] The adaptive stepsize selection rules improve the performance of AD-GUAMP, so it can be applied in more challenging measurement matrix scenarios than AD-GAMP, GUAMP, and GVAMP.
Keywords: generalized linear model; compressed sensing; unitary approximate message passing; adaptive algorithm
Issue Date: 27 March 2024