Journal of Tsinghua University (Science and Technology), 2024, Vol. 64, Issue (4): 700-711    DOI: 10.16511/j.cnki.qhdxxb.2024.22.002
Information Science and Technology
Adaptive damping for a generalized unitary approximate message passing algorithm
LEI Xupeng1, YANG Jian2, XU Menghuai1, ZHU Jiang1, GONG Min2
1. Ocean College, Zhejiang University, Zhoushan 316021, China;
2. China Academy of Launch Vehicle Technology, Beijing 100076, China
Abstract: The process in which a signal/parameter undergoes a linear transformation and then a componentwise nonlinear transformation to produce the measurements can be abstracted as a generalized linear model (GLM). The generalized approximate message passing (GAMP) algorithm is a Bayesian method for GLMs: by introducing a sparse prior distribution on the signal, it combines the likelihood function and the prior to obtain the posterior mean and posterior variance. However, when the elements of the measurement matrix do not follow a sub-Gaussian distribution, the performance of GAMP degrades sharply. By means of the singular value decomposition, the generalized unitary approximate message passing (GUAMP) algorithm removes the correlation of the measurement matrix and is more robust over a variety of measurement matrices, including correlated ones. However, even after a sufficient number of iterations, the signal reconstruction error of GUAMP oscillates around its equilibrium point, and as the correlation of the measurement matrix increases, the performance of GUAMP begins to deteriorate. To further improve the robustness and accuracy of GUAMP, this paper proposes the adaptive GUAMP (AD-GUAMP) algorithm. By constructing an objective function and adaptively choosing a suitable stepsize, AD-GUAMP enables GUAMP to converge to its equilibrium point and thus achieve better performance. Extensive numerical simulation results verify the effectiveness of AD-GUAMP.
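The measurement model described above can be illustrated with a minimal simulation sketch. The dimensions, sparsity level, noise level, and the choice of one-bit quantization as the componentwise nonlinearity are illustrative assumptions, not values taken from the paper.

import numpy as np

# Minimal GLM sketch: a sparse signal passes through a linear map and then a
# componentwise nonlinearity (here, one-bit quantization) to give the measurements.
rng = np.random.default_rng(0)
n, m, k = 256, 128, 16                                    # signal length, measurements, nonzeros (assumed)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)   # sparse signal
A = rng.standard_normal((m, n)) / np.sqrt(m)              # i.i.d. Gaussian measurement matrix
w = 0.01 * rng.standard_normal(m)                         # additive noise
y = np.sign(A @ x + w)                                    # componentwise nonlinear measurement (one-bit GLM)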
Abstract: [Objective] A model in which an unknown signal/parameter undergoes a linear transformation followed by a componentwise nonlinear transformation is known as a generalized linear model (GLM). Estimating an unknown signal/parameter from nonlinear measurements is a fundamental problem in the radar and communication fields, with applications such as one-bit radar, one-bit multiple-input multiple-output communication, and phase retrieval. The generalized approximate message passing (GAMP) algorithm is an efficient Bayesian inference technique for GLMs; it has low computational complexity, excellent reconstruction performance, and the ability to automatically estimate the noise variance and nuisance parameters. However, when the elements of the measurement matrix deviate from a sub-Gaussian distribution, the performance of GAMP degrades considerably. To address this issue, the generalized vector approximate message passing (GVAMP) algorithm was proposed; it employs a vector factor graph representation and expectation propagation to achieve good performance over a broader ensemble of measurement matrices. Moreover, the generalized unitary approximate message passing (GUAMP) algorithm, which uses the singular value decomposition to eliminate correlation within the measurement matrix, has been introduced. GUAMP is more robust than GAMP and GVAMP, particularly in scenarios with correlated measurement matrices. However, the signal estimation error of GUAMP may still fluctuate even after a sufficient number of iterations. In addition, once the correlation of the measurement matrix exceeds a threshold, GUAMP performs worse than the adaptive GAMP (AD-GAMP) algorithm. A method that further enhances the robustness and performance of GUAMP is therefore needed.
[Methods] This paper proposes an adaptive GUAMP (AD-GUAMP) algorithm. AD-GUAMP incorporates stepsize selection rules into the approximate message passing (AMP) and GAMP modules of GUAMP, enabling both modules to converge to their stationary points and achieve improved performance. The details of AD-GUAMP are described, and the objective functions designed for the two modules are introduced. The stepsize is increased as long as the objective function value keeps increasing, which indicates that the AMP and GAMP modules are performing well, and a larger stepsize accelerates convergence. Otherwise, the stepsize is decreased, slowing down GUAMP so that it can converge.
[Results] Extensive numerical experiments demonstrate the effectiveness of AD-GUAMP. For low-rank or ill-conditioned measurement matrices, the performance of AD-GUAMP is comparable to that of GVAMP and better than that of AD-GAMP and GUAMP. For correlated measurement matrices, AD-GUAMP outperforms AD-GAMP, GUAMP, and GVAMP.
[Conclusions] The adaptive stepsize selection rules improve the performance of AD-GUAMP. Consequently, AD-GUAMP can be applied to more challenging measurement matrix scenarios than AD-GAMP, GUAMP, and GVAMP.
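The stepsize rule sketched in the [Methods] paragraph can be illustrated as follows. This is a minimal sketch of an adaptive stepsize update assuming a generic per-module objective; the growth/shrink factors, bounds, and function names are illustrative and are not the exact rule or objective used in the paper.

def adapt_stepsize(step, obj_new, obj_old,
                   grow=1.1, shrink=0.5, step_min=1e-3, step_max=1.0):
    # If the module objective kept increasing, the current update is accepted and
    # the stepsize is enlarged to speed up convergence; otherwise the stepsize is
    # reduced and the iteration is repeated with heavier damping.
    # All constants here are assumptions, not values from the paper.
    if obj_new >= obj_old:
        return min(step * grow, step_max), True       # accept update, grow stepsize
    return max(step * shrink, step_min), False        # reject update, shrink stepsize

# Hypothetical use inside one AMP/GAMP module iteration:
#   x_damped = step * x_candidate + (1 - step) * x_prev     # damped update
#   obj_new = module_objective(x_damped)                     # module-specific objective
#   step, accepted = adapt_stepsize(step, obj_new, obj_old)
#   if accepted: x_prev, obj_old = x_damped, obj_new         # otherwise redo with the smaller stepsize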
Key words: generalized linear model; compressed sensing; unitary approximate message passing; adaptive algorithm
Received: 2023-07-21      Published: 2024-03-27
Supported by the National Natural Science Foundation of China (62371420, 61901415) and the Natural Science Foundation of Zhejiang Province (LY22F010009)
Corresponding author: ZHU Jiang, associate professor, E-mail: jiangzhu16@zju.edu.cn
About the author: LEI Xupeng (born in 2000), male, master's degree candidate.
Cite this article:
LEI Xupeng, YANG Jian, XU Menghuai, ZHU Jiang, GONG Min. Adaptive damping for a generalized unitary approximate message passing algorithm. Journal of Tsinghua University(Science and Technology), 2024, 64(4): 700-711.
Link to this article:
http://jst.tsinghuajournals.com/CN/10.16511/j.cnki.qhdxxb.2024.22.002  or  http://jst.tsinghuajournals.com/CN/Y2024/V64/I4/700
[1] XIONG Y Z, WEI N, ZHANG Z P. A low-complexity iterative GAMP-based detection for massive MIMO with low-resolution ADCs[C]//Proceedings of 2017 IEEE Wireless Communications and Networking Conference (WCNC). San Francisco, USA, 2017: 1-6.
[2] RANGAN S. Generalized approximate message passing for estimation with random linear mixing[C]//Proceedings of 2011 IEEE International Symposium on Information Theory Proceedings. Saint Petersburg, Russia, 2011: 2168-2172.
[3] TIPPING M E. Sparse Bayesian learning and the relevance vector machine[J]. The Journal of Machine Learning Research, 2001, 1: 211-244.
[4] WIPF D P, RAO B D. Sparse Bayesian learning for basis selection[J]. IEEE Transactions on Signal Processing, 2004, 52(8): 2153-2164.
[5] RANGAN S, SCHNITER P, FLETCHER A K, et al. On the convergence of approximate message passing with arbitrary matrices[J]. IEEE Transactions on Information Theory, 2019, 65(9): 5339-5351.
[6] SCHNITER P, RANGAN S. Compressive phase retrieval via generalized approximate message passing[J]. IEEE Transactions on Signal Processing, 2015, 63(4): 1043-1055.
[7] VILA J, SCHNITER P, RANGAN S, et al. Adaptive damping and mean removal for the generalized approximate message passing algorithm[C]//Proceedings of 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). South Brisbane, Australia, 2015: 2021-2025.
[8] SCHNITER P, RANGAN S, FLETCHER A K. Vector approximate message passing for the generalized linear model[C]//Proceedings of the 50th Asilomar Conference on Signals, Systems and Computers. Pacific Grove, USA, 2016: 1525-1529.
[9] RANGAN S, SCHNITER P, FLETCHER A K. Vector approximate message passing[J]. IEEE Transactions on Information Theory, 2019, 65(10): 6664-6684.
[10] RUAN C Y, ZHANG Z C, JIANG H, et al. Vector approximate message passing with sparse Bayesian learning for Gaussian mixture prior[J]. China Communications, 2023, 20(5): 57-69.
[11] MA J J, PING L. Orthogonal AMP[J]. IEEE Access, 2017, 5: 2020-2033.
[12] GUO Q H, XI J T. Approximate message passing with unitary transformation[Z/OL]. (2015-04-19)[2023-06-27]. https://doi.org/10.48550/arXiv.1504.04799.
[13] LUO M, GUO Q H, JIN M, et al. Unitary approximate message passing for sparse Bayesian learning[J]. IEEE Transactions on Signal Processing, 2021, 69: 6023-6039.
[14] YUAN Z D, GUO Q H, LUO M. Approximate message passing with unitary transformation for robust bilinear recovery[J]. IEEE Transactions on Signal Processing, 2021, 69: 617-630.
[15] ZHU J, MENG X M, LEI X P, et al. A unitary transform based generalized approximate message passing[C]//Proceedings of 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Rhodes Island, Greece, 2023: 1-5.
[16] MENG X M, WU S, ZHU J. A unified Bayesian inference framework for generalized linear models[J]. IEEE Signal Processing Letters, 2018, 25(3): 398-402.
[17] ZHU J. A comment on the “A unified Bayesian inference framework for generalized linear models”[Z/OL]. (2019-04-09)[2023-07-14]. https://doi.org/10.48550/arXiv.1904.04485.
[18] MINKA T P. A family of algorithms for approximate Bayesian inference[D]. Cambridge, USA: Massachusetts Institute of Technology, 2001.
[19] SEEGER M. Expectation propagation for exponential families[R/OL]. 2005. https://core.ac.uk/download/pdf/147968132.pdf.
[20] SHIU D S, FOSCHINI G J, GANS M J, et al. Fading correlation and its effect on the capacity of multielement antenna systems[J]. IEEE Transactions on Communications, 2000, 48(3): 502-513.