The process in which a signal/parameter undergoes a linear transformation followed by a componentwise nonlinear transformation to produce measurements can be abstracted as a generalized linear model (GLM). The generalized approximate message passing (GAMP) algorithm is a Bayesian method for handling GLMs: by introducing a sparse prior distribution on the signal, it combines the likelihood function with the prior to obtain the posterior mean and posterior variance. However, when the elements of the measurement matrix do not follow a sub-Gaussian distribution, the performance of GAMP degrades sharply. By means of singular value decomposition, the generalized unitary approximate message passing (GUAMP) algorithm removes the correlation of the measurement matrix and exhibits stronger robustness across various measurement matrices, including correlated ones. Nevertheless, even after a sufficient number of iterations, the signal reconstruction error of GUAMP oscillates around its stationary point, and as the correlation of the measurement matrix increases, the performance of GUAMP begins to deteriorate. To further improve the robustness and accuracy of GUAMP, this paper proposes the adaptive GUAMP (AD-GUAMP) algorithm. By constructing objective functions and adaptively selecting suitable stepsizes, AD-GUAMP enables GUAMP to converge to its stationary point and thereby achieve better performance. Extensive numerical simulation results verify the effectiveness of AD-GUAMP.
[Objective] A model in which an unknown signal/parameter undergoes a linear transformation followed by a componentwise nonlinear transformation is known as a generalized linear model (GLM). Estimating an unknown signal/parameter from nonlinear measurements is a fundamental problem in the radar and communication fields, with applications such as one-bit radar, one-bit multiple-input multiple-output communication, and phase retrieval. The generalized approximate message passing (GAMP) algorithm is an efficient Bayesian inference technique for GLMs. GAMP has low computational complexity, excellent reconstruction performance, and the ability to automatically estimate the noise variance and nuisance parameters. However, when the elements of the measurement matrix deviate from the sub-Gaussian distribution, the performance of GAMP degrades considerably. To address this issue, the generalized vector approximate message passing (GVAMP) algorithm was proposed; it employs a vector factor graph representation and expectation propagation to achieve good performance across a broader ensemble of measurement matrices. Moreover, the generalized unitary approximate message passing (GUAMP) algorithm, which applies the singular value decomposition to eliminate correlation within the measurement matrix, has been introduced. GUAMP is more robust than GAMP and GVAMP, particularly for correlated measurement matrices. However, the signal estimation error of GUAMP may still fluctuate even after a sufficient number of iterations. In addition, once the correlation of the measurement matrix exceeds a threshold, the performance of GUAMP deteriorates relative to the adaptive GAMP (AD-GAMP) algorithm. A method that further enhances the robustness and performance of GUAMP is therefore needed. [Methods] This paper proposes an adaptive GUAMP (AD-GUAMP) algorithm.
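As a concrete illustration of the GLM setting described above, the following sketch generates one-bit measurements of a sparse signal: a linear transform by a measurement matrix followed by a componentwise sign nonlinearity. The dimensions, sparsity rate, and noise level are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions: n-dimensional sparse signal, m nonlinear measurements.
m, n = 200, 100

# Sparse signal x: Bernoulli-Gaussian, roughly 10% nonzero entries.
x = rng.standard_normal(n) * (rng.random(n) < 0.1)

# i.i.d. Gaussian measurement matrix (the sub-Gaussian case where GAMP works well).
A = rng.standard_normal((m, n)) / np.sqrt(m)

z = A @ x                          # linear transformation
w = 0.01 * rng.standard_normal(m)  # additive noise
y = np.sign(z + w)                 # componentwise nonlinearity (one-bit quantization)
```

Replacing the i.i.d. matrix `A` with a correlated or ill-conditioned one yields the harder scenarios for which GUAMP and AD-GUAMP are designed.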
AD-GUAMP incorporates stepsize selection rules into the approximate message passing (AMP) and GAMP modules of GUAMP, enabling both modules to converge to their stationary points and achieve improved performance. Objective functions are designed for the two modules. The stepsize is increased as long as the objective function value keeps increasing, which indicates that the AMP and GAMP modules are performing well; a larger stepsize then accelerates convergence. Otherwise, the stepsize is decreased, slowing the iteration so that GUAMP can converge. [Results] Extensive numerical experiments demonstrate the effectiveness of AD-GUAMP. With a low-rank or ill-conditioned measurement matrix, the performance of AD-GUAMP is comparable to that of GVAMP and better than those of AD-GAMP and GUAMP. For correlated measurement matrices, AD-GUAMP outperforms AD-GAMP, GUAMP, and GVAMP. [Conclusions] The adaptive stepsize selection rules improve the performance of AD-GUAMP, so it can be applied in more challenging measurement matrix scenarios than AD-GAMP, GUAMP, and GVAMP.
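The adaptive stepsize rule described above can be sketched as a simple update function. This is a hypothetical illustration of the grow/shrink logic only; the function name, growth and shrink factors, and bounds are assumptions, and the actual objective functions and update schedule are those defined in the paper.

```python
def adapt_stepsize(step, obj_new, obj_old,
                   grow=1.1, shrink=0.5,
                   step_min=1e-3, step_max=1.0):
    """Hypothetical stepsize update: grow the stepsize while the objective
    function keeps increasing (the module is performing well, so a larger
    step accelerates convergence); otherwise shrink it to slow the
    iteration so the algorithm can settle at a stationary point."""
    if obj_new > obj_old:
        return min(step * grow, step_max)
    return max(step * shrink, step_min)
```

For example, a module whose objective rose on the last iteration would have its stepsize scaled up by `grow` (capped at `step_max`), while a drop in the objective would halve the stepsize (floored at `step_min`).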