2023
DOI: 10.1109/twc.2022.3221057
Annealed Langevin Dynamics for Massive MIMO Detection

Abstract: We propose a solution for linear inverse problems based on higher-order Langevin diffusion. More precisely, we propose pre-conditioned second-order and third-order Langevin dynamics that provably sample from the posterior distribution of our unknown variables of interest while being computationally more efficient than their first-order counterpart and the nonconditioned versions of both dynamics. Moreover, we prove that both pre-conditioned dynamics are well-defined and have the same unique invariant distribut…

Cited by 17 publications (9 citation statements)
References 64 publications
“…After that, it follows the score function of the joint posterior density of the perturbed variables, starting with high σ_{1,X} and σ_{1,H} and progressively reducing to σ_{L,X} ≈ σ_{L,H} ≈ 0. At early noise levels, the likelihood term directs the dynamics toward an estimate driven mainly by the measurements, while at later noise levels the prior refines the estimate, as explained further in [13]. The benefits of annealing are threefold: it enables training the score network via score matching, it improves the mixing of the dynamics, and it allows a discrete-to-continuous approximation of the variables.…”
Section: Joint Diffusion Posterior Sampling (mentioning; confidence: 99%)
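The annealing loop described in this statement can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the score function `score_fn`, the geometric σ schedule, and the per-level step-size rule are all assumptions introduced for the example.

```python
import numpy as np

def annealed_langevin(score_fn, x0, sigmas, steps_per_level=20, eps=2e-5, seed=0):
    """Annealed Langevin dynamics: run Langevin updates at a decreasing
    sequence of noise levels sigmas[0] > ... > sigmas[-1] ~ 0, so early
    levels explore broadly and later levels refine the estimate."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for sigma in sigmas:
        # Step size shrinks with the noise level (a common heuristic choice).
        alpha = eps * (sigma / sigmas[-1]) ** 2
        for _ in range(steps_per_level):
            z = rng.standard_normal(x.shape)
            x = x + 0.5 * alpha * score_fn(x, sigma) + np.sqrt(alpha) * z
    return x

# Toy usage with a hypothetical Gaussian score pulling toward mu = 3:
# score_fn = lambda x, s: (3.0 - x) / (s**2 + 0.1)
```

In a detection setting, `score_fn` would combine the learned prior score with the likelihood term; here a closed-form Gaussian score stands in for it.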
“…Intuitively, the approximation in (10) assumes that the annealing noise and the measurement noise are independent. Although this approximation yields a worse symbol error rate (SER) than the one introduced in [13], it is accurate enough while substantially reducing the computational burden of the algorithm.…”
Section: Joint Diffusion Posterior Sampling (mentioning; confidence: 99%)
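One common way to realize the independence assumption mentioned above is to let the two Gaussian variances add in the likelihood score. The sketch below is an illustrative form, not the paper's exact expression: the function name, arguments, and the isotropic effective variance are all assumptions.

```python
import numpy as np

def likelihood_score(x_t, y, A, sigma_meas, sigma_t):
    """Approximate score of p(y | x_t) for y = A x + n, treating the
    annealing perturbation (variance sigma_t^2) and the measurement noise
    (variance sigma_meas^2) as independent Gaussians, so their variances add."""
    eff_var = sigma_meas**2 + sigma_t**2
    residual = y - A @ x_t
    return (A.T @ residual) / eff_var
```

This is the gradient of a Gaussian log-likelihood with the inflated variance, which is what makes the early (high-σ) dynamics only loosely tied to the measurements.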
“…Algorithm unfolding decouples the update steps of an iterative algorithm to create a cascade of hybrid layers that preserve the original update structure while introducing one or more parameters learnable from data. This form of domain-inspired learning has been extremely popular and effective in several application areas, including but not limited to non-negative matrix factorization [29], iterative soft thresholding [30], semantic image segmentation [31], blind deblurring [32], clutter suppression [33], particle filtering [34], symbol detection [35], link scheduling [36], energy-aware power allocation [37], and beamforming in wireless networks [21]–[23]. These methods use neural layers to learn one or more parameters of the iterative algorithm being unfolded, or to approximate certain computational steps in order to reduce complexity and speed up processing.…”
Section: Introduction (mentioning; confidence: 99%)
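As a concrete instance of the unfolding idea, the iterative soft-thresholding algorithm can be unrolled into a fixed number of layers, each carrying its own threshold that would be learned from data. The sketch below keeps the per-layer thresholds as plain inputs; the function names and the shared step size are illustrative assumptions, not any cited paper's architecture.

```python
import numpy as np

def soft_threshold(v, theta):
    """Proximal operator of the l1 norm (element-wise soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unfolded_ista(y, A, thetas, step=None):
    """K iterations of ISTA for min ||y - Ax||^2 + lambda ||x||_1, 'unfolded'
    into K layers, each with its own (in practice learnable) threshold
    thetas[k]. The update structure of plain ISTA is preserved exactly."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for theta in thetas:  # one "layer" per unrolled iteration
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x
```

In a learned variant, `thetas` (and possibly per-layer matrices replacing `A.T`) would be trained end to end on example pairs, which is exactly the decoupling the quoted passage describes.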