2020
DOI: 10.1109/tsp.2020.2983150

Majorize–Minimize Adapted Metropolis–Hastings Algorithm

Abstract: The dimension and the complexity of inference problems have dramatically increased in statistical signal processing. It thus becomes mandatory to design improved proposal schemes in Metropolis-Hastings algorithms, providing large proposal transitions that are accepted with high probability. The proposal density should ideally provide an accurate approximation to the target density with a low computational cost. In this paper, we derive a novel Metropolis-Hastings proposal, inspired by Langevin dynamics, wher…

Cited by 19 publications (14 citation statements) | References 60 publications (94 reference statements)

“…In [44], the authors propose TunaMH, a method based on MH (as its name suggests) that exposes a tunable, theoretically guaranteed trade-off between batch size and convergence rate. Motivated by the significant increase in the dimension and complexity of inference problems, the authors of [45] propose a novel MH proposal providing large transitions that are accepted with high probability. The authors of [46] address the incipient fault detection problem, which is solved by a proposed two-step technique.…”
Section: Literature Review
confidence: 99%
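To make the trade-off mentioned in this excerpt concrete, here is a minimal sketch of a plain random-walk Metropolis-Hastings step in Python. The function names, the Gaussian toy target, and the step sizes are illustrative assumptions, not taken from the cited works; the point is only that larger proposal moves tend to be accepted less often, which is the tension that improved proposals such as those of [44,45] aim to resolve.

```python
# Minimal random-walk Metropolis-Hastings sketch (illustrative assumptions only).
import numpy as np

def rw_mh_step(x, log_target, step, rng):
    """One random-walk MH step; larger `step` gives bigger moves but
    typically a lower acceptance probability."""
    proposal = x + step * rng.standard_normal(x.shape)
    # Symmetric proposal: the acceptance ratio reduces to the ratio of target densities.
    log_alpha = log_target(proposal) - log_target(x)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True
    return x, False

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    log_target = lambda x: -0.5 * np.sum(x ** 2)   # standard Gaussian, for illustration
    x = np.zeros(10)
    for step in (0.1, 1.0, 3.0):
        accepted = 0
        for _ in range(2000):
            x, ok = rw_mh_step(x, log_target, step, rng)
            accepted += ok
        print(f"step={step:.1f}  acceptance rate ~ {accepted / 2000:.2f}")
```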
“…where $A(\mu_n^{(t+1)})$ is an SPD (symmetric positive definite) matrix of $\mathbb{R}^{d_x \times d_x}$. The scaled gradient term in (7) can be understood as a discretization of (5) under the assumption of a locally constant curvature [19]. As in the ULA scheme (6), the covariance matrix of the proposal density is adapted according to…”
Section: Proposed Algorithm
confidence: 99%
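The excerpt above describes a proposal whose drift is the gradient scaled by a state-dependent SPD matrix and whose covariance is adapted in the same metric. Below is a hedged sketch of drawing one such proposal, assuming the generic form $y \sim \mathcal{N}\big(x + \gamma A(x)\nabla\log\pi(x),\, 2\gamma A(x)\big)$; the names `grad_log_target` and `A` are placeholders, and this is not the specific construction of [19].

```python
# Hedged sketch of a preconditioned Langevin-type proposal draw.
import numpy as np

def preconditioned_proposal(x, grad_log_target, A, gamma, rng):
    """Draw y ~ N(x + gamma * A(x) grad log pi(x), 2 * gamma * A(x))."""
    Ax = A(x)                                    # SPD scaling matrix at the current state
    mean = x + gamma * Ax @ grad_log_target(x)   # scaled-gradient drift
    cov = 2.0 * gamma * Ax                       # proposal covariance adapted to the same metric
    L = np.linalg.cholesky(cov)
    return mean + L @ rng.standard_normal(x.shape)
```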
“…The advantage is that a sample arising from the proposal is more likely to be drawn from a region of high probability, which benefits the acceptance rate. The performance of MALA can be improved by introducing into the drift term a scaling matrix that depends on the current sample value, so as to adapt the proposal to the local structure of the target density [17,18,19]. Several strategies have been investigated for constructing the scaling matrix in MALA, relying on second-order information [20,18], the Fisher metric [17], or a majorize-minimize strategy [19].…”
Section: Introduction
confidence: 99%
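As a complement to this excerpt, here is a hedged sketch of one MALA step with a constant SPD preconditioning matrix in the drift. A fixed matrix keeps the Metropolis correction simple; the state-dependent constructions cited above [17,18,19] involve additional terms in the proposal density that are not reproduced here. All names (`log_target`, `grad_log_target`, `A`) are placeholders.

```python
# Hedged sketch of preconditioned MALA with a constant SPD matrix A.
import numpy as np

def mala_step(x, log_target, grad_log_target, A, gamma, rng):
    """One MALA step: Langevin proposal with drift gamma*A*grad log pi(x),
    covariance 2*gamma*A, followed by the Metropolis-Hastings correction."""
    L = np.linalg.cholesky(2.0 * gamma * A)
    cov_inv = np.linalg.inv(2.0 * gamma * A)

    def log_q(y, x_from):
        # Log-density of N(x_from + gamma*A*grad log pi(x_from), 2*gamma*A), up to a constant
        # (the constant cancels in the acceptance ratio since A is fixed).
        d = y - (x_from + gamma * A @ grad_log_target(x_from))
        return -0.5 * d @ cov_inv @ d

    y = x + gamma * A @ grad_log_target(x) + L @ rng.standard_normal(x.shape)
    log_alpha = (log_target(y) + log_q(x, y)) - (log_target(x) + log_q(y, x))
    if np.log(rng.uniform()) < log_alpha:
        return y, True
    return x, False
```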
“…Let $\gamma > 0$ and let $Q \in \mathcal{S}_n$ be a preconditioning matrix used to accelerate the sampler [22]. Following [15], $\pi(x)$ is approximated by…”
Section: Preconditioned P-ULA: Notation
confidence: 99%
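For reference, below is a hedged sketch of an unadjusted preconditioned Langevin (P-ULA-type) update, assuming the common form $x_{k+1} = x_k + \gamma Q \nabla\log\pi(x_k) + \sqrt{2\gamma}\, Q^{1/2}\xi_k$ with $Q$ symmetric positive definite. The specific approximation of $\pi(x)$ used in [15] is not reproduced, and all names are placeholders.

```python
# Hedged sketch of an unadjusted preconditioned Langevin update (no MH correction).
import numpy as np

def pula_step(x, grad_log_target, Q_sqrt, gamma, rng):
    """One preconditioned ULA step. `Q_sqrt` is a matrix square root
    (e.g. Cholesky factor) of the SPD preconditioner Q."""
    Q = Q_sqrt @ Q_sqrt.T
    drift = gamma * Q @ grad_log_target(x)                          # preconditioned gradient drift
    noise = np.sqrt(2.0 * gamma) * Q_sqrt @ rng.standard_normal(x.shape)  # noise with covariance 2*gamma*Q
    return x + drift + noise
```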