2019
DOI: 10.1137/18m121527x

Multiway Monte Carlo Method for Linear Systems

Abstract: We study the Monte Carlo method for solving a linear system of the form x = Hx + b. A sufficient condition for the method to work is ‖H‖ < 1, which greatly limits its usability. We improve this condition by proposing a new multiway Markov random walk, which is a generalization of the standard Markov random walk. Under our new framework we prove that the necessary and sufficient condition for our method to work is the spectral radius ρ(H⁺) < 1, where H⁺ is the entrywise absolute value of H; this is a weaker requirement than ‖H‖ < 1. In addition…
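The setup the abstract generalizes is the classical Ulam–von Neumann scheme: expand x = b + Hb + H²b + … and estimate one entry of the sum with random walks whose transitions are proportional to |H|. Below is a minimal sketch of that standard single-walk estimator, not the paper's multiway walk; the function name and the fixed-length truncation of the walk are illustrative choices, and the estimate is only reliable when ρ(H⁺) < 1, matching the condition in the abstract.

```python
import numpy as np

def mc_solve_component(H, b, i, n_walks=20000, max_steps=50, rng=None):
    """Estimate x[i] for x = Hx + b with the classic single-walk
    Monte Carlo (Neumann-series) estimator.

    Each walk starts at state i, moves with probabilities proportional
    to |H|, carries the signed weight W_k = prod H[s,t] / P[s,t], and
    scores W_k * b[state] at every visited state.
    """
    rng = np.random.default_rng(rng)
    n = H.shape[0]
    absH = np.abs(H)
    row_sums = absH.sum(axis=1)
    # Transition probabilities proportional to |H|; all-zero rows absorb.
    P = np.divide(absH, row_sums[:, None], out=np.zeros_like(absH),
                  where=row_sums[:, None] > 0)
    total = 0.0
    for _ in range(n_walks):
        s, w, score = i, 1.0, b[i]          # k = 0 term of the series
        for _ in range(max_steps):
            if row_sums[s] == 0.0:          # absorbing state: walk ends
                break
            t = rng.choice(n, p=P[s])
            w *= H[s, t] / P[s, t]          # unbiased reweighting
            s = t
            score += w * b[s]               # contributes (H^k b)[i] in expectation
        total += score
    return total / n_walks
```

For a quick check, a small contraction such as H = 0.4·I with b = (1, 1) has exact solution x = (5/3, 5/3), and the estimate converges there as n_walks grows; when ρ(H⁺) ≥ 1 the weights W_k have unbounded variance and the estimator degrades, which is the limitation the paper's multiway walk addresses.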

Cited by 5 publications (3 citation statements)
References 27 publications
“…where N_O is the number of out-of-distribution samples drawn from P_O(x, y). Based on Monte Carlo [32], we apply the stochastic gradient descent optimization algorithm [1] to estimate the gradients of Eq. (13) and Eq.…”
Section: Generic Empirical Risk
confidence: 99%
“…The parameter φ is fixed for learning the parameter θ in f_θ because g_φ is a pretrained network. Based on Monte Carlo [36], we apply the stochastic gradient descent optimization algorithm [37] to estimate the gradient of Eq. (13), where the batch size is B.…”
Section: Learning the Auxiliary Network
confidence: 99%
“…According to the idea of Monte Carlo [47], we apply the stochastic gradient descent (SGD) [43] optimization algorithm to estimate the gradient of the objective function Eq. (15).…”
Section: Confidence Penalty on Out-of-Distribution Samples
confidence: 99%
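All three citing passages use the same pattern: the objective is an expectation over a data distribution, its gradient is estimated by Monte Carlo sampling of a minibatch, and the estimate drives an SGD update. A minimal sketch of that pattern follows; the quadratic loss is a hypothetical stand-in for the cited papers' Eq. (13)/Eq. (15) objectives, which are not reproduced here.

```python
import numpy as np

def sgd_monte_carlo(samples, grad_loss, theta0, lr=0.05, batch=32,
                    steps=500, rng=None):
    """Minimize E[loss(theta; x)] by SGD, where each step's gradient is
    a Monte Carlo estimate: the average of grad_loss over a minibatch."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        idx = rng.integers(0, len(samples), size=batch)  # draw a minibatch
        g = np.mean([grad_loss(theta, samples[j]) for j in idx], axis=0)
        theta -= lr * g                                  # SGD update
    return theta

# Hypothetical objective: loss(theta; x) = 0.5 * ||theta - x||^2,
# so grad_loss = theta - x and the minimizer is the sample mean.
data = np.random.default_rng(0).normal(loc=3.0, scale=1.0, size=(1000, 2))
theta_hat = sgd_monte_carlo(data, lambda th, x: th - x, theta0=np.zeros(2))
print(theta_hat)  # approximately [3, 3]
```

The minibatch average is an unbiased Monte Carlo estimator of the full-data gradient, which is what lets each citing paper quote "Monte Carlo" as the justification for optimizing an expectation with SGD.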