2012
DOI: 10.2139/ssrn.1724203

Efficient Learning via Simulation: A Marginalized Resample-Move Approach

Cited by 26 publications (36 citation statements)
References 36 publications
“…We believe that the TNT algorithm could be adapted to recent SMC algorithms such as [12,13] since they propose advanced SMC samplers based on the IBIS and the E-AIS samplers. Another avenue of research could be an application on change-point stochastic volatility models.…”
Section: Discussion
confidence: 99%
“…By doing so, we come back to the framework with a static parameter space. For non-linear state space models, the recent works [13,14] rely on the particle MCMC framework of [34] for integrating out the state vector. We believe that switching from the tempered domain to the time domain, as well as employing the evolutionary MCMC kernel presented above, could further increase the efficiency of these sophisticated SMC samplers.…”
Section: Adaptation of the Scale
confidence: 99%
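The excerpt above points at the mechanism these samplers share: a particle filter integrates out the latent state, so the top-level SMC sampler only has to track the static parameter. Below is a minimal sketch of that idea, not the authors' algorithm: it assumes an illustrative linear-Gaussian state space model, a hypothetical uniform prior on theta, and toy particle counts, and it omits the resample-move (particle MCMC) rejuvenation step that the actual schemes perform when the parameter particles degenerate.

```python
# Sketch (illustrative assumptions throughout): an IBIS-style SMC sampler over a static
# parameter theta, where the likelihood increments p(y_t | y_{1:t-1}, theta) are replaced
# by bootstrap particle-filter estimates that integrate out the latent state.
# Toy model assumed here: x_t = theta * x_{t-1} + v_t,  y_t = x_t + w_t,  v_t, w_t ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)

def simulate_data(theta=0.8, T=50):
    x, ys = 0.0, []
    for _ in range(T):
        x = theta * x + rng.normal()
        ys.append(x + rng.normal())
    return np.array(ys)

def pf_update(theta, xs, y):
    """One bootstrap-filter step: propagate state particles, weight by the observation
    density, resample, and return the particle estimate of p(y_t | y_{1:t-1}, theta)."""
    xs = theta * xs + rng.normal(size=xs.shape)             # propagate latent state
    logw = -0.5 * (y - xs) ** 2 - 0.5 * np.log(2 * np.pi)   # N(y | x, 1) log-density
    inc = np.exp(logw).mean()                               # likelihood-increment estimate
    w = np.exp(logw - logw.max()); w /= w.sum()
    xs = xs[rng.choice(len(xs), size=len(xs), p=w)]         # multinomial resampling
    return xs, inc

def marginalized_ibis(ys, n_theta=200, n_state=100):
    """Reweight parameter particles sequentially using particle-filter likelihood estimates."""
    thetas = rng.uniform(-1.0, 1.0, n_theta)                 # draws from an assumed prior
    W = np.full(n_theta, 1.0 / n_theta)                      # parameter-particle weights
    state = [rng.normal(size=n_state) for _ in range(n_theta)]  # one state filter per theta
    for y in ys:
        incs = np.empty(n_theta)
        for i, th in enumerate(thetas):
            state[i], incs[i] = pf_update(th, state[i], y)
        W *= incs
        W /= W.sum()                                         # IBIS reweighting step
        # A full resample-move scheme would monitor the effective sample size here and,
        # when it drops, resample the theta-particles and move them with a PMCMC kernel.
    return thetas, W

ys = simulate_data()
thetas, W = marginalized_ibis(ys)
print("posterior mean of theta ~", np.sum(W * thetas))
```

The design point the sketch tries to convey is that the exact, intractable likelihood increment is swapped for its particle-filter estimate, while the weighted system of parameter particles continues to target the parameter posterior sequentially in time.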
“…The reader is referred to Chopin et al. (2013) for further details including a formal justification (see also Fulop and Li (2013) for a related algorithm and Jacob (2015) for a recent discussion). Recall the target posterior at time t, p(c|y_{1:t}), given by (3).…”
Section: SMC² Scheme
confidence: 99%
“…We also propose various methodological extensions. Related ideas have been proposed more recently in scenarios where a static parameter, θ, is the object inferred by the top-level algorithm, rather than the Markov process {X_n}_{n≥1} [Chopin et al., 2011, Fulop and Li, 2011].…”
Section: Introduction
confidence: 99%