Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence 2017
DOI: 10.24963/ijcai.2017/74

The Mixing of Markov Chains on Linear Extensions in Practice

Abstract: We investigate almost uniform sampling from the set of linear extensions of a given partial order. The most efficient schemes stem from Markov chains whose mixing time bounds are polynomial, yet impractically large. We show that, on instances one encounters in practice, the actual mixing times can be much smaller than the worst-case bounds, and particularly so for a novel Markov chain we put forward. We circumvent the inherent hardness of estimating standard mixing times by introducing a refined notion, which …

Cited by 5 publications (5 citation statements)
References 14 publications
“…Telescopic Product. The basic scheme due to Brightwell and Winkler (1991), revisited by Talvitie et al. (2017).…”
Section: Methods
confidence: 99%
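The telescopic-product scheme referred to in this citation statement can be sketched as follows (the notation here is ours, not taken from the cited papers). Starting from P_0 = P, each step adds one ordering constraint, ending at a total order P_k with exactly one linear extension:

```latex
% P_0 = P; each P_{i+1} adds one ordering constraint to P_i,
% ending at a total order P_k with \ell(P_k) = 1. The product
% telescopes to \ell(P_0)/\ell(P_k) = \ell(P).
\ell(P) \;=\; \prod_{i=0}^{k-1} \frac{\ell(P_i)}{\ell(P_{i+1})}
% Each ratio \ell(P_{i+1})/\ell(P_i) is the probability that a uniform
% random linear extension of P_i satisfies the added constraint, and is
% estimated by sampling linear extensions of P_i.
```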
“…If the elements are incomparable, then we use the Monte Carlo method with a sufficient number of samples to find out which order between the elements is more probable in the linear extensions of P_i, and add P_i augmented with that ordering constraint to the end of the sequence, i.e., (a_{i+1}, b_{i+1}) is either (x, y) or (y, x). An analysis shows that in total O(ε^-2 n^2 log^2 n log δ^-1) samples from the uniform distribution of linear extensions (of the varying posets) suffice for an (ε, δ)-approximation of ℓ(P) (Talvitie, Niinimäki, and Koivisto 2017); if the samples are from an almost uniform distribution, then the bound becomes slightly larger. Using Huber's (2006) perfect sampler, which generates a random linear extension in O(n^3 log n) expected time, the whole algorithm runs in O(ε^-2 n^5 log^3 n log δ^-1) expected time.…”
Section: Markov Chain Monte Carlo
confidence: 99%
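The sampling subroutine in the passage above needs (almost) uniform random linear extensions. A minimal sketch, not the cited papers' implementation, of the standard adjacent-transposition chain over linear extensions:

```python
import random
from collections import defaultdict, deque

def topological_sort(n, precedes):
    """Return one valid linear extension of the poset on 0..n-1,
    where `precedes` is a set of pairs (a, b) meaning a before b."""
    succ = defaultdict(list)
    indeg = [0] * n
    for a, b in precedes:
        succ[a].append(b)
        indeg[b] += 1
    queue = deque(x for x in range(n) if indeg[x] == 0)
    order = []
    while queue:
        x = queue.popleft()
        order.append(x)
        for y in succ[x]:
            indeg[y] -= 1
            if indeg[y] == 0:
                queue.append(y)
    return order

def random_linear_extension(n, precedes, steps=10_000, rng=None):
    """Approximately uniform linear extension via the lazy
    adjacent-transposition Markov chain: repeatedly pick a random
    adjacent pair and swap it when the swap keeps the order valid.
    After enough steps the state is close to uniform; how many steps
    suffice in practice is exactly the question the paper studies."""
    rng = rng or random.Random()
    order = topological_sort(n, precedes)  # any valid starting state
    for _ in range(steps):
        if rng.random() < 0.5:  # laziness, for aperiodicity
            continue
        i = rng.randrange(n - 1)
        a, b = order[i], order[i + 1]
        # A swap of adjacent elements can only violate a direct
        # constraint: any transitive a < b would force an element
        # between them, impossible when they are adjacent.
        if (a, b) not in precedes:
            order[i], order[i + 1] = b, a
    return order
```

The chain is symmetric (each allowed swap has the same probability in both directions), so its stationary distribution is uniform over all linear extensions.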
See 2 more Smart Citations
“…In contrast, exact counting of the completions of a partial order is #P-complete (Brightwell and Winkler 1991). While the worst-case mixing times of the Markov chain are practically prohibitive, recent empirical results have shown that the MCMC approach can be made to scale well in practice (Talvitie, Niinimäki, and Koivisto 2017), and techniques for improving the mixing time are an active area of research (Talvitie et al. 2018a; 2018b).…”
Section: Introduction
confidence: 99%
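To make the contrast with #P-complete exact counting concrete: an exact count is easy to write down but exponential in the worst case. A brute-force dynamic program over downsets encoded as bitmasks (our illustration, not an algorithm from the cited works):

```python
from functools import lru_cache

def count_linear_extensions(n, precedes):
    """Exactly count linear extensions of a poset on 0..n-1 by dynamic
    programming over downsets (bitmasks of already-placed elements).
    Runs in O(2^n * n) time, reflecting the #P-hardness of the problem."""
    # preds[b] = bitmask of elements that must precede b
    preds = [0] * n
    for a, b in precedes:
        preds[b] |= 1 << a

    @lru_cache(maxsize=None)
    def count(placed):
        if placed == (1 << n) - 1:
            return 1
        total = 0
        for x in range(n):
            # x can be placed next if it is unplaced and all its
            # predecessors are already placed.
            if not placed & (1 << x) and preds[x] & ~placed == 0:
                total += count(placed | (1 << x))
        return total

    return count(0)
```

For example, an antichain on n elements has n! linear extensions, while a chain has exactly one.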