Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence 2022
DOI: 10.24963/ijcai.2022/730
On the Expressivity of Markov Reward (Extended Abstract)

Abstract: Modern SAT solvers are based on a paradigm named conflict driven clause learning (CDCL), while local search is an important alternative. Although there have been attempts to combine these two methods, this work proposes deeper cooperation techniques. First, we relax the CDCL framework by extending promising branches to complete assignments and calling a local search solver to search for a model nearby. More importantly, the local search assignments and the conflict frequency of variables in local search are ex…


Cited by 8 publications (16 citation statements). References 31 publications.
“…For the two policies, the avg. costs are g(1) = R and g(2) = pS + (1 − p)R. Strangely, we must set R > S in order for g(2) < g(1).…”
Section: Motivating Examplesmentioning
confidence: 99%
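The quoted average costs can be sanity-checked numerically. A minimal sketch, using illustrative values for the costs R and S and the probability p (none of which are fixed in the excerpt):

```python
def avg_cost_policy1(R):
    # Policy 1 always incurs cost R per step, so its average cost is R.
    return R

def avg_cost_policy2(p, S, R):
    # Policy 2 incurs cost S with probability p and cost R otherwise.
    return p * S + (1 - p) * R

# Illustrative values (hypothetical; the excerpt does not specify them).
R, S, p = 5.0, 2.0, 0.5
g1 = avg_cost_policy1(R)        # 5.0
g2 = avg_cost_policy2(p, S, R)  # 0.5*2.0 + 0.5*5.0 = 3.5

# g(2) < g(1) holds exactly when S < R, matching the excerpt's
# observation that R > S is needed for policy 2 to look cheaper.
assert g2 < g1
```

Since g(1) − g(2) = p(R − S), the comparison reduces to the sign of R − S whenever p > 0, which is why the excerpt calls the requirement R > S "strange": the nominally more expensive single-step cost must be larger for the second policy to win on average.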
“…In other words, probability-optimal policies are those that satisfy the entirety of the task, both desired and required behaviors, whereas V^P_{π,λ} ≡ (J_π + λ·g_π) · P[π ⊨ φ] is the normalized value function, corresponding to a notion of energy or effort required, with λ representing the tradeoff between gain and transient cost. We will often omit the dependence of V on P and λ for brevity.…”
Section: Problem Formulationmentioning
confidence: 99%
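The normalized value function in the quote combines two per-policy quantities, J_π and g_π, with tradeoff λ, scaled by the probability that the policy satisfies the task φ. A minimal sketch, with all names and numeric values hypothetical (the excerpt only gives the formula's shape):

```python
def normalized_value(J, g, lam, p_sat):
    # V^P_{pi,lambda} = (J_pi + lambda * g_pi) * P[pi |= phi]:
    # the lambda-weighted combination of the two per-policy terms,
    # scaled by the probability that the policy satisfies the task.
    return (J + lam * g) * p_sat

# Hypothetical numbers: J = 1.0, g = 0.2, tradeoff lambda = 0.5,
# and a 90% chance of satisfying the task.
v = normalized_value(1.0, 0.2, 0.5, 0.9)  # (1.0 + 0.5*0.2) * 0.9 = 0.99
```

Note how the satisfaction probability acts as a multiplicative scaling: a policy that never satisfies φ has normalized value 0 regardless of its gain or transient cost.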