2020
DOI: 10.1016/j.artint.2019.103229
Definability for model counting

Cited by 12 publications (18 citation statements). References 22 publications.
“…Finally, though Decision-DNNF does not offer FO in the general case, wherever the forgetting transformation is used for solving the XAI queries considered above, it concerns variables that are defined from unforgotten ones (those of X). In such a case, applying the forgetting algorithm that consists in replacing every decision node labelled by a variable from X by a ∨-node (while keeping the same two children) turns the Decision-DNNF circuit at hand into a d-DNNF circuit (see (Lagniez, Lonca, and Marquis 2020) for details).⁶ As a consequence, DPI is tractable when Σ is given as a Decision-DNNF circuit, and IIR and IMO are tractable when Σ is given as a structured Decision-DNNF circuit.…”

[Footnote 6: In addition, DNNF languages are quite succinct (actually, more so than other candidates like OBDD or FBDD (Darwiche and Marquis 2002; Bova et al. 2016)), and there exist compilers targeting those languages when Σ is given at start as a CNF formula (Darwiche 2001; Darwiche 2004; Pipatsrisawat and Darwiche 2010; Muise et al. 2012; Lagniez and Marquis 2017).]
Section: Discussion
confidence: 99%
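The forgetting step quoted above replaces each decision node labelled by a variable of X with a ∨-node over the same two children. A minimal sketch of that transformation, assuming a hypothetical circuit representation (the node classes below are illustrative, not the authors' implementation):

```python
# Hypothetical Decision-DNNF node classes (for illustration only).
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Lit:                 # literal leaf: a variable with a sign
    var: str
    positive: bool = True

@dataclass(frozen=True)
class And:                 # decomposable conjunction node
    children: tuple

@dataclass(frozen=True)
class Or:                  # disjunction node (produced by forgetting)
    children: tuple

@dataclass(frozen=True)
class Decision:            # decision node: branch on var (lo = false branch)
    var: str
    lo: "Node"
    hi: "Node"

Node = Union[Lit, And, Or, Decision]

def forget(node: Node, X: set) -> Node:
    """Replace every decision node labelled by a variable of X by an
    OR-node over the same two children, as described in the quote."""
    if isinstance(node, Lit):
        return node
    if isinstance(node, And):
        return And(tuple(forget(c, X) for c in node.children))
    if isinstance(node, Or):
        return Or(tuple(forget(c, X) for c in node.children))
    lo, hi = forget(node.lo, X), forget(node.hi, X)
    if node.var in X:
        return Or((lo, hi))        # drop the decision, keep both children
    return Decision(node.var, lo, hi)
```

For example, forgetting `{"y"}` in the circuit `Decision("y", Lit("a", False), Lit("a"))` yields `Or((Lit("a", False), Lit("a")))`. Note that, as the quote stresses, the result is guaranteed to be a d-DNNF circuit only when the forgotten variables are defined from the unforgotten ones.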
“…An interesting direction to address this scalability challenge is to investigate whether a component caching-based scheme operating natively over the space of MSSes, i.e., avoiding the reduction to model counting, can lead to better runtime efficiency. Another line of future work is to evaluate other contemporary projected model counting tools such as nestHDB (Hecher, Thier, and Woltran 2020) or projMC (Lagniez and Marquis 2019), and to employ preprocessing techniques such as (Manthey 2012) or (Lagniez, Lonca, and Marquis 2020). Finally, we plan to examine an extension of our MSS counting approach to other constraint domains where MSSes find an application, e.g., F can be a set of LTL (Barnat et al. 2016; Bendík 2017) or SMT (Guthmann, Strichman, and Trostanetski 2016) formulas.…”
Section: Discussion
confidence: 99%
“…Finally, since the performance of D4 over many CNF instances is typically boosted when those instances have first been preprocessed using pmc (Lagniez and Marquis 2017b) (or B+E (Lagniez, Lonca, and Marquis 2020) when D4 is used as a model counter), it would be useful to develop and evaluate certification techniques for such preprocessors. This would make it possible to take advantage of them upstream of CD4, while maintaining the certification requirement.…”
Section: Discussion
confidence: 99%