Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence 2023
DOI: 10.24963/ijcai.2023/402

Scalable Coupling of Deep Learning with Logical Reasoning

Abstract: In the ongoing quest for hybridizing discrete reasoning with neural nets, there is increasing interest in neural architectures that can learn how to solve discrete reasoning or optimization problems from natural inputs. In this paper, we introduce a scalable neural architecture and loss function dedicated to learning the constraints and criteria of NP-hard reasoning problems expressed as discrete Graphical Models. We empirically show our loss function is able to efficiently learn how to solve NP-hard reason…
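As a rough illustration of what "learning the cost functions of a discrete graphical model" can look like, here is a minimal PyTorch sketch using a generic negative pseudo-log-likelihood surrogate over learnable pairwise cost tensors. This is not the paper's loss function (the paper introduces its own scalable variant); the problem sizes and all names below are hypothetical.

```python
import torch

# Illustrative sketch only: learn dense pairwise cost tensors C[i][j] of a
# discrete graphical model from observed assignments, via a generic negative
# pseudo-log-likelihood surrogate. NOT the paper's loss function.
n_vars, n_vals = 10, 4  # hypothetical problem size
costs = torch.nn.Parameter(torch.zeros(n_vars, n_vars, n_vals, n_vals))

def pseudo_nll(sample: torch.Tensor) -> torch.Tensor:
    """Negative pseudo-log-likelihood of one assignment (shape: [n_vars])."""
    loss = torch.zeros(())
    for i in range(n_vars):
        # Local energy of each candidate value of variable i, with all
        # other variables clamped to their observed values.
        local = torch.zeros(n_vals)
        for j in range(n_vars):
            if j != i:
                local = local + costs[i, j, :, sample[j]]
        # Lower cost = more probable, hence the minus sign inside softmax.
        loss = loss - torch.log_softmax(-local, dim=0)[sample[i]]
    return loss

opt = torch.optim.Adam([costs], lr=1e-2)
sample = torch.randint(n_vals, (n_vars,))  # placeholder "solution" data
opt.zero_grad()
pseudo_nll(sample).backward()
opt.step()
```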

Cited by 4 publications (11 citation statements) | References 0 publications

Citation statements (ordered by relevance):

“…It solves 66 instances instead of 46 without enforcing VAC.¹⁵ Another stronger preprocessing already used in the UAI'2022 competition is to apply virtual pairwise consistency (VPWC) with additional zero-cost ternary cost functions [29] (options -A -pwc=-1 -t=1). Because this preprocessing can be quite time-consuming (2 × 0.48 seconds on average for UAI2022, up to 5.3 seconds on or_chain_41), we applied it on every objective only once before the two-phase method starts.…”

Section: Results
Confidence: 99%
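The options quoted in this excerpt are toulbar2 command-line flags. A minimal sketch of driving such a run from Python is below; it assumes toulbar2 is installed and on PATH, the instance file name is hypothetical, and the option semantics in the comments are taken from the excerpt rather than verified against the toulbar2 manual.

```python
import subprocess

# Hedged sketch: invoke toulbar2 with the preprocessing options quoted above.
cmd = [
    "toulbar2", "or_chain_41.uai",  # hypothetical instance file in UAI format
    "-A",        # enforce VAC in preprocessing (per the excerpt)
    "-pwc=-1",   # virtual pairwise consistency (VPWC)
    "-t=1",      # additional zero-cost ternary cost functions
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```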
“…We compared the¹³ …”

¹³ 2pko_0009_multi having 15 states in the Pareto front found in 1,723 seconds with 37 Solve calls.
¹⁴ We could not test it for SetCover and Knapsack due to a limitation in toulbar2.
¹⁵ We also tested running VAC (option -A) in preprocessing at every Solve, instead of only once on the quadratic objective before running the two-phase method, but it solved optimally one less instance (65).
¹⁶ CPU-times reported in Tables 1 and 2 do not include this preprocessing time.

Section: Results
Confidence: 99%
“…²⁹ Effie is also far more accurate than Rosetta’s full-atom scoring function in reconstructing the sequence of natural proteins.²⁸ It also improves over the related TERMinator score function,³⁰ with a more efficient architecture and an enhanced loss function.²⁸ The resulting hybrid architecture combines the accuracy of deep learning with the ability of automated reasoning engines to satisfy extra constraints capturing design requirements, without the limitations of black-box autoregressive models.…”

Section: Introduction
Confidence: 96%
“…To address this issue, we developed a hybrid generative AI approach combining Effie, a recent deep-learned pairwise decomposable score function for sequence design,²⁸ with an automated reasoning design tool capable of optimizing this function. Because of its pairwise decomposable nature, Effie is similar to pairwise decomposable physics-based score functions, as available in Rosetta.…”

Section: Introduction
Confidence: 99%