2019
DOI: 10.1609/aaai.v33i01.33011641

Low-Rank Semidefinite Programming for the MAX2SAT Problem

Abstract: This paper proposes a new algorithm for solving MAX2SAT problems based on combining search methods with semidefinite programming approaches. Semidefinite programming techniques are well known as a theoretical tool for approximating maximum satisfiability problems, but their application has traditionally been severely limited by their speed and randomized nature. Our approach overcomes this difficulty by using a recent approach to low-rank semidefinite programming, specialized to work in an incremental fashion suita…
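The low-rank approach the abstract refers to can be illustrated with a minimal sketch of Burer-Monteiro-style coordinate descent (the "mixing method") on the simpler MAXCUT relaxation: minimize the sum of W[i,j]·<v_i, v_j> over unit vectors, then round with a random hyperplane. This is an assumption-laden illustration of the general low-rank SDP idea, not the paper's exact MAX2SAT formulation (which adds a truth-direction vector and per-clause terms); the function name and rank heuristic are hypothetical.

```python
import numpy as np

def mixing_maxcut(W, k=None, iters=200, seed=0):
    """Low-rank SDP via coordinate descent on the MAXCUT relaxation:
    minimize sum_ij W[i,j] * <v_i, v_j> subject to ||v_i|| = 1.
    Illustrative sketch only, not the paper's MAX2SAT solver."""
    n = W.shape[0]
    if k is None:
        # Rank above sqrt(2n) suffices for the low-rank formulation
        # to admit a globally optimal solution.
        k = int(np.ceil(np.sqrt(2 * n))) + 1
    rng = np.random.default_rng(seed)
    V = rng.normal(size=(n, k))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    for _ in range(iters):
        for i in range(n):
            g = W[i] @ V              # gradient of the objective w.r.t. v_i
            norm = np.linalg.norm(g)
            if norm > 1e-12:
                V[i] = -g / norm      # exact coordinate-wise minimizer
    # Goemans-Williamson rounding: sign of projection on a random direction
    r = rng.normal(size=k)
    return np.sign(V @ r)
```

Each inner update solves the per-vector subproblem in closed form, which is what makes this family of methods fast enough to embed in a search loop.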

Cited by 9 publications (13 citation statements)
References 24 publications
“…It minimizes J_sat^w (4) by Newton's method using (5). In addition to Set-E and Set-F, we use another test set, Set-G, from SATLIB [14] in the experiment, containing 100 instances of 5-SAT with m = 3100 clauses in n = 500 variables. The experimental result is summarized in Table 4.…”
Section: Weighted Variables and Clauses
“…Wang et al. built a MAXSAT solver based on the combination of a semidefinite programming relaxation and a branch-and-bound strategy [29]. Selsam et al. proposed NeuroSAT, a neural-network classifier for SAT problems that learns embeddings of a SAT problem through three perceptrons and two LSTMs, so that the system predicts one bit, i.e., the satisfiability of the problem [27].…”
Section: Introduction
“…Besides, several efforts have focused on relaxing logical constraints, such as soft constraint learning [36]. [43] uses semidefinite programming (SDP) [42] to relax the MAXSAT problem and combines it with deep learning. Another way to relax logical constraints is to relax the Boolean variables to continuous variables [23,22], or to continuous random variables as in probabilistic soft logic [19,1,13].…”
Section: Related Work
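The continuous-variable relaxation mentioned in the snippet above can be sketched concretely. Probabilistic soft logic evaluates clauses with the Łukasiewicz t-conorm after relaxing Booleans to [0, 1]; the helper below is a hypothetical illustration of that idea, not code from any of the cited works.

```python
def clause_sat(literals, x):
    """Lukasiewicz relaxation of a disjunctive clause.

    literals: list of (var_index, negated) pairs.
    x: list of relaxed variable values in [0, 1].
    Returns the clause's truth value in [0, 1]; 1.0 means fully satisfied.
    """
    total = sum((1 - x[i]) if neg else x[i] for i, neg in literals)
    return min(1.0, total)
```

At the Boolean corners ({0, 1} assignments) this agrees with ordinary propositional satisfaction, which is what makes the relaxation usable as a differentiable surrogate loss.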
“…More recent work, e.g., Wang et al. (2017) and Wang & Kolter (2019), has developed low-rank SDP solvers for general MAXSAT problems. We extend the work of Wang et al. (2017) to create a differentiable, optimization-based MAXSAT solver that can be employed in the loop of deep learning.…”
Section: Related Work