2012
DOI: 10.1287/ijoc.1110.0459

An Improved Branch-and-Bound Method for Maximum Monomial Agreement

Abstract: The NP-hard maximum monomial agreement (MMA) problem consists of finding a single logical conjunction that best fits a weighted dataset of "positive" and "negative" binary vectors. Computing classifiers using boosting methods involves a maximum agreement subproblem at each iteration, although such subproblems are typically solved by heuristic methods. Here, we describe an exact branch-and-bound method for maximum agreement over Boolean monomials, improving on the earlier work of Goldberg and Shan [14]. In par…
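To make the MMA setting concrete, here is a minimal sketch of the agreement objective, assuming the common formulation in which a monomial (a conjunction of literals) covers a binary vector when every literal is satisfied, and a covered vector is classified positive. The data, names, and brute-force search below are purely illustrative; they are not the paper's branch-and-bound algorithm, which prunes this same search space instead of enumerating it.

```python
import itertools

# A monomial is a conjunction of literals over binary features.
# Represent it as a dict {feature_index: required_value (0 or 1)}.

def covers(monomial, x):
    """True if binary vector x satisfies every literal of the monomial."""
    return all(x[j] == v for j, v in monomial.items())

def agreement(monomial, data):
    """Weighted agreement: covered positives and uncovered negatives
    count as correctly classified.  data is a list of (x, label, weight)
    with label in {+1, -1}."""
    total = 0.0
    for x, label, w in data:
        predicted = 1 if covers(monomial, x) else -1
        if predicted == label:
            total += w
    return total

def brute_force_mma(data, n_features):
    """Enumerate all monomials (3^n of them: each feature is required
    to be 0, required to be 1, or absent).  Feasible only for tiny n."""
    best, best_monomial = float("-inf"), None
    for pattern in itertools.product((None, 0, 1), repeat=n_features):
        monomial = {j: v for j, v in enumerate(pattern) if v is not None}
        score = agreement(monomial, data)
        if score > best:
            best, best_monomial = score, monomial
    return best_monomial, best

# Tiny hypothetical example: two positive and two negative weighted vectors.
data = [((1, 1, 0), +1, 2.0), ((1, 0, 0), +1, 1.0),
        ((0, 1, 1), -1, 1.5), ((1, 1, 1), -1, 0.5)]
print(brute_force_mma(data, n_features=3))
```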

Cited by 13 publications (7 citation statements) · References 26 publications (22 reference statements)
“…We propose to directly work with a loss function of the form as given in (1) (and variations) and solve the non-convex combinatorial optimization problem with state-of-the-art integer programming (IP) techniques including column generation. This approach generalizes previous linear programming based approaches (and hence implicitly convex approaches) in, e.g., [14,21,22,16], while solving classification problems with the true misclassification loss. We acknowledge that (1) is theoretically very hard (in fact NP-hard as shown, e.g., in [21]), however, we hasten to stress that in real-world computations for specific instances the behavior is often much better than the theoretical asymptotic complexity.…”
Section: Introduction
confidence: 72%
“…In order to control complexity, overfitting, and generalization of the model typically some sparsity is enforced. Previous approaches in the context of LP-based boosting have promoted sparsity by means of cutting planes, see, e.g., [21,22,16]. Sparsification can be handled in our approach by solving a delayed integer program using additional cutting planes.…”
Section: Contribution and Related Work
confidence: 99%
“…We propose a linear programming model which is inspired by LP boosting methods for classification using classical column generation techniques [Demiriz et al., 2002, Goldberg, 2012, Eckstein and Goldberg, 2012, Eckstein et al., 2019, Dash et al., 2018]. The goal of our model is to create a weighted combination of first-order logic rules (similar to the one proposed in Yang et al. [2017]) to be used as a prediction function for the task of knowledge graph link prediction.…”
Section: Model
confidence: 99%
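The LP-boosting-with-column-generation setup this citation refers to can be sketched compactly. Below is a minimal, hypothetical loop in the spirit of LPBoost (Demiriz et al., 2002): a hard-margin master LP over the monomial columns generated so far, whose dual values become example weights for a pricing step, and the pricing step is exactly a maximum-agreement subproblem (solved here by brute force, where the paper's branch-and-bound would be used instead). The toy data, the hard-margin variant, and all names are illustrative assumptions, not the cited papers' implementations.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy data: binary vectors with labels in {+1, -1}.
X = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]])
y = np.array([+1, +1, -1, -1])
m, n = X.shape

def monomial_predict(monomial, X):
    """+1 where every literal of the monomial holds, else -1."""
    ok = np.ones(len(X), dtype=bool)
    for j, v in monomial.items():
        ok &= (X[:, j] == v)
    return np.where(ok, 1, -1)

def price(u):
    """Pricing step: find the monomial maximizing the weighted edge
    sum_i u_i * y_i * h(x_i) -- the maximum-agreement subproblem.
    Brute force over all 3^n monomials; feasible only for tiny n."""
    best, best_mon = -np.inf, None
    for pat in itertools.product((None, 0, 1), repeat=n):
        mon = {j: v for j, v in enumerate(pat) if v is not None}
        edge = float(np.dot(u * y, monomial_predict(mon, X)))
        if edge > best:
            best, best_mon = edge, mon
    return best_mon, best

# Column generation for a hard-margin LPBoost-style master:
#   maximize rho  s.t.  sum_j a_j y_i h_j(x_i) >= rho,  sum_j a_j = 1,  a >= 0.
columns = [monomial_predict({}, X)]      # trivial all-+1 starting column
for it in range(20):
    H = np.column_stack(columns)         # m x k matrix of h_j(x_i) values
    k = H.shape[1]
    c = np.r_[-1.0, np.zeros(k)]         # variables [rho, a_1..a_k]; min -rho
    A_ub = np.c_[np.ones(m), -(y[:, None] * H)]   # rho - sum_j a_j y_i h_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m),
                  A_eq=np.r_[0.0, np.ones(k)][None, :], b_eq=[1.0],
                  bounds=[(None, None)] + [(0.0, None)] * k, method="highs")
    rho = -res.fun
    u = -res.ineqlin.marginals           # duals = example weights for pricing
    mon, edge = price(u)
    if edge <= rho + 1e-9:               # no column can improve the margin
        break
    columns.append(monomial_predict(mon, X))
print("optimal margin:", rho, "with", len(columns), "columns")
```

The design point the citation hinges on is visible in `price`: the only problem-specific piece of the column generation loop is the weak learner, so replacing the brute-force search with an exact branch-and-bound for maximum monomial agreement gives a provably optimal pricing step.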
“…In Goldberg and Shan (2007) (see also Eckstein and Goldberg 2009), the same idea of boosting a family of classifiers is considered, but this time the family of classifiers corresponds to the set of patterns. More precisely, with each pattern P is associated a weak hypothesis H_P : {0,1}^n → {+1, −1, 0} defined by…”
Section: Relation With Boosting
confidence: 99%
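The quoted definition of H_P is truncated. One plausible reading, common in the pattern-based (Logical Analysis of Data) literature and stated here purely as an assumption, is that a pattern of a given class votes for that class on the vectors it covers and abstains (outputs 0) elsewhere. A minimal sketch under that assumption:

```python
from typing import Dict, Tuple

Monomial = Dict[int, int]   # feature index -> required value (0 or 1)

def H_P(pattern: Monomial, sign: int, x: Tuple[int, ...]) -> int:
    """Hypothetical ternary weak hypothesis for a pattern P.

    Assumed reading of the truncated definition: a pattern of class
    `sign` (+1 for a positive pattern, -1 for a negative one) votes
    `sign` on the vectors it covers and abstains (returns 0) on the
    rest.  This is an illustrative guess, not the cited paper's
    verbatim definition.
    """
    covered = all(x[j] == v for j, v in pattern.items())
    return sign if covered else 0

# Example: the positive pattern (x1 = 1 AND x3 = 0) on two vectors.
p = {0: 1, 2: 0}
print(H_P(p, +1, (1, 1, 0)))   # covered   -> +1
print(H_P(p, +1, (0, 1, 1)))   # uncovered ->  0
```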