2021
DOI: 10.1609/aaai.v35i10.17081
Physarum Powered Differentiable Linear Programming Layers and Applications

Abstract: Consider a learning algorithm that involves an internal call to an optimization routine such as a generalized eigenvalue problem, a cone programming problem, or even sorting. Integrating such a method as a layer within a trainable deep network in a numerically stable way is not simple: for instance, only recently have strategies emerged for eigendecomposition and differentiable sorting. We propose an efficient and differentiable solver for general linear programming problems which can be used in a plug and p…
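For intuition, the Physarum dynamics that the paper's title alludes to can be sketched in a few lines of numpy. The update rule below (a damped "electrical flow" step, in the style of Physarum-dynamics analyses for standard-form LPs) and the toy problem are illustrative assumptions, not the paper's implementation; `physarum_lp` and its parameters are hypothetical names.

```python
import numpy as np

def physarum_lp(A, b, c, x0, steps=300, h=0.5):
    """Sketch of Physarum dynamics for: min c^T x  s.t.  Ax = b, x >= 0.

    Each iteration solves a weighted least-squares ("electrical flow")
    problem and moves the iterate toward its solution. Requires a strictly
    positive starting point x0; this is a toy sketch, not a robust solver.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        W = np.diag(x / c)                              # conductances x_i / c_i
        q = W @ A.T @ np.linalg.solve(A @ W @ A.T, b)   # flow step
        x = (1 - h) * x + h * q                         # damped update
    return x

# Toy LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  ->  optimum x = (1, 0)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x = physarum_lp(A, b, c, x0=np.array([0.5, 0.5]))
# x converges toward [1.0, 0.0]
```

Because every step is composed of differentiable linear-algebra operations, gradients of the output with respect to `c` (or `A`, `b`) can flow through the iteration, which is what makes such dynamics attractive as a trainable layer.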

Cited by 4 publications (2 citation statements). References 55 publications (77 reference statements).
“…where S is the goodness-of-fit of the match pair in the low-quality set, and M and N are the rows and columns of the low goodness-of-fit set. Since the solution of the above large-scale matching-pair optimization problem presents a discrete distribution, we use the LP Solver (Linear Program Solver) algorithm [29] to solve the Fit matrix…”
Section: B Feature Matching Optimization Layer (mentioning, confidence: 99%)
“…The amounts of data and computational power required for learning have increased. Deep learning uses DNNs with hundreds of layers and a large number of structure-related parameters [145][146][147][148][149][150][151][152][153][154][155][156][157][158][159][160][161]. Therefore, it is prone to overfitting, a condition in which the model fits the training data too closely, fails to generalize, and cannot achieve high accuracy on unknown data.…”
Section: Amounts Of Data and Computational Power (mentioning, confidence: 99%)