2017
DOI: 10.48550/arxiv.1707.08608
Preprint
Gradient-based Inference for Networks with Output Constraints

Cited by 1 publication (3 citation statements). References 0 publications.
“…The second, more general method to build constraint-aware neural networks is based on the idea of penalizing deviations from the constraint (11) during the training procedure (as mentioned in [27,22]). To this end, we introduce the constraint loss function…”
Section: Constraint-adapted Loss Methods (CaL); mentioning, confidence: 99%
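The citation statement above describes the penalty idea: augment the training loss with a term that penalizes violations of an output constraint. A minimal NumPy sketch of such a constraint-adapted loss follows; the names (`constraint_loss`, `total_loss`, `lam`, the example constraint `g`) are illustrative assumptions, not the notation of the cited paper.

```python
import numpy as np

def constraint_loss(y_pred, g, lam=1.0):
    # Quadratic penalty on violations of the equality constraint g(y) = 0.
    # (Illustrative choice; the cited works may use a different penalty form.)
    return lam * np.mean(g(y_pred) ** 2)

def total_loss(y_pred, y_true, g, lam=1.0):
    # Ordinary data loss (here MSE) plus the constraint penalty term.
    mse = np.mean((y_pred - y_true) ** 2)
    return mse + constraint_loss(y_pred, g, lam)

# Hypothetical example constraint: outputs must sum to 1 along the last axis.
g = lambda y: y.sum(axis=-1) - 1.0

y_true      = np.array([[0.7, 0.3]])
y_feasible  = np.array([[0.6, 0.4]])   # satisfies g(y) = 0
y_violating = np.array([[0.6, 0.6]])   # violates it: components sum to 1.2
```

Minimizing `total_loss` by gradient descent then trades off data fit against constraint satisfaction, with `lam` controlling the penalty weight.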
“…which will be the constraint that a surrogate model for (21) should satisfy. As in (12), the constraint target function Φ cubic, corresponding to (22), is given by…”
Section: Cubic Flux Model: Undercompressive Wave; mentioning, confidence: 99%