2016
DOI: 10.48550/arxiv.1603.06318
Preprint
Harnessing Deep Neural Networks with Logic Rules

Cited by 110 publications (85 citation statements)
References 22 publications
“…As shown in [10], the adaptation of logical knowledge as constraints during the learning process has generated promising results, which reinforces the attempts to use ontologies as background knowledge. The area of neuro-symbolic approaches also provides insights into the use of logical knowledge during the training of artificial neural networks [22].…”
Section: Related Work (supporting)
confidence: 54%
“…Out of these, the use of scene graphs, probabilistic ontologies, and first-order logic rules grabs the attention as promising paths to explore. Investigations into the use of background knowledge in the form of first-order logic (FOL) are prominently seen in several studies [10].…”
Section: Related Work (mentioning)
confidence: 99%
“…Domain-specific knowledge of the correctness of detection and recognition can be utilized to mitigate the requirement for hand-labelled data. In parallel, there has been related work on using domain knowledge to regularize model posteriors [3,4]. We formalize domain knowledge of the correctness of recognized team names, time, or half/quarter as first-order logical rules and refer to them as Knowledge Constraints (KC).…”
Section: Approach (mentioning)
confidence: 99%
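The citation above mentions using domain knowledge to regularize model posteriors. The core idea, as popularized by the cited paper (Hu et al., 2016), is to construct a rule-constrained "teacher" distribution by down-weighting labels that violate a first-order logic rule: q(y|x) ∝ p(y|x) · exp(−C · (1 − r(x, y))), where r(x, y) ∈ [0, 1] is the rule's truth value. Below is a minimal illustrative sketch of that projection step only; the function name and example values are hypothetical, and the full method additionally distills the teacher back into the student network.

```python
import numpy as np

def rule_constrained_teacher(student_probs, rule_satisfied, strength=1.0):
    """Project a student posterior toward a rule-constrained teacher.

    Implements q(y|x) proportional to p(y|x) * exp(-C * (1 - r(x, y))):
    labels that violate the rule (r < 1) are exponentially down-weighted.

    student_probs  : label probabilities p(y|x), summing to 1
    rule_satisfied : rule truth values r(x, y) in [0, 1], one per label
    strength       : constraint strength C (larger = harder constraint)
    """
    p = np.asarray(student_probs, dtype=float)
    r = np.asarray(rule_satisfied, dtype=float)
    penalty = np.exp(-strength * (1.0 - r))  # 1.0 where the rule holds
    q = p * penalty
    return q / q.sum()  # renormalize to a valid distribution

# Hypothetical example: a 3-label posterior where the rule forbids label 2.
p = np.array([0.5, 0.3, 0.2])
r = np.array([1.0, 1.0, 0.0])  # label 2 violates the constraint
q = rule_constrained_teacher(p, r, strength=5.0)
# Probability mass shifts from the violating label to labels 0 and 1.
```

In the full framework, the student is then trained against a mixture of the true labels and this teacher distribution, so the rules shape the learned parameters rather than acting only as a test-time filter.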
“…In the context of neural-based dialogue systems, this line is pursued by using constrained rules (Jhunjhunwala et al., 2020), logical rules for inductive logic programming (Zhou et al., 2020), or a declarative language (Altszyler et al., 2020). These rules and models can be easily included in existing dialogue state tracking models to guide the training and prediction phases without additional learning parameters (Hu et al., 2016; van Krieken et al., 2020). These models obtain the same advantages as a user simulator and, in addition, overcome the problem of evaluating the user simulator itself.…”
Section: Background and Related Work (mentioning)
confidence: 99%