2021
DOI: 10.1002/ail2.39

Explainable neural computation via stack neural module networks

Abstract: In complex inferential tasks like question answering, machine learning models must confront two challenges: the need to implement a compositional reasoning process, and, in many applications, the need for this reasoning process to be interpretable to assist users in both development and prediction. Existing models designed to produce interpretable traces of their decision-making process typically require these traces to be supervised at training time. In this paper, we present a novel neural modular approach t…

Cited by 5 publications (1 citation statement). References 32 publications (72 reference statements).
“…By concentrating on training component modules and learning how to combine them, this method minimises the need to relearn for every type of issue. The JAPE rules in the Java-interfacing NLP system in this study are similar in principle to the methodology for composing NMN modules [38], and may provide a basis for integrating neural approaches into TTE extraction systems in an interpretable manner. Studies focusing on image-based problems generally prefer attention extracted from Convolutional Neural Networks [39,40].…”
Section: Technical Perspective
confidence: 99%
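The composition methodology referred to in the citing statement can be made concrete with a small sketch. This is not the authors' implementation: the module set (Find, Transform), the tensor shapes, and the controller design below are illustrative assumptions, intended only to show how reusable modules plus a learned soft module selection give an inspectable reasoning trace without layout supervision.

```python
# Minimal sketch (illustrative, not the paper's code) of composing neural modules
# with a soft, learned module selection at each reasoning step.
import torch
import torch.nn as nn


class FindModule(nn.Module):
    """Produces an attention map over image regions from a query vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, img_feats, query):          # img_feats: (N, dim), query: (dim,)
        scores = img_feats @ self.proj(query)     # (N,)
        return torch.softmax(scores, dim=0)       # attention over N regions


class TransformModule(nn.Module):
    """Shifts an existing attention map conditioned on the query (e.g. 'left of')."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, img_feats, query, prev_att):
        attended = prev_att @ img_feats           # (dim,) weighted region summary
        scores = img_feats @ self.proj(torch.cat([attended, query]))
        return torch.softmax(scores, dim=0)


class SoftController(nn.Module):
    """Predicts a distribution over modules at each step; outputs are soft-averaged,
    so the module choice stays differentiable and needs no layout supervision."""
    def __init__(self, dim, num_modules):
        super().__init__()
        self.select = nn.Linear(dim, num_modules)

    def forward(self, query):
        return torch.softmax(self.select(query), dim=-1)


def reasoning_step(img_feats, query, prev_att, modules, controller):
    weights = controller(query)                   # soft module weights
    outputs = [
        modules[0](img_feats, query),             # Find
        modules[1](img_feats, query, prev_att),   # Transform
    ]
    # Weighted average of module outputs = a differentiable "soft" module choice.
    return sum(w * out for w, out in zip(weights, outputs))
```

A full model would stack several such steps and pass the final attention to an answering module; the per-step module weights and attention maps are what make the reasoning trace inspectable.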