Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.548
Towards Interpretable Reasoning over Paragraph Effects in Situation

Abstract: We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect described in a background paragraph, and apply the knowledge to a novel situation. Existing works ignore the complicated reasoning process and solve it with a one-step "black box" model. Inspired by human cognitive processes, in this paper we propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules. In particul…
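As a rough illustration of the modular approach the abstract alludes to, the sketch below chains two small neural modules so that each intermediate reasoning step remains inspectable. The module names, wiring, and dimensions are illustrative assumptions, not the authors' actual architecture.

```python
# Illustrative sketch only: shows the general idea of replacing a one-step
# "black box" reader with a sequence of small neural modules, each handling
# one reasoning step over the background paragraph and the novel situation.
import torch
import torch.nn as nn

class AttendToCause(nn.Module):
    """Hypothetical module: weights background tokens relevant to the cause."""
    def __init__(self, hidden: int):
        super().__init__()
        self.scorer = nn.Linear(hidden, 1)

    def forward(self, background: torch.Tensor) -> torch.Tensor:
        # background: (tokens, hidden) -> attention weights over tokens
        return torch.softmax(self.scorer(background).squeeze(-1), dim=-1)

class ApplyToSituation(nn.Module):
    """Hypothetical module: maps the attended cause onto the situation tokens."""
    def __init__(self, hidden: int):
        super().__init__()
        self.project = nn.Linear(hidden, hidden)

    def forward(self, cause_summary: torch.Tensor, situation: torch.Tensor) -> torch.Tensor:
        # score each situation token against the projected cause summary
        return torch.softmax(situation @ self.project(cause_summary), dim=-1)

class SequentialReasoner(nn.Module):
    """Chains the modules so every intermediate step can be inspected."""
    def __init__(self, hidden: int):
        super().__init__()
        self.attend_cause = AttendToCause(hidden)
        self.apply_situation = ApplyToSituation(hidden)

    def forward(self, background: torch.Tensor, situation: torch.Tensor):
        cause_weights = self.attend_cause(background)        # step 1: find the cause
        cause_summary = cause_weights @ background           # weighted pooling
        answer_weights = self.apply_situation(cause_summary, situation)  # step 2: apply it
        return {"cause_weights": cause_weights, "answer_weights": answer_weights}

# Usage with random encodings standing in for a contextual encoder's output.
reasoner = SequentialReasoner(hidden=64)
out = reasoner(torch.randn(30, 64), torch.randn(20, 64))
print(out["answer_weights"].shape)  # torch.Size([20])
```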

Cited by 4 publications (3 citation statements); references 17 publications.
“…It is noteworthy that there are many other important miscellaneous works we do not mention in the previous sections. For example, numerous works have proposed to improve upon vanilla gradient-based methods [174,178,65]; linguistic rules such as negation, morphological inflection can be extracted by neural models [141,142,158]; probing tasks can be used to explore linguistic properties of sentences [3,80,43,75,89,74,34]; the hidden state dynamics in recurrent nets are analysed to illuminate the learned long-range dependencies [73,96,67,179,94]; [169,166,168,101,57,167] studied the ability of neural sequence models to induce lexical, grammatical and syntactic structures; [91,90,12,136,159,24,151,85] modeled the reasoning process of the model to explain model behaviors; [157,139,28,163,219,170,180,137,106,58,162,81...…”
Section: Miscellaneous
confidence: 99%
“…Second, our approach computed the attention score for each token in the context and leveraged it softly, and is thus less sensitive to boundary detection. However, when we used fuzzy F1 scores as the evaluation metrics (introduced in Ren et al. (2020), which were marked as 1 as long as the original F1 was not equal to 0), the scores for all modules increased by a large margin, proving the reasoning ability.…”
Section: Reasoning Component Performance
confidence: 99%
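For readers unfamiliar with the "fuzzy F1" variant mentioned in the excerpt above, here is a minimal sketch: a prediction receives full credit whenever its ordinary token-overlap F1 is nonzero. The token-level F1 used here is a standard SQuAD-style implementation assumed for illustration, not taken from the cited paper's code.

```python
# Minimal sketch of the "fuzzy F1" described above: a prediction is marked 1.0
# whenever its ordinary token-overlap F1 is not equal to 0.
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    pred_tokens, gold_tokens = prediction.split(), gold.split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

def fuzzy_f1(prediction: str, gold: str) -> float:
    return 1.0 if token_f1(prediction, gold) > 0 else 0.0

print(token_f1("the red rock", "red rock"))  # partial overlap -> 0.8
print(fuzzy_f1("the red rock", "red rock"))  # nonzero F1 -> counted as 1.0
```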
“…Gupta et al. (2019) extend neural module networks to answer compositional questions. Ren et al. (2020) and Liu and Gardner (2020) further introduce neural network modules on one complex reasoning task, ROPES. In this work, we explore the effectiveness of neural network modules on a qualitative reasoning task.…”
Section: Related Work
confidence: 99%