Proceedings of the 28th International Conference on Computational Linguistics 2020
DOI: 10.18653/v1/2020.coling-main.101

Exploring End-to-End Differentiable Natural Logic Modeling

Abstract: We explore end-to-end trained differentiable models that integrate natural logic with neural networks, aiming to keep the backbone of natural language reasoning based on the natural logic formalism while introducing subsymbolic vector representations and neural components. The proposed model adapts module networks to model natural logic operations and is enhanced with a memory component to model contextual information. Experiments show that the proposed framework can effectively model monotonicity-based reasoning […]
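
For context on the natural logic formalism the abstract builds on, the sketch below shows one way relation composition in MacCartney-style natural logic could be relaxed into a differentiable operation over soft relation distributions. This is an illustrative assumption, not the paper's actual architecture: the join table is only partially filled in (unspecified entries default to independence), and the function name `soft_join` is hypothetical.

```python
import numpy as np

# The seven basic relations of MacCartney-style natural logic.
RELATIONS = ["equivalence", "forward_entailment", "reverse_entailment",
             "negation", "alternation", "cover", "independence"]
R = {name: i for i, name in enumerate(RELATIONS)}

# Partial join table: JOIN[i, j] is the relation obtained by composing
# relation i with relation j along a chain of edits. Only a few
# well-known entries are filled in; everything else defaults to
# independence purely for illustration.
JOIN = np.full((7, 7), R["independence"], dtype=int)
JOIN[R["equivalence"], :] = np.arange(7)   # composing with equivalence
JOIN[:, R["equivalence"]] = np.arange(7)   # leaves the relation unchanged
JOIN[R["forward_entailment"], R["forward_entailment"]] = R["forward_entailment"]
JOIN[R["reverse_entailment"], R["reverse_entailment"]] = R["reverse_entailment"]
JOIN[R["negation"], R["negation"]] = R["equivalence"]

def soft_join(p, q):
    """Differentiable composition of two soft relation distributions.

    p, q: length-7 arrays summing to 1 (e.g. softmax outputs of neural
    modules). The probability mass of each pair (i, j) is routed to the
    composed relation JOIN[i, j].
    """
    out = np.zeros(7)
    for i in range(7):
        for j in range(7):
            out[JOIN[i, j]] += p[i] * q[j]
    return out

# Example: a confident forward-entailment step composed with a negation step.
p = np.zeros(7); p[R["forward_entailment"]] = 0.9; p[R["independence"]] = 0.1
q = np.zeros(7); q[R["negation"]] = 1.0
print(soft_join(p, q))  # mass lands on the table's default entries here
```

Because composition is just a bilinear routing of probability mass through a fixed table, gradients flow through p and q, which is the sense in which natural logic operations can sit inside an end-to-end trained network.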

Cited by 11 publications (29 citation statements); references 40 publications.
“…We conducted experiments on six datasets: SNLI (Bowman et al, 2015), HELP (Yanaka et al, 2019b), MED (Yanaka et al, 2019a), MoNLI (Geiger et al, 2020), NatLog-2hop (Feng et al, 2020), and a compositional generalization dataset (Yanaka et al, 2020). The results show the model's superior capability in monotonicity inferences, systematic generalization, and interpretability compared to previous models on these existing datasets, while the model maintains competitive performance on the generic SNLI test set.…”
Section: * Equal Contribution
confidence: 97%
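
As a rough illustration of the monotonicity inferences mentioned above: in natural logic, a word-level entailment is projected through the monotonicity of its context, so the same substitution can yield entailment in one environment and not in another. The toy function below sketches that projection rule under simple assumptions; it is not code from any of the cited systems, and the name `project` is hypothetical.

```python
def project(lexical_relation: str, monotonicity: str) -> str:
    """Project a word-level relation through the monotonicity of its context.

    lexical_relation: "forward_entailment" (e.g. dog -> animal) or
                      "reverse_entailment" (e.g. animal -> dog).
    monotonicity: "upward" or "downward" for the position being substituted.
    """
    if monotonicity == "upward":
        return lexical_relation
    if monotonicity == "downward":
        flip = {"forward_entailment": "reverse_entailment",
                "reverse_entailment": "forward_entailment"}
        return flip.get(lexical_relation, lexical_relation)
    return "independence"

# "Some dogs bark" -> "Some animals bark": 'some' is upward monotone,
# and dog -> animal is forward entailment, so the sentence pair is entailment.
print(project("forward_entailment", "upward"))    # forward_entailment

# "Every animal barks" -> "Every dog barks": 'every' is downward monotone
# in its restrictor, so replacing animal with the more specific dog
# (reverse entailment at the word level) still yields sentence entailment.
print(project("reverse_entailment", "downward"))  # forward_entailment
```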
“…Specifically for natural language, natural logic has long been studied to model reasoning in human language (Lakoff, 1970; van Benthem, 1988; Valencia, 1991; Van Benthem, 1995; Nairn et al, 2006; MacCartney, 2009; MacCartney and Manning, 2009; Icard, 2012; Angeli and Manning, 2014). However, work investigating the joint advantages of neural networks and natural logic is sparse and understudied (Feng et al, 2020) (see Sec. 2 for more details).…”
Section: * Equal Contribution
confidence: 99%
“…Another study explored a symbolic intermediate representation for neural surface realisation (Elder et al, 2019), which is similar to first-order logic. Moreover, a recent attempt adapted module networks to model natural logic operations, enhanced with a memory component to model contextual information (Feng et al, 2020). Furthermore, RuleNN (Sen et al, 2020) was developed to tackle sentence classification with models in the form of first-order logic, achieving performance comparable to some neural models.…”
Section: Model Interpretability
confidence: 99%
“…Another study explored a symbolic intermediate representation for neural surface realisation [Elder et al, 2019], which is similar to first-order logic. Moreover, a recent attempt adapted module networks to model natural logic operations, enhanced with a memory component to model contextual information [Feng et al, 2020]. Furthermore, RuleNN [Sen et al, 2020] was developed to tackle sentence classification with models in the form of first-order logic, achieving performance comparable to some neural models.…”
Section: Claim Validation
confidence: 99%