Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1 (2017).
DOI: 10.18653/v1/e17-1067

On-demand Injection of Lexical Knowledge for Recognising Textual Entailment

Abstract: We approach the recognition of textual entailment using logical semantic representations and a theorem prover. In this setup, lexical divergences that preserve semantic entailment between the source and target texts need to be explicitly stated. However, recognising subsentential semantic relations is not trivial. We address this problem by monitoring the proof of the theorem and detecting unprovable sub-goals that share predicate arguments with logical premises. If a linguistic relation exists, then an approp…
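The abstract describes the core mechanism: monitor the theorem prover, find sub-goals it cannot discharge that share predicate arguments with the premises, query a lexical resource for a relation between the predicates involved, and, if one is found, inject a corresponding axiom and resume proving. The following Python sketch illustrates that loop under assumed interfaces: WordNet (via NLTK) stands in for the lexical resource, and the prover object with its prove, unprovable_subgoals, and premise_predicates_sharing_args members is a hypothetical abstraction, not the authors' implementation.

```python
# Minimal sketch of the on-demand axiom injection loop, assuming WordNet as
# the lexical resource and a hypothetical prover interface.
from nltk.corpus import wordnet as wn


def lexical_relation(src_pred: str, tgt_pred: str):
    """Return 'hypernymy' if some sense of tgt_pred lies on a hypernym path of
    some sense of src_pred (e.g. dog -> animal); otherwise None."""
    for s in wn.synsets(src_pred):
        ancestors = {h for path in s.hypernym_paths() for h in path}
        if any(t in ancestors for t in wn.synsets(tgt_pred)):
            return "hypernymy"
    return None


def make_axiom(src_pred: str, tgt_pred: str) -> str:
    """Construct a one-directional lexical axiom for the prover."""
    return f"forall x. {src_pred}(x) -> {tgt_pred}(x)"


def prove_with_injection(prover, premises, goal, max_rounds: int = 3) -> bool:
    """Retry the proof, injecting axioms for unprovable sub-goals whose
    predicates share arguments with predicates occurring in the premises."""
    axioms = []
    for _ in range(max_rounds):
        result = prover.prove(premises + axioms, goal)  # assumed interface
        if result.proved:
            return True
        new = [make_axiom(p, sg.predicate)
               for sg in result.unprovable_subgoals
               for p in result.premise_predicates_sharing_args(sg)
               if lexical_relation(p, sg.predicate)]
        if not new:
            return False  # no lexical bridge found; give up
        axioms.extend(new)
    return False
```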

Cited by 28 publications (32 citation statements). References 25 publications.

“…Systems: As inference is closely related to logic, there has always been a line of research building logic-based or logic-and-machine-learning hybrid models for NLI/RTE problems (e.g. MacCartney, 2009; Abzianidze, 2015; Martínez-Gómez et al., 2017; Yanaka et al., 2018). … Re-implementations of these transformer models for Chinese have led to similar successes on related tasks. For example, Cui et al. (2019) report that a large RoBERTa model, pre-trained with whole-word masking, achieves the highest accuracy (81.2%) among their transformer models on XNLI.…”
Section: Related Work (mentioning)
confidence: 99%
“…In ccg2lambda, two wide-coverage CCG parsers, C&C (Clark and Curran, 2007) and Easy-CCG (Lewis and Steedman, 2014), are used for converting tokenized sentences into CCG trees robustly. According to a previous study (Martínez-Gómez et al, 2017), EasyCCG achieves higher accuracy. Thus, when the output of both C&C and EasyCCG can be proved, we use EasyCCG's output for creating features.…”
Section: Related Work (mentioning)
confidence: 82%
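As a rough illustration of the parser-preference heuristic described in this excerpt, the sketch below assumes a hypothetical parse_and_prove wrapper (not part of ccg2lambda's published interface) that runs one CCG parser, composes the semantics, and attempts the proof; when both C&C and EasyCCG outputs are provable, the EasyCCG one is kept.

```python
# Sketch only: parse_and_prove is a hypothetical wrapper around a CCG parser,
# semantic composition, and the theorem prover.
def choose_provable_output(sentence_pair, parse_and_prove):
    """Return the provable analysis, preferring EasyCCG over C&C."""
    provable = {}
    for parser in ("easyccg", "candc"):
        result = parse_and_prove(sentence_pair, parser=parser)
        if result.proved:
            provable[parser] = result
    if "easyccg" in provable:        # prefer EasyCCG when both outputs prove
        return provable["easyccg"]
    return provable.get("candc")     # fall back to C&C, or None if neither proves
```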
“…The inference system implemented in ccg2lambda using Coq achieves efficient automatic inference by feeding a set of predefined tactics and user-defined proof-search tactics to its interactive mode. The natural deduction system is particularly suitable for injecting external axioms during the theorem-proving process (Martínez-Gómez et al, 2017).…”
Section: System Overview (mentioning)
confidence: 99%
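A hedged illustration of what injecting external axioms into a Coq-based pipeline can look like (this is not ccg2lambda's actual code): Axiom declarations are spliced into the script ahead of the entailment theorem, and a proof-search tactic consumes them. Here the stock firstorder tactic stands in for the system's predefined and user-defined tactics, and batch compilation with coqc is used in place of the interactive mode mentioned above.

```python
# Illustrative only; not ccg2lambda's actual code.
import os
import subprocess
import tempfile


def build_coq_script(premise: str, conclusion: str, axioms: list[str]) -> str:
    """Assemble a Coq file: injected axioms first, then the entailment theorem."""
    decls = "\n".join(f"Axiom ax{i} : {a}." for i, a in enumerate(axioms))
    return (
        "Parameter Entity : Type.\n"
        f"{decls}\n"
        f"Theorem entail : ({premise}) -> ({conclusion}).\n"
        "Proof. firstorder. Qed.\n"  # stand-in for custom proof-search tactics
    )


def try_prove(script: str, timeout: int = 100) -> bool:
    """Compile the script with coqc; a zero exit status means the proof succeeded."""
    with tempfile.NamedTemporaryFile("w", suffix=".v", delete=False) as f:
        f.write(script)
        path = f.name
    try:
        proc = subprocess.run(["coqc", path], capture_output=True,
                              text=True, timeout=timeout)
        return proc.returncode == 0
    finally:
        os.remove(path)
```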
“…As mentioned earlier, these systems try to prove whether T entails H, by applying a theorem prover to the logical formulas converted from the CCG trees. We report results for ccg2lambda with the default settings (with SPSA abduction; Martínez-Gómez et al. (2017)) and results for two versions of LangPro, one which is described in Abzianidze (2015) (henceforth we refer to it as LangPro15) and the other in Abzianidze (2017) (LangPro17). Briefly, the difference between the two versions is that LangPro17 is more robust to parse errors.…”
Section: Experimental Settings (mentioning)
confidence: 99%