Proceedings of the 11th Joint Conference on Lexical and Computational Semantics 2022
DOI: 10.18653/v1/2022.starsem-1.2
DeepA2: A Modular Framework for Deep Argument Analysis with Pretrained Neural Text2Text Language Models

Abstract: In this paper, we present and implement a multidimensional, modular framework for performing deep argument analysis (DeepA2) using current pre-trained language models (PTLMs). ArgumentAnalyst, a T5 model (Raffel et al., 2020) set up and trained within DeepA2, reconstructs argumentative texts, which advance an informal argumentation, as valid arguments: It inserts, e.g., missing premises and conclusions, formalizes inferences, and coherently links the logical reconstruction to the source text. We create a synth…

Cited by 6 publications (5 citation statements) · References 25 publications
“…Logical reasoning with LLMs and artificially controlled experiments: Integrating logical reasoning ability into neural models is a pivotal goal in the artificial intelligence field (Marcus, 2003). With this aim, enclosing the models' exact weakness with artificially controlled data has been actively conducted in our field (Betz et al., 2021; Clark et al., 2020; Lu et al., 2021; Kudo et al., 2023); we show the peculiar case that just the flip of one word (adding a nation prefix) causes drastic effects for modern LLMs.…”
Section: Related Work
confidence: 82%
“…Automated reasoning has been a challenging task in NLP. Before the era of LLMs, the prevalent approaches to logical reasoning were based on fine-tuning pre-trained models (Clark, Tafjord, and Richardson 2020; Betz, Voigt, and Richardson 2021; Han et al. 2022). However, these methods often led to unrealistic inferences due to implicit label-data correlations (Zhang et al. 2023).…”
Section: Reasoning With LLMs
confidence: 99%
“…Another straightforward approach for text-based logical reasoning is to first translate natural language statements into formal logic expressions and then use a formal logic inference engine. A lot of efforts have been made in this direction (Weber et al., 2019; Levkovskyi & Li, 2021; Lu et al., 2022a; Betz & Richardson, 2022), and we also tried it in our experiments. However, it turns out to be very challenging to map natural language to formal logic: the translated formal logic expressions often bear subtle issues (e.g., naming variation) such that a formal logic engine won't be able to pattern-match them well.…”
Section: Related Work
confidence: 99%