Proceedings of the Third Workshop on Argument Mining (ArgMining2016) 2016
DOI: 10.18653/v1/w16-2809
Neural Attention Model for Classification of Sentences that Support Promoting/Suppressing Relationship

Abstract: Evidence that supports a claim "a subject phrase promotes or suppresses a value" helps in making a rational decision. We aim to construct a model that can classify whether a particular piece of evidence supports a claim of a promoting/suppressing relationship given an arbitrary subject-value pair. In this paper, we propose a recurrent neural network (RNN) with an attention model to classify such evidence. We incorporated a word embedding technique in an attention model such that our method generalizes for never-encountered …

Cited by 5 publications (3 citation statements). References 12 publications (7 reference statements).
“…Liu et al. (2016) propose two models that capture the interdependencies between two parallel LSTMs encoding the two sentences for the tasks of recognizing textual entailment and matching questions and answers, respectively. A bidirectional recurrent neural network (BiRNN) with a word embedding-based attention model is used to determine whether a piece of evidence supports the claim of a support/attack relation using a data set of 1,000 pairs of sentences in Koreeda et al. (2016). In addition, Bosc, Cabrio, and Villata (2016) use a corpus consisting of tweets to determine attack and support relations between tweets.…”
Section: Argument Mining
confidence: 99%
“…Several types of deep learning architectures have been used in AM or similar tasks where sentence pairs need to be classified. These include LSTMs (Bowman et al. 2015, 2016; Liu et al. 2016); encoder-decoder LSTMs (Bosc, Cabrio, and Villata 2016); attentional LSTMs (Rocktäschel et al. 2015; Koreeda et al. 2016; Liu et al. 2016), which use a soft attention mechanism so that the representation of one piece of text depends on the representation of the other piece of text; and (attention-based) convolutional neural networks (Habernal and Gurevych 2016; Yin et al. 2016).…”
Section: Architecture
confidence: 99%
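The soft attention mechanism described in the statement above can be sketched in a few lines: hidden states from an LSTM over one sentence are scored against the encoding of the other sentence, the scores are normalized with a softmax, and the weighted sum becomes a representation of the first sentence conditioned on the second. This is a minimal numpy sketch under that generic formulation, not a reproduction of any cited model; the function and variable names are illustrative.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_attention(premise_states, hypothesis_state):
    """Attend over one sentence's hidden states using the other's encoding.

    premise_states: (n_words, d) array of LSTM hidden states for sentence A.
    hypothesis_state: (d,) final hidden state of an LSTM over sentence B.
    Returns a weighted summary of A (conditioned on B) and the weights.
    """
    scores = premise_states @ hypothesis_state   # one score per word of A
    weights = softmax(scores)                    # normalized, sums to 1
    context = weights @ premise_states           # (d,) attention-weighted sum
    return context, weights
```

The key property noted in the quote follows directly: because `context` depends on `hypothesis_state`, the representation of one text is a function of the other.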
“…Liu et al. (2016) proposed two models capturing the interdependencies between the two parallel LSTMs encoding two input sentences for the tasks of recognising textual entailment and matching questions and answers. Further, Koreeda et al. (2016) used a BiRNN with a word-embedding-based attention model to determine whether a piece of evidence supports a claim that a phrase promotes or suppresses a value, using a dataset of 1,000 pairs.…”
Section: Related Work
confidence: 99%
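One plausible reading of "word-embedding-based attention" in the statement above is that attention weights over evidence words are derived from embedding similarity to the subject/value phrase, so the model can weight words it has never seen in training as long as they have embeddings. The sketch below illustrates that general idea only; it is an assumption for illustration, not the cited paper's actual architecture, and all names are hypothetical.

```python
import numpy as np

def embedding_attention(evidence_vecs, query_vec):
    """Weight evidence words by embedding similarity to a query phrase.

    evidence_vecs: (n_words, d) word embeddings of the evidence sentence.
    query_vec: (d,) embedding of the subject/value phrase.
    Returns a similarity-weighted sentence vector and the weights.
    """
    # cosine similarity of each evidence word to the query embedding
    sims = evidence_vecs @ query_vec / (
        np.linalg.norm(evidence_vecs, axis=1) * np.linalg.norm(query_vec))
    weights = np.exp(sims) / np.exp(sims).sum()  # softmax over similarities
    return weights @ evidence_vecs, weights
```

Because the weights depend only on embeddings rather than learned word-specific parameters, this style of attention generalizes to never-encountered subject-value pairs, which matches the motivation stated in the abstract.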