Argument Mining: A Machine Learning Perspective (2015)
DOI: 10.1007/978-3-319-28460-6_10

Cited by 51 publications (43 citation statements)
References 34 publications

“…Employing machine learning and a set of features representing sentences, the goal is to discard sentences that are not part (or do not contain a component) of an argument. As also reported by Lippi and Torroni (2015a), the vast majority of existing approaches employ "classic, off-the-shelf" classifiers, while most of the effort is devoted to highly engineered features. A plethora of learning algorithms have been applied to the task, including Naive Bayes (Moens et al., 2007; Park and Cardie, 2014), Support Vector Machines (SVM) (Mochales and Moens, 2011; Rooney et al., 2012; Park and Cardie, 2014; Stab and Gurevych, 2014b; Lippi and Torroni, 2015b), Maximum Entropy (Mochales and Moens, 2011), Logistic Regression (Goudas et al., 2014, 2015; Levy et al., 2014), and Decision Trees and Random Forests (Goudas et al., 2014, 2015; Stab and Gurevych, 2014b).…”
Section: Related Work
confidence: 92%
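The setup this excerpt describes, a "classic" off-the-shelf classifier trained over sentence features to separate argumentative from non-argumentative sentences, can be illustrated with a minimal sketch. The sketch below assumes scikit-learn and uses TF-IDF n-grams as a crude stand-in for the hand-engineered features the surveyed works rely on; the sentences and labels are invented toy data, not drawn from any of the cited corpora or systems.

```python
# Minimal sketch: sentence-level argument detection with a "classic" classifier.
# Toy data only; real systems use richer engineered features (discourse markers,
# syntax, position, etc.) as emphasized in the surveyed approaches.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# 1 = argumentative (contains an argument component), 0 = non-argumentative
sentences = [
    "We should ban smoking in parks because it harms children.",
    "Therefore, the proposed policy will reduce costs.",
    "The meeting starts at 10 am.",
    "The report was published in 2014.",
]
labels = [1, 1, 0, 0]

# TF-IDF unigrams/bigrams feed a linear SVM, one of the classifiers listed above.
clf = Pipeline([
    ("features", TfidfVectorizer(ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])
clf.fit(sentences, labels)

print(clf.predict(["Hence, the tax should be abolished.",
                   "Lunch is served at noon."]))
```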
“…supports, attacks) among these components in texts. While primarily aiming to extract arguments from texts in order to provide structured data for computational models of argument and reasoning engines (Lippi and Torroni, 2015a), argument mining also has the potential to support applications in various research fields, such as opinion mining (Goudas et al., 2015), stance detection (Hasan and Ng, 2014), policy modelling (Florou et al., 2013; Goudas et al., 2014), and legal information systems (Palau and Moens, 2009). Argument mining is usually addressed as a pipeline of several sub-tasks. Typically, the first sub-task is the separation of argumentative from non-argumentative text units, which can be performed at various granularity levels, from clauses to several sentences, usually depending on corpus characteristics.…”
Section: Introduction
confidence: 99%
“…It is very difficult to extract properly formed arguments from online discussions, and the absence of suitable annotated corpora for the automatic identification of these arguments is problematic. According to Lippi and Torroni (2015a), who survey the work carried out in argument mining so far with an emphasis on the machine learning approaches used, the two main lines of work in argument mining concern the extraction of abstract arguments (Cabrio and Villata, 2012; Yaglikci and Torroni, 2014) and of structured arguments.…”
Section: Related Work
confidence: 99%
“…Examining arguments found in natural language texts quickly leads to the recognition that many such arguments are incomplete (Lippi and Torroni, 2015a). That is, if one considers an argument to be a set of premises and a conclusion that follows from those premises, one or more of these elements can be missing in natural language texts.…”
Section: Introduction
confidence: 99%
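The notion in this excerpt, an argument as a set of premises plus a conclusion where either part may be missing from the text (an enthymeme), can be captured by a small data structure. The sketch below is an invented illustration; the Argument class and its fields are not taken from the book or from the citing paper.

```python
# Illustrative sketch only: representing possibly incomplete mined arguments.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Argument:
    premises: List[str] = field(default_factory=list)
    conclusion: Optional[str] = None

    def is_complete(self) -> bool:
        # Complete only if at least one premise and a conclusion were recovered.
        return bool(self.premises) and self.conclusion is not None

# An enthymeme mined from text: the conclusion is left implicit.
a = Argument(premises=["Smoking in parks harms children."])
print(a.is_complete())  # False
```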
“…see [33,37,27] for overviews). In our approach, we mine arguments and relationships between them from reviews to get AFs.…”
Section: Introduction
confidence: 99%
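The AFs mentioned in this excerpt are Dung-style abstract argumentation frameworks: a set of arguments plus an attack relation, over which acceptable arguments can then be computed. The sketch below is an invented toy example, not the approach of the citing work: it represents a mined AF as a set of attack pairs and computes the grounded extension by iterating the characteristic function to a fixpoint.

```python
# Minimal sketch of a Dung-style abstract argumentation framework (AF), assuming
# the arguments and attacks have already been mined from text; this AF is invented.
def grounded_extension(arguments, attacks):
    """Iterate the characteristic function from the empty set to a fixpoint.

    attacks: set of (attacker, attacked) pairs.
    An argument is acceptable w.r.t. a set S if every one of its attackers
    is itself attacked by some member of S.
    """
    extension = set()
    while True:
        acceptable = {
            a for a in arguments
            if all(any((defender, attacker) in attacks for defender in extension)
                   for (attacker, target) in attacks if target == a)
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Toy AF mined from reviews: c attacks b, b attacks a.
arguments = {"a", "b", "c"}
attacks = {("c", "b"), ("b", "a")}
print(grounded_extension(arguments, attacks))  # prints {'a', 'c'} (order may vary)
```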