Proceedings of the 10th SIGHUM Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities 2016
DOI: 10.18653/v1/w16-2117
Towards a text analysis system for political debates

Abstract: Social scientists and journalists nowadays have to deal with increasingly large amounts of data. Finding insight in a sea of information usually requires expensive search and annotation effort. Our goal is to build a discourse analysis system that can be applied to large text collections. This system can help social scientists and journalists analyze data and validate their research theories by providing them with tailored machine learning methods that alleviate the annotation effort and exploratory …

Cited by 7 publications (7 citation statements)
References 10 publications
“…To answer RQ9, we are motivated by the finding in previous work showing a large term overlap between claims and non-claims [57]. Because of this, we posit that check-worthiness models may face difficulties differentiating between highly similar sentences with opposing labels.…”
Section: Chapter 8: Fact Check-worthiness Detection With Contrastive Ranking
confidence: 99%
“…Finally, Le et al (2016) used deep learning. They argued that the top terms in claim vs. nonclaim sentences are highly overlapping in content, which is a problem for bag-of-words approaches.…”
Section: Related Work
confidence: 99%
“…Thus, the task can be formulated as recognizing textual entailment, which is analyzed in detail in (Dagan et al, 2009). Finally, Le et al (2016) argued that the top terms in claim vs. non-claim sentences are highly overlapping, which is a problem for bag-of-words approaches. Thus, they used a Convolutional Neural Network, where each word is represented by its embedding and each named entity is replaced by its tag, e.g., person, organization, location.…”
Section: Related Work
confidence: 99%
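The preprocessing step described in the excerpt above, replacing each named entity with its type tag before embedding lookup, might be sketched as follows. This is a minimal illustration, not the authors' code: the function name is invented, and the entity spans are a toy stand-in for the output of a real NER tagger.

```python
def replace_entities(tokens, entities):
    """Replace named-entity tokens with their entity-type tags.

    `tokens` is a list of word tokens; `entities` maps a token index to
    an entity tag (e.g. PERSON, ORGANIZATION, LOCATION). In practice the
    tags would come from an NER tagger run beforehand (a hypothetical
    preprocessing step assumed here). The resulting sequence is what
    would be fed to the embedding lookup of the CNN, so that rare
    surface forms like specific politician names collapse onto shared
    tag embeddings.
    """
    return [entities.get(i, tok) for i, tok in enumerate(tokens)]


tokens = "Trump met leaders in Brussels on Monday".split()
entities = {0: "PERSON", 4: "LOCATION"}  # toy NER output, not real tagger output
print(replace_entities(tokens, entities))
# ['PERSON', 'met', 'leaders', 'in', 'LOCATION', 'on', 'Monday']
```

Collapsing entities onto shared tags is one way to reduce the claim/non-claim vocabulary overlap problem the cited work describes, since the model no longer has to learn separate parameters for every named entity.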