Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d15-1189

Event Detection and Factuality Assessment with Non-Expert Supervision

Abstract: Events are communicated in natural language with varying degrees of certainty. For example, if you are "hoping for a raise," it may be somewhat less likely than if you are "expecting" one. To study these distinctions, we present scalable, high-quality annotation schemes for event detection and fine-grained factuality assessment. We find that non-experts, with very little training, can reliably provide judgments about what events are mentioned and the extent to which the author thinks they actually happened. We …

Cited by 53 publications (70 citation statements) | References 13 publications
“…The rationale behind (i) is that true should be associated with positive values; false should be associated with negative values; and the confidence rating should control how far from zero the normalized rating is, adjusting for the biases of annotators that responded to a particular item. The resulting response scale is analogous to current approaches to event factuality annotation (Lee et al., 2015; Stanovsky et al., 2017; Rudinger et al., 2018). We obtain a normalized score from these models by setting the Best Linear Unbiased Predictors for the by-annotator random effects to zero and using the Best Linear Unbiased Estimators for the fixed effects to obtain a real-valued label for each token on each property.…”
Section: Comparison to Standard Ontology
confidence: 99%
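As a rough illustration of the normalization described in that citation statement, the sketch below fits a mixed-effects model with a by-annotator random intercept and reads off fixed-effects-only predictions, which is equivalent to setting the annotator BLUPs to zero. This is a minimal sketch, not the cited authors' code: the data layout, column names, and the choice of statsmodels' MixedLM are all assumptions.

```python
# Minimal sketch of the normalization described above -- NOT the cited
# authors' code. Assumed toy layout: one row per annotator judgment,
# with columns item, annotator, is_true, confidence.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "item":       ["e1", "e2", "e3"] * 3,
    "annotator":  ["a1"] * 3 + ["a2"] * 3 + ["a3"] * 3,
    "is_true":    [True, False, True, True, False, False, True, False, True],
    "confidence": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.8, 0.7, 0.5],
})

# Signed response: true -> positive, false -> negative; the confidence
# rating controls how far from zero the raw rating sits.
df["rating"] = df["confidence"] * df["is_true"].map({True: 1.0, False: -1.0})

# Items as fixed effects, plus a by-annotator random intercept that
# absorbs each annotator's bias.
model = smf.mixedlm("rating ~ 0 + C(item)", df, groups=df["annotator"])
fit = model.fit()

# predict() uses the fixed effects only (the BLUEs), which is equivalent
# to setting the by-annotator random effects (the BLUPs) to zero.
df["normalized"] = fit.predict(df)
print(df[["item", "normalized"]].drop_duplicates())
```

The key design point is that the annotator-specific intercepts soak up systematic rating biases during fitting, so the fixed-effects predictions serve as bias-adjusted, real-valued labels per item.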
“…This [footnote 7: http://scikit-learn.org/] Table 2: Performance of the baselines against our new supervised model (bottom). † The performance of UW features on MEANTIME and FactBank uses a different solver from that in Lee et al. (2015). See Section 5 for details.…”
Section: Supervised Setting Improves Performance
confidence: 99%
“…In a separate attempt, which we will call the UW system, Lee et al. (2015) used SVM regression to predict a continuous factuality value from lexical and syntactic features (lemma, part of speech, and dependency paths). Similarly to the TruthTeller approach, they also predict a single factuality value pertaining to the author's commitment towards the predicate.…”
Section: Introduction
confidence: 99%
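To make the UW-style setup concrete, here is a minimal sketch of SVM regression over lemma, part-of-speech, and dependency-path features using scikit-learn. The toy features, labels, and DictVectorizer-based encoding are assumptions for illustration; this is not the UW system's actual pipeline.

```python
# Sketch of continuous factuality prediction via SVM regression
# (assumed toy features/labels, not the UW system's real pipeline).
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

# Hypothetical per-predicate features: lemma, part of speech, and the
# dependency path from the root to the predicate.
train_feats = [
    {"lemma": "expect", "pos": "VERB", "dep_path": "ROOT"},
    {"lemma": "hope",   "pos": "VERB", "dep_path": "ROOT"},
    {"lemma": "raise",  "pos": "NOUN", "dep_path": "ROOT>xcomp>dobj"},
]
# Factuality on a continuous scale, e.g. -3 (did not happen) to +3 (happened).
train_y = [2.0, 1.0, 0.5]

model = make_pipeline(DictVectorizer(sparse=True), SVR(kernel="linear"))
model.fit(train_feats, train_y)

test_feats = [{"lemma": "expect", "pos": "VERB", "dep_path": "ROOT"}]
print(model.predict(test_feats))  # one continuous factuality score per predicate
```

Predicting a single regression target per predicate mirrors the cited design choice: one scalar capturing the author's commitment, rather than a categorical factuality class.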
“…Soni et al. (2014) target the factuality of quotes (direct and indirect) on Twitter. Lee et al. (2015) detect events and assess factuality using easy-to-understand short instructions to crowdsource annotations. Unlike us, they annotate factuality at the individual token level, where annotated tokens are deemed events by annotators.…”
Section: Previous Work
confidence: 99%