2016
DOI: 10.1007/978-3-319-46349-0_32
Stability Evaluation of Event Detection Techniques for Twitter

Abstract: Twitter continues to gain popularity as a source of up-to-date news and information. As a result, numerous event detection techniques have been proposed to cope with the steadily increasing rate and volume of social media data streams. Although most of these works conduct some evaluation of the proposed technique, comparing their effectiveness is a challenging task. In this paper, we examine the challenges to reproducing evaluation results for event detection techniques. We apply several event detecti…

Cited by 3 publications (4 citation statements)
References 23 publications (18 reference statements)
“…Event tracking literature knows well the challenges of evaluating its algorithms. Weiler et al. (2017), whose body of work (Weiler et al., 2015a, b, 2016, 2019) gives a broad overview of the problems, describe the evaluation process itself as “a challenging research question” independent of the event tracking one. Nevertheless, for a problem as essential as measuring progress, we still understand event tracking evaluations poorly…”
Section: Event Tracking's Manual Evaluations (mentioning)
confidence: 99%
“…Manual efforts reassert themselves, this time to deter the prospects of implementing someone else's solution (McMinn et al., 2013). No one knows the nuanced design of an algorithm better than its author, and innocuous modifications can drastically affect results (Weiler et al., 2016; Raff, 2019). Thus, the simplest of techniques become pretend-baselines to establish a false state-of-the-art…”
Section: Issues of Reproducibility (mentioning)
confidence: 99%
“…This choice is just one example that demonstrates the challenging and complex problem of evaluating event detection techniques automatically. As a consequence, proposals [32], [34], [35] for semi-automatic evaluation have been presented. In the following, we present the two major issues of reproducibility of event detection techniques: (i) the difficulty of reproducing the implementations of the techniques themselves and (ii) the challenge of reproducing the evaluation or experiments of previous research works…”
Section: Issues of Reproducibility (mentioning)
confidence: 99%
“…However, even if the source code is available, the diversity of implementations makes it very difficult to recreate the original environment and to exactly reproduce the research results obtained in the original work. In this context, Weiler et al. [32] show that minor modifications in the different phases or parameters of event detection techniques can strongly impact the stability of their results. Another challenge is that experiments and evaluations are done in very different ways…”
Section: Introduction (mentioning)
confidence: 99%
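
The parameter sensitivity described in the statement above is easy to see even in the simplest pipeline. Below is a minimal, hypothetical Python sketch, not the implementation studied by Weiler et al. [32]: it uses a naive frequency-ratio burst detector with invented term counts, and shows how changing a single threshold parameter by 0.1 alters the set of detected "events".

    from collections import Counter

    def detect_bursts(window_counts, history_counts, min_ratio):
        """Flag terms whose current-window frequency is at least
        `min_ratio` times their historical frequency."""
        events = []
        for term, count in window_counts.items():
            baseline = history_counts.get(term, 1)  # smooth terms unseen in history
            if count / baseline >= min_ratio:
                events.append(term)
        return sorted(events)

    # Invented counts for illustration only.
    history = Counter({"goal": 10, "rain": 8, "election": 6})
    window = Counter({"goal": 25, "rain": 14, "election": 18, "quake": 3})

    # A threshold shift of just 0.1 silently drops "goal" from the output:
    print(detect_bursts(window, history, min_ratio=2.5))  # ['election', 'goal', 'quake']
    print(detect_bursts(window, history, min_ratio=2.6))  # ['election', 'quake']

Any effectiveness metric computed over these outputs, such as precision against a ground-truth event list, changes accordingly, which is why such parameter choices must be reported in full for evaluation results to be reproducible.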