Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management (CIKM 2014)
DOI: 10.1145/2661829.2661918

Using Crowdsourcing to Investigate Perception of Narrative Similarity

Abstract: For many applications, measuring the similarity between documents is essential. However, little is known about how users perceive similarity between documents. This paper presents the first large-scale empirical study that investigates perception of narrative similarity using crowdsourcing. As a dataset we use a large collection of Dutch folk narratives. We study the perception of narrative similarity by both experts and non-experts by analyzing their similarity ratings and motivations for these ratings. While …

Cited by 13 publications (7 citation statements) · References 12 publications

“…To do this, we recruited third-party raters from MTurk to compare the ads and the corresponding exemplars. Past studies also relied on raters on microtask platforms to evaluate the similarity between objects (Dow et al. 2011, Rahmanian and Davis 2014), and such raters could perform reliable similarity evaluations despite their (possible) lack of expertise with the objects being compared (Nguyen et al. 2014). Following Dow et al. (2011), we asked MTurk raters to compare the similarity between an ad and an assigned exemplar on a seven-point scale.…”
Section: Measures: Exemplar Adoption (By Solvers) · Citation type: mentioning
Confidence: 99%
“…Additionally, their method does not generalize well to other story types (or even movie plots) since they include specific movie parameters, like characters' names and genders, as the basis of their solution, which does not apply to our case since we do not attempt to match stories based on these surface-level indicators. The closest work to ours was done by Nguyen et al. (2014), who proposed a set of crowdsourcing tasks to analyze perception of similarity in folk narratives. They tried various approaches to retrieve these narratives.…”
Section: Related Work · Citation type: mentioning
Confidence: 99%
“…There have been some attempts to match stories (Nguyen et al., 2014; Chaturvedi et al., 2018) and to understand human judgment about matched stories (Nguyen et al., 2014; Reagan et al., 2016). Nevertheless, these efforts have been mostly developed in supervised scenarios that already have a set of matched stories in hand, and they are mostly focused on non-personal narratives (e.g., fictional).…”
Section: Introduction · Citation type: mentioning
Confidence: 99%
“…Also noteworthy in this context is the Aarne-Thompson classification system (Aarne and Thompson, 1961), which has been extensively used in the analysis of folk tales to organize types of stories based on an index of motifs. Our work is most closely related to that of Nguyen et al. (2014), who attempt to understand the various dimensions that experts and non-experts consider while judging narrative similarity.…”
Section: Related Work · Citation type: mentioning
Confidence: 99%