Proceedings of the 28th ACM Conference on Hypertext and Social Media 2017
DOI: 10.1145/3078714.3078725
Discovering Typical Histories of Entities by Multi-Timeline Summarization

Cited by 6 publications (11 citation statements), published between 2019 and 2023; references 16 publications.
“…We detect the temporal expressions by using the spaCy tool. We adopt this extraction method motivated by previous work [3,8]. However, when preprocessing datasets, more refined methods for associating time with sentences could be applied (e.g., [16]).…”
Section: Discussion (mentioning)
confidence: 99%
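As a concrete illustration of that preprocessing step (not the citing paper's exact code), temporal expressions can be pulled out with spaCy's built-in named-entity recognizer; the sketch below assumes the standard en_core_web_sm model and treats DATE and TIME entities as temporal expressions.

# Minimal sketch (an assumption, not the cited work's code): detect temporal
# expressions per sentence with spaCy's NER, keeping DATE and TIME entities.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def temporal_expressions(text):
    """Return (sentence, [temporal expressions]) pairs for a document."""
    doc = nlp(text)
    pairs = []
    for sent in doc.sents:
        times = [ent.text for ent in sent.ents if ent.label_ in {"DATE", "TIME"}]
        if times:
            pairs.append((sent.text, times))
    return pairs

print(temporal_expressions(
    "The city was founded in 1867. It hosted the Olympic Games in July 1972."
))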
“…To analyze how the number of categories k affects the quality of identified exemplars, we investigate how Ratio and AveImp vary per-k. Figure 3b shows the results. Recall that the number of exemplars is equal to the number of categories and k is set in the range [2,8].…”
Section: Effects of the Number of Latent Categories (mentioning)
confidence: 99%
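The Ratio and AveImp measures are specific to that citing paper; as a generic stand-in, the sweep over k can be sketched with KMeans clustering of entity-history feature vectors, taking the point nearest each centroid as the category exemplar and using silhouette score as a placeholder quality measure (both are assumptions, not the paper's actual setup).

# Hypothetical sketch of sweeping the number of latent categories k in [2, 8].
# KMeans + silhouette score stand in for the citing paper's model and its
# Ratio / AveImp metrics, which are not reproduced here.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, pairwise_distances_argmin

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))  # placeholder entity-history feature vectors

for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # One exemplar per category: the point closest to each cluster centre.
    exemplars = pairwise_distances_argmin(km.cluster_centers_, X)
    print(f"k={k}  exemplar indices={exemplars.tolist()}  "
          f"silhouette={silhouette_score(X, km.labels_):.3f}")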
“…Methods relying on event detection such as Ge et al. (2015); Minard et al. (2015); Bedi et al. (2017) often evaluate their system in terms of Precision, Recall and F1-measure. However, most projects lack datasets and must then resort to human evaluation, as in Duan et al. (2017); Swan and Allan (2000); Tran et al. (2015a).…”
Section: Timeline Summarization (mentioning)
confidence: 99%
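For readers unfamiliar with that evaluation setup, Precision, Recall, and F1 here amount to comparing the set of system-detected events against a gold-standard set; a small self-contained illustration follows (representing events as plain strings is purely hypothetical).

# Illustrative Precision / Recall / F1 over detected vs. gold event sets.
# Representing events as plain strings is a simplification for this sketch.
def precision_recall_f1(predicted, gold):
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1

print(precision_recall_f1(
    predicted=["1867: city founded", "1972: hosted the Olympics"],
    gold=["1867: city founded", "1945: rebuilt after the war"],
))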
“…TLS is a subfield of the Multi-Document Summarization (MDS) task and has been studied extensively in the NLP community: for instance, Swan and Allan (2000) generate clusters of Named Entities and noun chunks that best describe major news topics covered in a subset of the TDT-2 dataset (Allan et al., 1998), which contains text transcripts of broadcast news spanning from January 1, 1998, to June 30, 1998, in English; Nguyen et al. (2014) generate timelines by detecting events that are the most relevant to a user query. They apply their methodology to a dataset of newswire texts in English covering the 2004-2011 period provided by the AFP French news agency; Duan et al. (2017) extend these methods to summarize the common history of similar entities such as Japanese cities or French scientists. Examples of timelines generated by such methods are shown in Figure 1.…”
Section: Introduction: Exploring Archives (mentioning)
confidence: 99%
“…events [1], the characteristics of life events that are highly transformative and iconic [2], and the systematic differences in life structure across groups [3], etc. In particular, Wikipedia has been used extensively for the task of disambiguation of named entities [34], [35], the recognition of biographical sentences [36], the identification of latent biographical structure [8], and the summarization of typical life trajectories and events [37], [38], etc.…”
Section: Related Work (mentioning)
confidence: 99%