Proceedings of the First Workshop on Scholarly Document Processing 2020
DOI: 10.18653/v1/2020.sdp-1.7
Effective distributed representations for academic expert search

Abstract: Expert search aims to find and rank experts based on a user's query. In academia, retrieving experts is an efficient way to navigate through a large amount of academic knowledge. Here, we study how different distributed representations of academic papers (i.e. embeddings) impact academic expert retrieval. We use the Microsoft Academic Graph dataset and experiment with different configurations of a document-centric voting model for retrieval. In particular, we explore the impact of the use of contextualized emb…


Cited by 9 publications (4 citation statements); References 33 publications.
“…We then derive the experts from the sets of authors of these papers using an approach where each retrieved paper contributes an exponentially weighted vote for an author, with a factor that reduces the bias towards highly prolific authors. Our experiments, described in detail in Berger et al. (2020), show that these modern Transformer-based contextualized embeddings outperform TF-IDF and LSI-based document representations on this task.…”
Section: Expert Search (mentioning)
confidence: 75%
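The statement above describes the voting step at a high level. Below is a minimal Python sketch of one way such an exponentially weighted, prolificness-damped vote could be implemented; the function name, the hyperparameters (decay, prolific_damping) and the exact damping form are illustrative assumptions, not details taken from the cited paper.

```python
import math
from collections import defaultdict

def rank_experts(retrieved_papers, paper_authors, author_paper_counts,
                 decay=0.1, prolific_damping=0.5):
    """Rank authors by exponentially weighted votes from retrieved papers.

    retrieved_papers:     paper ids ordered best match first
    paper_authors:        dict paper_id -> list of author ids
    author_paper_counts:  dict author_id -> total number of papers by that author
    decay, prolific_damping: illustrative hyperparameters (assumptions)
    """
    scores = defaultdict(float)
    for rank, paper_id in enumerate(retrieved_papers):
        vote = math.exp(-decay * rank)  # vote decays exponentially with retrieval rank
        for author in paper_authors.get(paper_id, []):
            # Damp the vote for highly prolific authors to reduce their bias.
            scores[author] += vote / (author_paper_counts[author] ** prolific_damping)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Swapping the rank-based weight for the retrieval score, or changing the damping exponent, shifts the trade-off between relevance and author prolificness; the cited work experiments with different configurations of such a document-centric voting model on Microsoft Academic Graph data.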
“…In addition to a cross-validation evaluation, the authors also compared the extracted keyphrases of papers with the ones supplied by the papers' authors. The focus of [Berger et al., 2020] was on analysing Computer Science papers for academic expert retrieval. Their method relies on Sentence-BERT [Reimers and Gurevych, 2019], which is used on the concatenation of the title and the abstract to represent the paper.…”
Section: Related Work (mentioning)
confidence: 99%
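As context for how such a paper representation can be produced, here is a small sketch using the sentence-transformers library; the checkpoint name and the query string are placeholders for illustration, not details from the cited paper.

```python
from sentence_transformers import SentenceTransformer, util

# Checkpoint chosen for illustration only; the cited work's exact model may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

papers = [
    {"title": "Effective distributed representations for academic expert search",
     "abstract": "Expert search aims to find and rank experts based on a user's query."},
]

# Represent each paper by the embedding of its concatenated title and abstract.
texts = [p["title"] + ". " + p["abstract"] for p in papers]
paper_embeddings = model.encode(texts, convert_to_tensor=True)

# Rank papers against a free-text query by cosine similarity.
query_embedding = model.encode("expert retrieval with contextualized embeddings",
                               convert_to_tensor=True)
similarities = util.cos_sim(query_embedding, paper_embeddings)
```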
“…The ability to identify similarity across documents in large scientific corpora is fundamental for many applications, including recommendation (Bhagavatula et al., 2018), exploratory or analogical search (Hope et al., 2017, 2021b; Lissandrini et al., 2019), paper-reviewer matching (Mimno and McCallum, 2007; Berger et al., 2020) and many more.…”
Section: Introduction (mentioning)
confidence: 99%