2020 8th International Conference on Information and Communication Technology (ICoICT)
DOI: 10.1109/icoict49345.2020.9166249
Academic Expert Finding in Indonesia using Word Embedding and Document Embedding: A Case Study of Fasilkom UI

Cited by 10 publications (4 citation statements)
References 16 publications
“…In addition, since word embedding has a dense representation, it also solves the sparsity problem of the classic representation, which makes computation more effective and efficient. Therefore, most current research in text processing uses word embedding as the word representation [35]-[38]. There are three variants of word embedding explored in this work: Word2Vec, FastText, and BERT.…”
Section: Word Embedding (mentioning, confidence: 99%)
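The statement above contrasts dense word embeddings with sparse classic representations. The following is a minimal sketch of how such dense vectors can be trained with gensim's Word2Vec; the toy corpus and hyperparameters are illustrative assumptions, not the data or settings used in the cited work.

```python
# Sketch: training dense word vectors with gensim's Word2Vec (gensim 4.x API).
# The corpus and parameters below are placeholders, not those of the cited study.
from gensim.models import Word2Vec

corpus = [
    ["expert", "finding", "uses", "word", "embedding"],
    ["fasttext", "extends", "word2vec", "with", "subword", "information"],
    ["bert", "produces", "contextual", "word", "representations"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,   # 100 dense dimensions instead of a vocabulary-sized sparse vector
    window=5,
    min_count=1,
    sg=1,              # skip-gram variant
)

vec = model.wv["embedding"]                      # dense 100-dimensional vector
print(vec.shape)                                 # (100,)
print(model.wv.most_similar("word2vec", topn=2)) # nearest neighbours in embedding space
```

Because every word maps to a short dense vector, similarity computations stay cheap regardless of vocabulary size, which is the efficiency point made in the quoted passage.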
“…These representations were evaluated for their quality in a word similarity task. The Word2Vec model has demonstrated its capability to address various tasks, including text summarization [26,27,28], ranking for academic expert finding [29,30], and text classification [31,32,33]. Several models similar to Word2Vec were subsequently introduced, including GloVe (Global Vectors for Word Representation) [34] and FastText [35].…”
Section: A. Text Embedding (mentioning, confidence: 99%)
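One of the tasks named above, ranking for academic expert finding, can be illustrated with a short sketch: each expert is represented by the average of their document vectors and ranked by cosine similarity to a query vector. The embeddings here are random placeholders standing in for vectors from a trained Word2Vec, FastText, or BERT model; the expert names and dimensions are assumptions for illustration only.

```python
# Sketch: embedding-based ranking for expert finding via cosine similarity.
# All vectors below are random placeholders, not real model output.
import numpy as np

rng = np.random.default_rng(0)
dim = 100

# Hypothetical per-expert document embeddings (expert -> matrix of doc vectors).
expert_docs = {
    "expert_a": rng.normal(size=(3, dim)),
    "expert_b": rng.normal(size=(5, dim)),
}
query_vec = rng.normal(size=dim)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Score each expert by the similarity of their averaged profile to the query.
scores = {
    name: cosine(docs.mean(axis=0), query_vec)
    for name, docs in expert_docs.items()
}
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```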
“…Different retrieval methods, such as probabilistic BM25 [30], [31] and the sequential dependence model (SDM) [32], [33], can be considered for further study. The possibility of combining semantic information [25], [34], [35] or social media [36], [37] into the retrieval model to address the lexical mismatch problem is a great challenge to be explored.…”
Section: Analysis of Subject Headings Overlaps of Documents from Different Faculties (mentioning, confidence: 99%)
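For the probabilistic BM25 baseline mentioned in this statement, the following is a minimal self-contained sketch of Okapi BM25 scoring over a tokenized corpus. The parameters k1 and b use common defaults and the corpus is a toy example; this is an illustration of the scoring function, not the retrieval setup of the cited paper.

```python
# Sketch: Okapi BM25 scoring over a small tokenized corpus.
import math
from collections import Counter

corpus = [
    ["information", "retrieval", "with", "bm25"],
    ["word", "embedding", "for", "expert", "finding"],
    ["sequential", "dependence", "model", "for", "retrieval"],
]

k1, b = 1.5, 0.75                                  # common default parameters
N = len(corpus)
avgdl = sum(len(d) for d in corpus) / N            # average document length
df = Counter(term for doc in corpus for term in set(doc))  # document frequencies

def bm25_score(query, doc):
    tf = Counter(doc)
    score = 0.0
    for term in query:
        if term not in tf:
            continue
        idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
        norm = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * norm
    return score

query = ["retrieval", "model"]
ranked = sorted(range(N), key=lambda i: bm25_score(query, corpus[i]), reverse=True)
print([(i, round(bm25_score(query, corpus[i]), 3)) for i in ranked])
```

Because BM25 matches only exact terms, a query term absent from a relevant document scores zero, which is the lexical mismatch problem the quoted passage proposes to address with semantic information.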