2020 6th International Conference on Web Research (ICWR)
DOI: 10.1109/icwr49608.2020.9122275
EmHash: Hashtag Recommendation using Neural Network based on BERT Embedding

Cited by 18 publications (10 citation statements). References 34 publications.
“…On the Internet, there is an abundance of content regarding the implementation of neural networks for text translation and summarization similar to this work [Kaviani and Rahmani 2020; Li et al. 2016, 2019; Yang et al. 2019].…”

Section: Related Work

confidence: 80%
“…Over the last 3 years, research on hashtag recommendations has become increasingly common [Li et al. 2016, 2019; Yang et al. 2019; Kaviani and Rahmani 2020]. Our approach differs from other works primarily because we used a corpus of E-commerce reviews instead of content from social media, we used classical and alternative metrics to quantify the results, and we used a novel approach to generate input for the BERT model.…”

Section: Related Work

confidence: 99%
“…For a query tweet, they calculated the distances between it and the centroids of the tweet clusters, extracted candidate hashtags from the closest cluster, and put forward recommendations based on the hashtags' popularity values. With the increase in popularity of BERT, it has been used by Kaviani et al. [40] in generating the embedding of tweets.…”

Section: Tweet Similarity Based Methods

confidence: 99%
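The centroid-based retrieval described in this excerpt can be sketched as follows. This is a minimal illustration of the idea, not the cited paper's implementation; all function and variable names are hypothetical.

```python
import numpy as np

def recommend_hashtags(query_emb, centroids, cluster_hashtags, hashtag_popularity, k=5):
    """Recommend k hashtags for a query tweet embedding.

    centroids: (n_clusters, dim) array of tweet-cluster centroids.
    cluster_hashtags: list mapping cluster index -> set of candidate hashtags.
    hashtag_popularity: dict mapping hashtag -> usage count.
    """
    # Distance from the query tweet to every cluster centroid.
    dists = np.linalg.norm(centroids - query_emb, axis=1)
    closest = int(np.argmin(dists))
    # Candidates come from the closest cluster, ranked by popularity.
    candidates = cluster_hashtags[closest]
    ranked = sorted(candidates, key=lambda h: hashtag_popularity.get(h, 0), reverse=True)
    return ranked[:k]
```

With BERT-derived tweet embeddings (as in the cited work), `query_emb` and the centroids would come from the same encoder, so Euclidean distance compares tweets in a shared semantic space.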
“…This model is trained with cross-entropy to maximize the likelihood of ground truth tags, and produces top-k tags given the [CLS] representation. We compare two models, BR-EF and BR-LFT, for our early fusion (EF) and late fusion (LF) taken by previous models (Weston et al., 2014; Gong and Zhang, 2016; Wu et al., 2018a; Zhang et al., 2019; Yang et al., 2020a; Kaviani and Rahmani, 2020), respectively. BR-EF takes the input C (Eq.…”

Section: Baselines

confidence: 99%
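A classification head of the kind this excerpt describes — a softmax over the tag vocabulary fed by the [CLS] vector, trained with cross-entropy and queried for top-k tags — can be sketched in a few lines. This is a hedged illustration under assumed shapes and names, not the cited baseline's code.

```python
import numpy as np

def topk_tags_from_cls(cls_vec, W, b, tag_vocab, k=3):
    # Linear classification head over the [CLS] representation,
    # followed by a softmax over the tag vocabulary.
    logits = cls_vec @ W + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    top = np.argsort(-probs)[:k]
    return [tag_vocab[i] for i in top]

def cross_entropy(cls_vec, W, b, gold_idx):
    # Negative log-likelihood of the ground-truth tag (the training loss).
    logits = cls_vec @ W + b
    logz = np.log(np.exp(logits - logits.max()).sum()) + logits.max()
    return float(logz - logits[gold_idx])
```

Training would minimize `cross_entropy` over labeled posts; at inference, `topk_tags_from_cls` returns the k highest-probability tags.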
“…A conventional approach to recommendation is a ranking scheme that returns top-k relevant target items for a given query or a user profile. As such, recommending tags (e.g., hashtags and labeled tags), which is the main focus of this paper, has been treated as a ranking problem (Weston et al., 2014; Gong and Zhang, 2016; Wu et al., 2018a; Wang et al., 2019; Zhang et al., 2019; Yang et al., 2020a; Kaviani and Rahmani, 2020). These approaches, however, neglect inter-dependency among the tags (see Figure 1) in a way conventional information retrieval techniques do for ranking.…”

Section: Introduction

confidence: 99%