2019
DOI: 10.1109/access.2019.2909578

Sense-Based Topic Word Embedding Model for Item Recommendation

Abstract: As a useful way to help users filter information and save time, item recommendation aims to suggest new items to users who are likely to be interested. Short texts are the most common format associated with items in online social networks, yet they have largely been disregarded by previous research on item recommendation. The sparse features and insufficient information in short texts make feature extraction difficult. To address the problems of short text feature extraction and item recommendation, …

Cited by 9 publications (6 citation statements) | References 37 publications
“…Pre-trained embedding models such as Word2Vec and BERT perform well on datasets of Urdu tweets, demonstrating their effectiveness in classifying the text into useful topics (Nasim 2020). On Chinese and English language datasets, a topic-modeling-based item recommendation approach using sense-based embeddings obtains the smallest RMSE of 0.0697 (Xiao et al 2019). Software vulnerability identification from a vast corpus using domain-specific word embeddings achieves 82% accuracy in identifying admitted coding errors (Flisar and Podgorelec 2019).…”
Section: Review on Text Analytics, Word Embedding Application and Deep...
“…Zhu et al. (2020a): a multimodal word representation model to capture syntactic and phonetic information; datasets: ESSLI, WordSim-353, WS-240, WS-296, SemEval-2012, IMDB, and Yelp reviews; method: multimodal model with Word2Vec and GloVe; result: the multimodal word representation model achieves an accuracy of 78.23%. 6. Xiao et al. (2019): recommendation based on user preferences and a sense-based word embedding approach for short Chinese text messages; datasets: a Chinese-language social network dataset, an English dataset from Wikipedia, and the HowNet database; method: a time-aware probabilistic model with sense-based word embedding; result: effective recommendation from the combination of sense-based embeddings and feature selection using topic modeling. 7. Flisar and Podgorelec (2019): software flaw identification; dataset: source code comments extracted from open-source Java projects on GitHub; methods: NB and SVM with Word2Vec and DSWE; result: DSWE achieves an accuracy of 82%. 8.…”
Section: Appendix A
“…Niu et al. [26] proposed the Sememe-Encoded Word Representation Learning (SE-WRL) algorithm, which uses sememe information in HowNet [27] to capture the exact meanings of a Chinese word in specific contexts and to learn sense-specific word representations. Xiao et al. [28] improved the SE-WRL model by jointly using definition words and meaningful hyponym words to learn sense-based word embeddings. The sense embeddings and the word embeddings are learned simultaneously under the Skip-gram framework of Word2vec.…”
Section: Related Work
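The statement above describes sense embeddings and word embeddings being learned jointly under the Skip-gram framework. A toy sketch of that idea, not the authors' implementation: the vocabulary, dimensions, training pairs, and the context-attention weighting over senses are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each word has K candidate senses.
vocab = ["bank", "river", "money", "water", "loan"]
V, K, D = len(vocab), 2, 8                  # vocab size, senses per word, dim
word_emb = rng.normal(0, 0.1, (V, D))       # one embedding per word
sense_emb = rng.normal(0, 0.1, (V, K, D))   # K sense embeddings per word
ctx_emb = rng.normal(0, 0.1, (V, D))        # context (output) embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def train_pair(center, context, lr=0.1):
    """One skip-gram step: the center word's senses are attention-weighted
    by similarity to the context embedding, and the mixed sense vector plus
    the word vector predicts the context word via full softmax."""
    attn = softmax(sense_emb[center] @ ctx_emb[context])   # (K,) sense weights
    mixed = attn @ sense_emb[center]                       # (D,) mixed sense vec
    h = word_emb[center] + mixed                           # joint representation
    probs = softmax(ctx_emb @ h)                           # (V,) predicted contexts
    loss = -np.log(probs[context])
    # Gradients (treating the attention weights as constants for simplicity).
    dscores = probs.copy(); dscores[context] -= 1.0
    dh = ctx_emb.T @ dscores
    ctx_emb[:] -= lr * np.outer(dscores, h)
    word_emb[center] -= lr * dh
    sense_emb[center] -= lr * np.outer(attn, dh)
    return loss

pairs = [(0, 2), (0, 4), (1, 3), (0, 1)]   # toy (center, context) pairs
losses = [sum(train_pair(c, x) for c, x in pairs) for _ in range(50)]
# The summed loss over the pairs shrinks as both embedding tables are updated.
```

A real implementation would use negative sampling instead of the full softmax and a sense inventory such as HowNet to fix K per word; this sketch only shows the joint update of word and sense parameters from one shared Skip-gram objective.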
“…As an example, we select the Chinese Open Word-Net [41] as the sense inventory and learn Chinese word single-meaning embeddings from the SogouCS corpus. We compare our algorithm with the CBOW model, GloVe model, FastText model, SE-WRL [26] and the improved SE-WRL [28]. There are different strategies to utilize sememe information in the SE-WRL, and we choose the Sememe Attention over Target (SAT) model as a baseline.…”
Section: WSME Models Other Than English
“…To improve topic detection, much work has progressed from the original vector space model to the LDA model [34], [35], and has even extended semantic knowledge beyond the video itself to the text information within the video [36], [37]. However, the sparsity of text data caused by noisy semantic information, and the poor robustness of NDK-based features caused by visual similarity detection errors, have become important bottlenecks for effective event mining.…”
Section: Model of Multiple Correspondence Analysis (MCA)