2021
DOI: 10.1016/j.ipm.2020.102455
A neural topic model with word vectors and entity vectors for short texts

Cited by 41 publications (15 citation statements)
References 18 publications
“…Word vectors represent individual words, so word vectors can be combined to form document vectors. In general, the word vectors are trained first and then summed to express the text as a single vector [21]. The word2vec model using skip-gram is used for the text representation.…”
Section: Construction of Forecasting Model Using Financial Data
confidence: 99%
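To make the summation scheme in the statement above concrete, here is a minimal sketch using gensim's skip-gram word2vec: the word vectors are trained first, and a document vector is then formed by summing the vectors of the document's tokens. The toy corpus, vector size, and window are illustrative assumptions, not details taken from the cited work.

import numpy as np
from gensim.models import Word2Vec

# Tiny illustrative corpus of pre-tokenised texts (assumption, not the cited data)
corpus = [
    ["stock", "prices", "rose", "after", "earnings"],
    ["market", "volatility", "increased", "last", "quarter"],
]

# sg=1 selects the skip-gram architecture
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

def document_vector(tokens, w2v):
    """Sum the word vectors of in-vocabulary tokens to form a document vector."""
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv]
    return np.sum(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

doc_vec = document_vector(corpus[0], model)
print(doc_vec.shape)  # (100,)

In practice the summed (or averaged) vector serves as the fixed-length text representation fed into the downstream forecasting model.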
“…The authors test on the Twenty Newsgroups corpus and a NIPS data set. Also in 2021, Zhao et al. introduced another neural topic model, VAETM, which incorporates word embedding vectors and entity vectors into the model [96]. Word vectors have been incorporated into both neural and unsupervised topic models.…”
Section: Meta-data Augmented Supervised and Reinforcement Learning Ba...
confidence: 99%
“…In [46], the authors aggregated short texts into long documents and incorporated document embeddings to provide word co-occurrence information. In [47], a variational autoencoder topic model (VAETM) and a supervised version of it (SVAETM) were proposed, combining embedded representations of words and entities obtained from an external corpus. To enhance contextual information, the authors in [48] proposed a graph neural network as the encoder of the NTM, which accepts a bi-term graph of the words as input and produces the topic distribution of the corpus as output.…”
Section: Neural-Topic Models and Related Work
confidence: 99%
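As a rough illustration of the VAE-style neural topic models referenced in these statements, the following is a minimal sketch in PyTorch: the encoder combines an averaged word-embedding view and an entity-embedding view of a document, infers latent topic proportions via the reparameterisation trick, and decodes them back to a word distribution. The layer sizes, the concatenation scheme, and the softmax decoder are illustrative assumptions and not the exact VAETM/SVAETM architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingVAETopicModel(nn.Module):
    def __init__(self, word_emb_dim, entity_emb_dim, num_topics, vocab_size, hidden=200):
        super().__init__()
        # Encoder: document represented by averaged word and entity embeddings
        self.encoder = nn.Sequential(
            nn.Linear(word_emb_dim + entity_emb_dim, hidden), nn.Softplus()
        )
        self.mu = nn.Linear(hidden, num_topics)
        self.logvar = nn.Linear(hidden, num_topics)
        # Decoder: topic proportions -> distribution over the vocabulary
        self.decoder = nn.Linear(num_topics, vocab_size, bias=False)

    def forward(self, word_emb_doc, entity_emb_doc):
        h = self.encoder(torch.cat([word_emb_doc, entity_emb_doc], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterisation trick for the latent topic proportions
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        theta = F.softmax(z, dim=-1)            # per-document topic mixture
        word_logits = self.decoder(theta)       # reconstruct the bag-of-words
        return F.log_softmax(word_logits, dim=-1), mu, logvar

# Usage: word_emb_doc / entity_emb_doc would be, e.g., averaged pretrained
# word2vec and knowledge-graph entity vectors for each short text.
model = EmbeddingVAETopicModel(word_emb_dim=300, entity_emb_dim=100,
                               num_topics=50, vocab_size=2000)
log_probs, mu, logvar = model(torch.randn(8, 300), torch.randn(8, 100))

Training would add the usual VAE objective (reconstruction log-likelihood plus a KL term on mu and logvar); the supervised variant described above would additionally attach a label predictor to the topic proportions.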