2020
DOI: 10.1109/access.2020.2969983

A Synchronized Word Representation Method With Dual Perceptual Information

Abstract: The information used for human natural language comprehension is usually perceptual information, such as text, sound, and images. In recent years, language models that learn semantics from a single source of perceptual information (text) have gradually developed into multimodal language models that learn semantics from multiple sources of perceptual information. Sound is a form of perceptual information beyond text whose effectiveness has been demonstrated by many related works. However, there is still a need for further research on…

Cited by 3 publications (3 citation statements)
References 32 publications
“…The distributed representation method, also known as word embedding, feeds the words of a text into a pre-trained model and converts them into continuous dense vectors. This method represents text using the distributional hypothesis: words that appear in similar contexts are assumed to have similar semantics, so such words lie close together in the semantic space (Zhu et al., 2020). The Word2Vec and Paragraph Vector models of word distributed representation are introduced below, as shown in Figure 1.…”
Section: Text Representation Methods In Text Mining
confidence: 99%
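A minimal sketch of this distributional idea, using gensim's Word2Vec (gensim 4.x API assumed; the toy corpus and hyperparameters are illustrative, not taken from the cited work):

```python
# Minimal sketch of the distributional hypothesis with gensim's Word2Vec.
# The toy corpus and hyperparameters are illustrative only.
from gensim.models import Word2Vec

corpus = [
    ["the", "cat", "sat", "on", "the", "mat"],
    ["the", "dog", "sat", "on", "the", "rug"],
    ["a", "cat", "chased", "a", "dog"],
]

# sg=1 selects the skip-gram variant; vector_size is the embedding dimension
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1)

vec = model.wv["cat"]                  # continuous dense vector for "cat"
print(model.wv.most_similar("cat"))    # words sharing contexts rank highest
```

Because "cat" and "dog" occur in similar contexts here, their vectors end up relatively close in the learned semantic space, which is exactly the assumption the statement describes.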
“…It trains the written embedding on top of the phonetic embedding, and the final word representation fuses the writing and phonetic embeddings. W. Zhu et al. [63] use a synchronized approach that adopts an attention model to exploit both textual and phonetic perceptual information in unsupervised learning tasks. Of the two types of models discussed in this section, MSP belongs to the separate-training models.…”
Section: B Separate Training Models
confidence: 99%
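The attention-based fusion of the two perceptual channels can be sketched roughly as below. This is a generic sketch under assumed names (DualPerceptualFusion, score) and dimensions, not the exact architecture of [63]:

```python
import torch
import torch.nn as nn

class DualPerceptualFusion(nn.Module):
    """Hypothetical attention fusion of a text view and a phonetic view
    of the same word; class name and dimensions are illustrative."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scores each perceptual view

    def forward(self, text_emb: torch.Tensor, phon_emb: torch.Tensor) -> torch.Tensor:
        views = torch.stack([text_emb, phon_emb], dim=1)   # (batch, 2, dim)
        weights = torch.softmax(self.score(views), dim=1)  # (batch, 2, 1)
        return (weights * views).sum(dim=1)                # (batch, dim)

fuse = DualPerceptualFusion(dim=50)
fused = fuse(torch.randn(4, 50), torch.randn(4, 50))  # fused shape: (4, 50)
```

The attention weights let the model decide, per word, how much the textual versus the phonetic channel should contribute, rather than fixing the mixture in advance.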
“…DCWE [58] is an enhanced double-carrier word representation model built from phonetics and writing; it trains the written representation on top of the phonetic representation, and its final word representation fuses the text and phonetic embeddings. DPWR [63] is trained in a synchronized way, adopting an attention model to exploit both linguistic and phonetic information in unsupervised learning tasks. SynGCN [57] incorporates syntactic and semantic information into word embeddings by using graph convolutional networks.…”
Section: A Baseline Algorithms
confidence: 99%
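For the SynGCN baseline, the core graph-convolution update, in which a word's vector is refined from its syntactic neighbours, can be sketched as a generic GCN layer (a minimal formulation of the general technique, not SynGCN's exact model):

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """Generic graph-convolution layer: each word's vector is updated
    from its neighbours in a (normalized) dependency graph."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, H: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        # H: (n, in_dim) word features; A: (n, n) normalized adjacency matrix
        return torch.relu(self.linear(A @ H))

n, d = 5, 16
A = torch.eye(n)            # placeholder graph: self-loops only
H = torch.randn(n, d)
out = GCNLayer(d, 8)(H, A)  # updated word features, shape (5, 8)
```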