2020
DOI: 10.1609/aaai.v34i05.6235

Understanding the Semantic Content of Sparse Word Embeddings Using a Commonsense Knowledge Base

Abstract: Word embeddings have developed into a major NLP tool with broad applicability. Understanding the semantic content of word embeddings remains an important challenge for further applications. One aspect of this issue is to explore the interpretability of word embeddings. Sparse word embeddings have been proposed as models with improved interpretability. Continuing this line of research, we investigate the extent to which human-interpretable semantic concepts emerge along the bases of sparse word representations…
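The abstract refers to inspecting the bases of sparse word representations for human-interpretable concepts. As a rough illustration only, and not the authors' own method or code from the paper, the sketch below shows one common way such sparse representations are obtained: dense vectors are re-encoded over a learned dictionary with an L1 sparsity penalty, and each basis is then examined through the words that load most strongly on it. The embedding matrix, vocabulary, and all parameter values here are hypothetical placeholders.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# turn dense word vectors into sparse codes over a learned dictionary,
# then inspect one basis via its highest-loading words.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
dense_embeddings = rng.normal(size=(500, 100))   # stand-in for e.g. GloVe vectors
vocabulary = [f"word_{i}" for i in range(500)]   # hypothetical vocabulary

# Overcomplete dictionary; alpha controls how sparse the resulting codes are.
dict_learner = DictionaryLearning(
    n_components=300,
    alpha=1.0,
    transform_algorithm="lasso_lars",
    max_iter=50,
    random_state=0,
)
sparse_codes = dict_learner.fit_transform(dense_embeddings)  # shape (500, 300)

# Words with the largest coefficients along a basis are the ones a human
# would check for a shared, nameable concept.
basis = 0
top_words = np.argsort(-sparse_codes[:, basis])[:10]
print([vocabulary[i] for i in top_words])
```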

Cited by 5 publications (2 citation statements). References: 26 publications.
“…Another crucial aspect that we carefully investigate in this paper is the integration of sparse contextualized word representations into cross-lingual zero-shot WSD. Sparse word representations have a demonstrated ability to align with word senses (Balogh et al., 2020; Yun et al., 2021). While the benefits of employing sparsity have been shown for WSD in English (Berend, 2020a), its viability in the cross-lingual setting has not yet been verified.…”
Section: Introduction
Mentioning confidence: 99%
“…Sparse representations convey the encoded semantic information in a more explicit manner, which facilitates the interpretability of such representations (Murphy et al., 2012; Balogh et al., 2020). Feature norming studies also illustrated the sparse nature of human feature descriptions, i.e.…”
Section: Introduction
Mentioning confidence: 99%