2019 International Conference on Document Analysis and Recognition Workshops (ICDARW)
DOI: 10.1109/icdarw.2019.40090
Word Embeddings in Low Resource Gujarati Language

Cited by 3 publications (2 citation statements); references 3 publications.
“…Word embeddings are distributed representations of text in an n-dimensional space. Every word in the vocabulary [12] is associated with an n-dimensional vector of real numbers. The main goal is to have the vectors reduce the dimensionality of the text while still capturing and maintaining the meaning of, and relationships between, words.…”
Section: Introduction (mentioning)
confidence: 99%
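The idea in the excerpt above — each word mapped to an n-dimensional real vector whose geometry encodes relatedness — can be sketched with toy vectors. The words, dimensions, and numeric values below are purely illustrative, not trained weights from the paper:

```python
import math

# Toy 4-dimensional embeddings (illustrative values, not trained weights).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Related words lie closer together in the vector space than unrelated ones.
print(cosine(embeddings["king"], embeddings["queen"]) >
      cosine(embeddings["king"], embeddings["apple"]))  # True
```

This is how the "relationships between words" claim is usually operationalised: semantic similarity becomes angular closeness between vectors.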
“…To support scenarios where using huge amounts of data and computational resources is not feasible, it is important to continue developing our understanding of context-independent word embeddings, such as word2vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). These algorithms continue to be used in a wide variety of situations, including the computational humanities (Abdulrahim, 2019; Hellrich et al., 2019) and languages where only small corpora are available (Joshi et al., 2019).…”
Section: Introduction (mentioning)
confidence: 99%
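The context-independent embeddings the citing authors mention (word2vec, GloVe) are both learned from word–context co-occurrence statistics. A heavily simplified, count-based analogue of that signal can be built from a tiny corpus — appropriate to the low-resource setting the paper targets. The corpus and window size below are illustrative; real word2vec/GloVe additionally learn dense low-dimensional vectors rather than using raw counts:

```python
import math
from collections import Counter, defaultdict

# A tiny corpus standing in for a low-resource setting (illustrative text).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

# Count co-occurrences within a symmetric window: the raw signal that
# word2vec and GloVe both derive their dense vectors from.
window = 2
cooc = defaultdict(Counter)
for sentence in corpus:
    for i, word in enumerate(sentence):
        for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
            if i != j:
                cooc[word][sentence[j]] += 1

vocab = sorted(cooc)

def vector(word):
    """Count vector over the vocabulary (one dimension per context word)."""
    return [cooc[word][w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

# "cat" and "dog" share contexts (the, sat, chased), so their vectors
# point in more similar directions than "cat" and "rug" do.
print(cosine(vector("cat"), vector("dog")) >
      cosine(vector("cat"), vector("rug")))  # True
```

The same pipeline scales down gracefully: with only a small Gujarati corpus, count-based or shallow predictive embeddings remain trainable where large contextual models are not.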