2016
DOI: 10.1007/978-3-319-32025-0_29
Benchmarking Semantic Capabilities of Analogy Querying Algorithms

Cited by 1 publication (1 citation statement)
References 15 publications
“…In addition to those already cited, numerous other recent papers have evaluated word embeddings by benchmarking on analogy questions (Mikolov et al., 2013b; Garten et al., 2015; Lofi et al., 2016). There is some consensus regarding performance across question types: systems do well on questions of inflectional morphology (especially so for English (Nicolai et al., 2015)), but far less reliably so for various non-geographical semantic questions, although some gains in performance are possible by adjusting the embedding algorithms used or their hyperparameters (Levy et al., 2015), or by training further on subproblems (Drozd et al., 2016).…”

Section: Accounting For Analogy Performance
Confidence: 99%
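The analogy benchmarks discussed in the citation above typically score embeddings with the vector-offset method of Mikolov et al. (2013): an analogy "a is to b as c is to ?" is answered by ranking vocabulary words by cosine similarity to b − a + c. A minimal sketch of that scoring rule follows; the tiny 3-dimensional embeddings and the function name are illustrative assumptions, not trained vectors or any benchmark's actual code.

```python
import numpy as np

# Toy embeddings for illustration only (hypothetical values, not trained).
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
    "apple": np.array([0.1, 0.9, 0.4]),
}

def solve_analogy(a, b, c, vocab):
    """Answer 'a is to b as c is to ?' by ranking vocabulary words
    by cosine similarity to the offset vector b - a + c."""
    target = vocab[b] - vocab[a] + vocab[c]
    best, best_sim = None, -np.inf
    for word, vec in vocab.items():
        if word in (a, b, c):  # standard practice: exclude the query words
            continue
        sim = (target @ vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

print(solve_analogy("man", "king", "woman", embeddings))
```

Benchmarks of this kind count an analogy question as correct when the top-ranked word matches the expected answer; the performance differences across question types noted above (inflectional morphology vs. semantic questions) are measured with exactly this kind of accuracy.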