Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022) 2022
DOI: 10.18653/v1/2022.semeval-1.1
Semeval-2022 Task 1: CODWOE – Comparing Dictionaries and Word Embeddings

Abstract: Word embeddings have advanced the state of the art in NLP across numerous tasks. Understanding the contents of dense neural representations is of utmost interest to the computational semantics community. We propose to focus on relating these opaque word vectors with human-readable definitions, as found in dictionaries. This problem naturally divides into two subtasks: converting definitions into embeddings, and converting embeddings into definitions. This task was conducted in a multilingual setting, using com…

Cited by 14 publications (22 citation statements). References 4 publications (8 reference statements).
“…Reverse dictionary results are evaluated using three metrics: mean squared error (MSE) between the reconstructed and reference embeddings, cosine similarity (COS) between the reconstructed embedding and the reference embedding, and the cosine-based ranking (RANK) between the reconstructed and reference embeddings, measuring the number of other test items having a higher cosine with the reconstructed embedding than with the reference embedding (Mickus et al., 2022).…”
Section: Results
confidence: 99%
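The three metrics quoted above are straightforward to compute. Below is a minimal sketch in Python/NumPy, assuming embeddings are 1-D arrays and the test set's reference embeddings are stacked in a 2-D array; the function names and the exact tie-handling in RANK are our illustrative choices, not the official CODWOE scoring code.

import numpy as np

def mse(pred: np.ndarray, ref: np.ndarray) -> float:
    # Mean squared error between a reconstructed and a reference embedding.
    return float(np.mean((pred - ref) ** 2))

def cos(pred: np.ndarray, ref: np.ndarray) -> float:
    # Cosine similarity between a reconstructed and a reference embedding.
    return float(pred @ ref / (np.linalg.norm(pred) * np.linalg.norm(ref)))

def rank(pred: np.ndarray, ref: np.ndarray, test_refs: np.ndarray) -> int:
    # Number of other test items whose reference embedding has a higher
    # cosine with the reconstruction than the true reference does.
    true_sim = cos(pred, ref)
    other_sims = np.array([cos(pred, other) for other in test_refs])
    return int(np.sum(other_sims > true_sim))

A lower MSE and RANK and a higher COS indicate a better reconstruction; RANK in particular is robust to the absolute scale of the embedding space, since it only compares cosines across test items.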
“…Our paper is devoted to comparing the performance of different neural network structures and multilingual and multi-task tricks, and to examining whether language-agnostic or bidirectional structures help. The competition (Mickus et al., 2022) has significant potential to contribute to accelerating pretraining, developing models for low-resource languages, and exploiting common sense. Furthermore, the task is of high importance for explainable AI and natural language processing, since it models a direct mapping from human-readable data to machine-readable data.…”
Section: Introduction
confidence: 99%
“…In this task, system performance is evaluated through three indicators (Mickus et al., 2022), the first being the mean squared error (MSE) between the submission's reconstructed embedding and the reference embedding.…”
Section: Evaluation Metrics
confidence: 99%
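For concreteness, that first indicator can be written as follows, assuming d-dimensional embeddings, with \hat{v} the submission's reconstruction and v the reference (the notation here is ours, not the task paper's):

\mathrm{MSE}(\hat{v}, v) = \frac{1}{d} \sum_{i=1}^{d} \left(\hat{v}_i - v_i\right)^2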
“…The CODWOE (COmparing Dictionaries and WOrd Embeddings) task at SemEval-2022 (Mickus et al., 2022) encouraged participants to analyze the relation between two types of semantic descriptions, word embeddings and dictionary glosses, by proposing two subtasks: Reverse Dictionary (RD) (Hill et al., 2016), in which participants must generate vectors from glosses, and Definition Modeling (DM) (Noraset et al., 2017), in which participants must generate glosses from vectors. These subtasks aim to be useful for explainable Artificial Intelligence (AI) by bridging human-readable and machine-readable data.…”
Section: Introduction
confidence: 99%
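To make the two subtasks concrete, here is a deliberately naive toy illustration in Python: a bag-of-words encoder stands in for an RD system (gloss to vector) and a nearest-neighbour lookup stands in for a DM system (vector to gloss). The tiny vocabulary and glosses are invented for illustration; actual participant systems were neural encoders and generators, not lookups.

import numpy as np

vocab = {"small": 0, "feline": 1, "animal": 2, "large": 3, "canine": 4}
dim = len(vocab)

def encode_gloss(gloss: str) -> np.ndarray:
    # Reverse Dictionary direction: turn a definition into a vector
    # (here, a simple bag-of-words count over the toy vocabulary).
    vec = np.zeros(dim)
    for tok in gloss.lower().split():
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    return vec

# Tiny reference "dictionary" pairing glosses with their encoded vectors.
glosses = ["small feline animal", "large canine animal"]
embeddings = np.stack([encode_gloss(g) for g in glosses])

def decode_embedding(emb: np.ndarray) -> str:
    # Definition Modeling direction: map a vector back to the closest
    # known gloss by cosine similarity (a lookup, not true generation).
    sims = embeddings @ emb / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(emb) + 1e-9
    )
    return glosses[int(np.argmax(sims))]

print(decode_embedding(encode_gloss("small feline animal")))
# -> "small feline animal"

The point of the sketch is the shape of the problem: RD maps token sequences to fixed-size vectors, DM maps vectors back to token sequences, and the two directions can in principle share representations, which is what makes the task pairing informative for explainability.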