Proceedings of the 12th International Workshop on Semantic Evaluation 2018
DOI: 10.18653/v1/s18-1170

UMD at SemEval-2018 Task 10: Can Word Embeddings Capture Discriminative Attributes?

Abstract: We describe the University of Maryland's submission to SemEval-2018 Task 10, "Capturing Discriminative Attributes": given word triples (w1, w2, d), the goal is to determine whether d is a discriminating attribute belonging to w1 but not w2. Our study aims to determine whether word embeddings can address this challenging task. Our submission casts this problem as supervised binary classification using only word embedding features. Using a Gaussian SVM model trained only on validation data results in an F-…
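The abstract's approach can be illustrated with a minimal sketch: an RBF ("Gaussian") kernel SVM trained on features built from word embeddings. The toy random embeddings, the tiny synthetic triple set, and the concatenation featurization below are illustrative assumptions, not the authors' actual pipeline or data.

```python
# Sketch: binary classification of (w1, w2, d) triples with an RBF-kernel
# SVM over word-embedding features. Embeddings here are random stand-ins;
# a real system would load pretrained vectors (e.g. GloVe).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim = 50
vocab = ["apple", "banana", "red", "yellow", "car", "wheel"]
emb = {w: rng.normal(size=dim) for w in vocab}  # toy embeddings

def features(w1, w2, d):
    # One plausible featurization (an assumption): concatenate the
    # embeddings of the two concepts and the candidate attribute.
    return np.concatenate([emb[w1], emb[w2], emb[d]])

# Tiny synthetic training set: label 1 iff d discriminates w1 from w2.
triples = [
    ("apple", "banana", "red", 1),
    ("banana", "apple", "yellow", 1),
    ("apple", "banana", "yellow", 0),
    ("car", "apple", "wheel", 1),
    ("apple", "car", "wheel", 0),
    ("banana", "car", "red", 0),
]
X = np.array([features(a, b, d) for a, b, d, _ in triples])
y = np.array([label for *_, label in triples])

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)  # "Gaussian" SVM
preds = clf.predict(X)
```

In the paper's setting the classifier is trained on the task's validation split rather than a synthetic set; the sketch only shows the shape of the classification problem.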

Cited by 4 publications (3 citation statements)
References 7 publications
“… (No expl.)  0.69

(Attia et al., 2018)        Google 5-grams and Word2Vec embeddings as features for a feedforward neural network   None    0.67
(Zhou et al., 2018)         Ensemble ML model with WordNet, PMI scores, Word2Vec, and GloVe embeddings            None    0.67
(Kulmizev et al., 2018)     A combination of GloVe and Paragram embeddings                                        None    0.67
(Zhang and Carpuat, 2018)   SVM with GloVe embeddings                                                             None    0.67
(Vinayan et al., 2018)      CNN with GloVe embeddings                                                             None    0.66
(Grishin, 2018)             Similarity calculations using a combination of DSMs                                   None    0.65
—                           Word2Vec, GloVe, and FastText embeddings as features for MLP-CNN                      None    0.63
(Gamallo, 2018)             Dependency parsing and co-occurrence analysis                                         Transp. (No expl.) …”

Section: Discussion
“…With regard to interpretability and explainability, we can classify IDA approaches into three categories: frequency-based models over text-based features, heavily relying on textual features and frequency-based methods (Gamallo, 2018; González et al., 2018); ML over textual features (Dumitru et al., 2018; Sommerauer et al., 2018; King et al., 2018; Mao et al., 2018); and ML over dense vectors and textual features (Brychcín et al., 2018; Attia et al., 2018; Dumitru et al., 2018; Arroyo-Fernández et al., 2018; Speer and Lowry-Duda, 2018; Santus et al., 2018; Grishin, 2018; Zhou et al., 2018; Vinayan et al., 2018; Kulmizev et al., 2018; Zhang and Carpuat, 2018; Shiue et al., 2018). While the first category concentrates on models with higher interpretability, none of these models provide explanations.…”

Section: Related Work
“…For example, the word "buckle" is a discriminative feature in the triplet ("seat belt", "tires", "buckle") that characterizes the first concept but not the second. Researchers have formulated this property as a binary classification task and proposed machine learning and similarity-based methods to evaluate the word embeddings (Zhang and Carpuat, 2018; Dumitru et al., 2018; Grishin, 2018). However, to perform these evaluations for a domain-specific small corpus, we would need a manually curated set of discriminative (positive) and non-discriminative (negative) triples, which can be costly and time-consuming to curate.…”

Section: Introduction
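The similarity-based formulation mentioned in the citation above can be sketched in a few lines: call d discriminative for (w1, w2) when it is markedly closer to w1 than to w2 in embedding space. The 3-dimensional hand-built vectors and the 0.1 threshold below are toy assumptions chosen only to make the outcome obvious; real systems use pretrained embeddings and a tuned threshold.

```python
# Sketch of a similarity-based discriminative-attribute test using
# cosine similarity. Toy hand-built vectors; not any cited system's setup.
import numpy as np

emb = {
    "seat_belt": np.array([1.0, 0.9, 0.0]),
    "tires":     np.array([0.0, 0.1, 1.0]),
    "buckle":    np.array([1.0, 1.0, 0.0]),
}

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_discriminative(w1, w2, d, threshold=0.1):
    # d discriminates w1 from w2 if it is markedly closer to w1.
    return cos(emb[d], emb[w1]) - cos(emb[d], emb[w2]) > threshold

print(is_discriminative("seat_belt", "tires", "buckle"))  # True with these toy vectors
```

Note the asymmetry: with the same vectors, swapping the concepts flips the answer, since "buckle" is not closer to "tires" than to "seat_belt".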