Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.3115/v1/d14-1163

Jointly Learning Word Representations and Composition Functions Using Predicate-Argument Structures

Abstract: We introduce a novel compositional language model that works on Predicate-Argument Structures (PASs). Our model jointly learns word representations and their composition functions using bag-of-words and dependency-based contexts. Unlike previous word-sequence-based models, our PAS-based model composes arguments into predicates by using the category information from the PAS. This enables our model to capture long-range dependencies between words and to better handle constructs such as verb-object and subject-verb-object …
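The abstract's core idea, composing an argument into a predicate with category-specific parameters, can be sketched as follows. This is a loose illustration, not the authors' exact model: the category names, the tanh nonlinearity, the 50-dimensional vectors, and the toy vocabulary are all assumptions made for the example.

```python
# Illustrative sketch of category-aware predicate-argument composition
# (NOT the paper's exact formulation): each PAS category gets its own
# pair of composition matrices, so subject and object arguments are
# folded into the predicate in different ways.
import numpy as np

DIM = 50
rng = np.random.default_rng(0)

# Hypothetical toy vocabulary of word embeddings.
embeddings = {w: rng.normal(scale=0.1, size=DIM)
              for w in ["dog", "chase", "cat"]}

# One (predicate-matrix, argument-matrix) pair per assumed PAS category.
categories = ["verb_arg1", "verb_arg2"]
W_pred = {c: rng.normal(scale=0.1, size=(DIM, DIM)) for c in categories}
W_arg = {c: rng.normal(scale=0.1, size=(DIM, DIM)) for c in categories}

def compose(pred_vec, arg_vec, category):
    """Fold one argument into a predicate using category-specific weights."""
    return np.tanh(W_pred[category] @ pred_vec + W_arg[category] @ arg_vec)

# Subject-verb-object: fold in the object, then the subject, so
# "dog chases cat" and "cat chases dog" get different vectors.
vo = compose(embeddings["chase"], embeddings["cat"], "verb_arg2")
svo = compose(vo, embeddings["dog"], "verb_arg1")
print(svo[:5])
```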

Cited by 32 publications (53 citation statements). References 15 publications (23 reference statements).
“…Hashimoto et al. (2014) learned word embeddings using predicate-argument structure contexts and used them to measure semantic similarity between short phrases. In their method, the syntactic information is introduced by constructing syntactic contexts instead of the normal linear contexts (i.e.…”
Section: Methods
Mentioning confidence: 99%
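The distinction the quoted passage draws, syntactic contexts versus normal linear contexts, can be made concrete with a small sketch. The toy sentence, hand-written parse, and context notation below are illustrative assumptions, not taken from either paper:

```python
# Contrast plain window (linear) contexts with dependency-based
# (syntactic) contexts. The parse is hand-written here; in practice a
# parser producing dependency or predicate-argument structures supplies it.

sentence = ["the", "scientist", "discovered", "a", "new", "particle"]

# (dependent_index, head_index, relation) triples for the toy parse.
parse = [(0, 1, "det"), (1, 2, "nsubj"), (3, 5, "det"),
         (4, 5, "amod"), (5, 2, "dobj")]

def linear_contexts(tokens, i, window=2):
    """Bag-of-words contexts: neighbors within a fixed window."""
    lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
    return [tokens[j] for j in range(lo, hi) if j != i]

def syntactic_contexts(tokens, i, parse):
    """Dependency contexts: relation-typed heads and dependents."""
    ctx = []
    for dep, head, rel in parse:
        if dep == i:
            ctx.append(f"{tokens[head]}/{rel}")      # this word's head
        if head == i:
            ctx.append(f"{tokens[dep]}/{rel}^-1")    # this word's dependent
    return ctx

i = sentence.index("discovered")
print(linear_contexts(sentence, i))            # ['the', 'scientist', 'a', 'new']
# "particle" is outside the 2-word window but is a direct syntactic
# context, illustrating how such contexts capture long-range dependencies.
print(syntactic_contexts(sentence, i, parse))  # ['scientist/nsubj^-1', 'particle/dobj^-1']
```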
“…In contrast to previous syntactic-context word embeddings (Hashimoto et al., 2014; Levy and Goldberg, 2014), our embedding is learned only from the concise syntax word sequence, which represents a sentence’s syntactic structure while discarding the less important words. It is simple but proven effective for DDI extraction by our experiments.…”
Section: Methods
Mentioning confidence: 99%
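One plausible reading of a "concise syntax word sequence" (an assumption on my part; the quote does not spell it out) is the word sequence along the shortest dependency path between two entity mentions. A minimal sketch under that assumption, with a made-up sentence and parse:

```python
# Keep only the words on the shortest dependency path between two
# entities, dropping the rest. Sentence and edges are hypothetical.
from collections import deque

tokens = ["aspirin", "given", "with", "warfarin", "increases", "bleeding"]
# Toy undirected dependency edges between token indices.
edges = {(0, 1), (1, 4), (2, 3), (3, 1), (4, 5)}

def shortest_path_words(src, dst):
    """BFS over dependency edges; return the words along the path."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, frontier = {src: None}, deque([src])
    while frontier:
        node = frontier.popleft()
        if node == dst:
            path = []
            while node is not None:       # backtrack from dst to src
                path.append(tokens[node])
                node = prev[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in prev:
                prev[nxt] = node
                frontier.append(nxt)
    return []

# Words between the two drug mentions; "with" is dropped as unimportant.
print(shortest_path_words(0, 3))  # ['aspirin', 'given', 'warfarin']
```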
“…Hence, the word cause has two embeddings: one in N and another in W. In general, cause is used as both a noun and a verb, and thus we expect the noun embeddings to capture the meanings focusing on their noun usage. This is inspired by some recent work on word representations that explicitly assigns an independent representation to each word usage according to its part-of-speech tag (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Hashimoto et al., 2013; Hashimoto et al., 2014; Kartsaklis and Sadrzadeh, 2013).…”
Section: Learning Word Embeddings
Mentioning confidence: 99%
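The usage-specific representation described here amounts to keying the embedding table on (word, POS) rather than on the word alone. A minimal sketch; the dimensionality and tag names are assumptions, not from the paper:

```python
# One embedding per word *usage*: the same surface form "cause" gets
# separate, independent vectors for its noun and verb uses.
import numpy as np

DIM = 50
rng = np.random.default_rng(1)
table = {}  # (word, pos) -> vector

def embed(word, pos):
    """Look up (creating on first use) the embedding for one word usage."""
    key = (word, pos)
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=DIM)
    return table[key]

v_noun = embed("cause", "NOUN")     # "the cause of the delay"
v_verb = embed("cause", "VERB")     # "storms cause delays"
print(np.allclose(v_noun, v_verb))  # False: two independent vectors
```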
“…
Milajevs et al. (2014): 0.46
Polajnar et al. (2015): 0.35
Hashimoto et al. (2014): 0.48
Hashimoto and Tsuruoka (2015): 0.48
Human agreement: 0.75
Table 4: Spearman correlation for transitive expressions using the benchmark by Grefenstette and Sadrzadeh (2011).
Table 4 shows the Spearman correlation values (ρ) obtained by all the different versions built from our model WN. The best score was achieved by averaging the head and dependent similarity values derived from the n-vn (right-to-left) strategy.…”
Section: Noun-Verb-Noun Composition
Mentioning confidence: 99%
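For context, scores like those in Table 4 are typically computed as Spearman's ρ between model similarities for composed phrases and human ratings. A minimal sketch with made-up numbers; the real evaluation uses the phrase pairs of the Grefenstette and Sadrzadeh (2011) benchmark:

```python
# Rank correlation between model phrase similarities and human ratings.
from scipy.stats import spearmanr

# (model cosine similarity, human rating) per transitive phrase pair;
# the values below are invented purely for illustration.
model_sims   = [0.81, 0.12, 0.45, 0.67, 0.30]
human_scores = [6.2,  1.5,  5.1,  3.9,  2.8]

rho, p_value = spearmanr(model_sims, human_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # rho = 0.90 here
```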