Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3357866

Discovering Hypernymy in Text-Rich Heterogeneous Information Network by Exploiting Context Granularity

Abstract: Text-rich heterogeneous information networks (text-rich HINs) are ubiquitous in real-world applications. Hypernymy, also known as the is-a or subclass-of relation, lies at the core of many knowledge graphs and benefits many downstream applications. Existing methods of hypernymy discovery either leverage textual patterns to extract explicitly mentioned hypernym-hyponym pairs, or learn a distributional representation for each term of interest based on its context. These approaches rely on statistical signals f…

Cited by 18 publications (8 citation statements)
References 56 publications (94 reference statements)
“…Note that one can interpret DIH in various ways depending on how "contexts" are defined. For example, if "contexts" are defined as documents [45], then DIH states that if a word (e.g., "Crawling") appears in a document, then its parent (e.g., "World Wide Web") is also expected to appear in that document. In contrast, if "contexts" are defined by the local context window (i.e., the preceding and following words in a sequence) [46], then DIH becomes: if a context word 𝑐 occurs 𝑛 times in the context window of a child 𝑤₁, then it is expected to occur no less than 𝑛 times in the context window of its parent 𝑤₂.…”
Section: Hypernymy Regularization
confidence: 99%
“…In contrast, if "contexts" are defined by the local context window (i.e., the preceding and following words in a sequence) [46], then DIH becomes: if a context word 𝑐 occurs 𝑛 times in the context window of a child 𝑤₁, then it is expected to occur no less than 𝑛 times in the context window of its parent 𝑤₂. DIH is a classic tool in constructing topic taxonomies [44,45], which motivates us to propose the following DIH-based regularization.…”
Section: Hypernymy Regularization
confidence: 99%
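The window-based reading of DIH quoted above can be made concrete. The sketch below is a minimal illustration, not code from the paper: the helper names and the toy token list are invented, and a plain Python list stands in for a corpus. It checks that every context word occurring 𝑛 times around a child term occurs at least 𝑛 times around its candidate parent.

```python
from collections import Counter

def window_counts(tokens, target, k=2):
    """Count words appearing within a +/-k window around each occurrence of target."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            lo, hi = max(0, i - k), min(len(tokens), i + k + 1)
            counts.update(t for j, t in enumerate(tokens[lo:hi], lo) if j != i)
    return counts

def satisfies_window_dih(tokens, child, parent, k=2):
    """Window-based DIH: each context word occurring n times around `child`
    must occur at least n times around `parent`."""
    child_ctx = window_counts(tokens, child, k)
    parent_ctx = window_counts(tokens, parent, k)
    return all(parent_ctx[w] >= n for w, n in child_ctx.items())

corpus = "the dog barks . the animal barks . the animal sleeps".split()
satisfies_window_dih(corpus, "dog", "animal")   # -> True: "animal" covers every context of "dog"
satisfies_window_dih(corpus, "animal", "dog")   # -> False: "sleeps" never occurs near "dog"
```

The document-level reading is the same check with `window_counts` replaced by per-document occurrence counts.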
“…word embeddings [9,27,31]) of terms. For a term pair ⟨x, y⟩, their embeddings are used to learn a binary classifier that predicts whether the pair has a hypernymy relation [4,7,12,37]. As embeddings are learned directly from the corpora, distributional methods eliminate the need to design hand-crafted patterns and have shown strong performance.…”
Section: Related Work
confidence: 99%
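The pair-classification setup described above can be sketched as follows. Everything here is a toy stand-in: the 2-d "embeddings" are hand-picked, and the feature construction (concatenating the two vectors with their difference) is one common choice rather than the specific method of any cited paper.

```python
import numpy as np

def pair_features(x, y):
    # Concatenate both term embeddings with their difference vector,
    # a common feature layout for distributional hypernymy detection.
    return np.concatenate([x, y, y - x])

def train_logreg(X, labels, lr=0.5, epochs=2000):
    # Plain batch gradient descent on logistic loss.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - labels
        w -= lr * X.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# Toy data: hypernym pairs share an offset along the first embedding axis.
pos = [(np.array([0.0, 1.0]), np.array([1.0, 1.0])),
       (np.array([0.2, 0.5]), np.array([1.2, 0.5]))]
neg = [(np.array([0.0, 1.0]), np.array([0.0, 0.0])),
       (np.array([0.5, 0.2]), np.array([0.4, 1.2]))]
X = np.stack([pair_features(x, y) for x, y in pos + neg])
y = np.array([1, 1, 0, 0])
w, b = train_logreg(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)  # recovers the labels
```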
“…CRIM [32] combines pattern-based extraction with projection learning, which seeks a linear transformation mapping hyponym embeddings to hypernym embeddings. It is also worth mentioning that recognising hypernymy is a fundamental component of taxonomy induction and enrichment [18,33–35]. When evaluating our approach, we compare against all the systems that took part in the SemEval 2018 Hypernym Discovery task.…”
Section: Related Work
confidence: 99%
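The projection-learning idea mentioned in the excerpt above can be sketched with a noise-free least-squares toy. This is not CRIM's actual training procedure; it only illustrates the core notion of learning a matrix Φ that maps hyponym embeddings toward hypernym embeddings and scoring candidate pairs by how close Φx lands to y.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
# Hypothetical setup: a "true" linear map generates hypernym vectors.
true_phi = rng.normal(size=(d, d))
hypo = rng.normal(size=(50, d))   # hyponym embeddings
hyper = hypo @ true_phi.T         # corresponding hypernym embeddings

# Least-squares estimate of the projection: solve hyper ~= hypo @ phi.T.
phi_t, *_ = np.linalg.lstsq(hypo, hyper, rcond=None)
phi = phi_t.T

def score(x, y):
    # Cosine similarity between the projected hyponym and the candidate hypernym.
    px = phi @ x
    return px @ y / (np.linalg.norm(px) * np.linalg.norm(y))
```

In this noise-free toy the projection is recovered exactly, so a true pair scores a cosine similarity of 1; real embeddings are noisy, which is why CRIM combines the projection signal with pattern-based evidence.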