2020 IEEE 36th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde48307.2020.00191

Collective Entity Alignment via Adaptive Features

Abstract: Entity alignment (EA) identifies entities that refer to the same real-world object but are located in different knowledge graphs (KGs), and has been harnessed for KG construction and integration. When generating EA results, current embedding-based solutions treat entities independently and fail to take into account the interdependence between entities. In addition, most embedding-based EA methods either fuse different features at the representation level and generate a unified entity embedding for alignment, which pote…
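The truncated abstract contrasts independent, entity-by-entity matching with collective decisions. As a minimal illustration of what "collective" means here, the sketch below enforces one-to-one alignments over a similarity matrix using the deferred-acceptance (Gale-Shapley) procedure; this is a standard collective-assignment technique, not necessarily the paper's exact mechanism, and the `sim` matrix is assumed to be precomputed by any embedding-based scorer.

```python
import numpy as np

def collective_align(sim):
    """One-to-one collective assignment over a similarity matrix via
    deferred acceptance (Gale-Shapley). sim[i, j] is the similarity
    between source entity i and target entity j. Returns a dict
    mapping each matched source index to its target index."""
    n_src, n_tgt = sim.shape
    prefs = np.argsort(-sim, axis=1)   # targets in decreasing similarity
    next_choice = [0] * n_src          # next target each source proposes to
    engaged_to = {}                    # target -> source currently holding it
    free = list(range(n_src))
    while free:
        i = free.pop()
        if next_choice[i] >= n_tgt:
            continue                   # source i has exhausted all targets
        j = int(prefs[i][next_choice[i]])
        next_choice[i] += 1
        if j not in engaged_to:
            engaged_to[j] = i          # target j accepts its first proposal
        elif sim[i, j] > sim[engaged_to[j], j]:
            free.append(engaged_to[j]) # j prefers i; previous holder is freed
            engaged_to[j] = i
        else:
            free.append(i)             # j rejects i; i proposes elsewhere
    return {i: j for j, i in engaged_to.items()}

# Independently, both source entities below would choose target 0;
# the collective matching resolves the conflict to {0: 0, 1: 1}.
sim = np.array([[0.9, 0.5],
                [0.8, 0.7]])
print(collective_align(sim))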

Cited by 85 publications (68 citation statements) · References 30 publications

“…Concretely, we use: (1) Lev, which aligns entities using Levenshtein distance [37], a string metric for measuring the difference between two sequences; and (2) Embed, which aligns entities according to the cosine similarity between the name embeddings (averaged word embedding) of two entities. Following [65], we use the pre-trained fastText embeddings [2] as word embeddings, and for multilingual KG pairs, we use the MUSE word embeddings [10].…”
Section: Datasets (mentioning, confidence: 99%)
“…State-of-the-art EA solutions [9][10][11][12] assume that equivalent entities usually possess similar neighboring information.…”
Section: Example (mentioning, confidence: 99%)
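The assumption quoted above can be made operational: one simple signal is the overlap between the neighborhoods of a candidate pair, with neighbors of the first KG mapped through known seed alignments. A hedged sketch; the function and its arguments are illustrative, not drawn from the cited solutions.

```python
def neighbor_similarity(ent1, ent2, neighbors1, neighbors2, seed_alignment):
    """Jaccard overlap of two entities' neighbor sets, where neighbors
    from the first KG are mapped into the second via seed alignments."""
    mapped = {seed_alignment[n] for n in neighbors1[ent1] if n in seed_alignment}
    other = neighbors2[ent2]
    union = mapped | other
    return len(mapped & other) / len(union) if union else 0.0
```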
“…(2) Word-level: These methods average the pre-trained entity name vectors to construct the initial features: GM-Align, RDGCN (Wu et al., 2019a), HGCN (Wu et al., 2019b), DAT (Zeng et al., 2020b), DGMC (Fey et al., 2020). (3) Char-level: These EA methods further adopt the char-level textual features: AttrGNN, CEA (Zeng et al., 2020a), EPEA. For our proposed method, SEU(word) and SEU(char) represent the model only using the word and char features as the inputs, respectively.…”
Section: Baselines (mentioning, confidence: 99%)
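Word-level features correspond to the averaged name-token embeddings sketched earlier (`name_embedding`); a char-level feature can be approximated with hashed character n-grams, as below. This is an illustrative stand-in, not the exact featurization used by AttrGNN, CEA, or EPEA.

```python
import numpy as np

def char_level_feature(name, n=2, vocab_size=512):
    """Hashed bag-of-character-bigrams, L2-normalized; unlike word-level
    averaging, it degrades gracefully on out-of-vocabulary names."""
    feat = np.zeros(vocab_size)
    s = name.lower()
    for i in range(len(s) - n + 1):
        feat[hash(s[i:i + n]) % vocab_size] += 1.0
    norm = np.linalg.norm(feat)
    return feat / norm if norm else feat
```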