Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1616

Learning Representation Mapping for Relation Detection in Knowledge Base Question Answering

Abstract: Relation detection is a core step in many natural language processing applications, including knowledge base question answering. Previous efforts show that single-fact questions can be answered with high accuracy. However, one critical problem is that current approaches achieve high accuracy only for questions whose relations have been seen in the training data; for unseen relations, performance drops rapidly. The main reason for this problem is that the representations for unseen relations are missing. …
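To make the abstract's core idea concrete: relations seen in training have learned embeddings, while unseen relations do not, so one can fit a mapping from a relation's word-level representation (which exists for any relation name) to its learned embedding, then apply that mapping to unseen relations. The sketch below is a minimal illustration assuming a least-squares linear mapping over averaged word embeddings; the dimensions and variable names are hypothetical and not the paper's actual implementation.

```python
# Minimal sketch of the representation-mapping idea: fit a linear map
# from word-level relation representations to learned relation
# embeddings, then use it to produce embeddings for unseen relations.
# All sizes and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_seen, word_dim, rel_dim = 1000, 300, 256
W_words = rng.normal(size=(n_seen, word_dim))  # averaged word embeddings of seen relation names
R_seen = rng.normal(size=(n_seen, rel_dim))    # relation embeddings learned during training

# Fit M minimizing ||W_words @ M - R_seen||^2 (ordinary least squares).
M, *_ = np.linalg.lstsq(W_words, R_seen, rcond=None)

# An unseen relation has no learned embedding, but its name still has
# word embeddings; the fitted mapping predicts a usable representation.
unseen_words = rng.normal(size=(word_dim,))
predicted_rel_embedding = unseen_words @ M
print(predicted_rel_embedding.shape)  # (256,)
```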

Cited by 22 publications (13 citation statements)
References 31 publications
“…where the proposed adapter G_l(·) is a simple MLP. Mikolov, Le, and Sutskever (2013) and Wu et al. (2019) pointed out that the representation spaces of similar languages can be aligned by a linear mapping. In our scenario, which involves a single language, the mapping function can effectively transfer the general representation to a task-specific representation.…”
Section: Dynamic Fusion Mechanism
confidence: 99%
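As a concrete illustration of the adapter G_l(·) mentioned in the citation statement above, here is a minimal sketch of what a "simple MLP" mapping a general representation to a task-specific one might look like. This is a sketch under assumptions: the hidden size, activation, and PyTorch framing are illustrative choices, not the citing paper's actual architecture.

```python
# Hypothetical sketch of a "simple MLP" adapter that maps a general
# (shared-encoder) representation into a task-specific space.
# Layer sizes and the two-layer design are assumptions for illustration.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Transfer the general representation h to a task-specific space.
        return self.net(h)

general = torch.randn(4, 768)     # e.g., hidden states from a shared encoder
adapter = Adapter(dim=768)
task_specific = adapter(general)  # same shape, adapted representation
print(task_specific.shape)        # torch.Size([4, 768])
```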
“…This setting is an instance of domain adaptation, where a model is trained on data S, drawn according to a source distribution, and tested on data T coming from a different target distribution. Domain adaptation over KG domains is more challenging than domain adaptation over single KG relations [Yu et al., 2017; Wu et al., 2019], because it is less likely for relations with similar lexicalizations to appear in the training set.…”
Section: Problem Statement
confidence: 99%
“…In question answering, when the KB evolves, questions related to the new knowledge should be readily answerable, as illustrated in Figure 1. However, it is difficult for a typical KBQA model to answer these questions because it has no ability to detect relations that are not available in training (Wu et al. 2019). To answer these questions, we could certainly re-train a new robust KBQA model over the entire data once the additional KB knowledge arrives.…”
Section: Introduction
confidence: 99%