2020
DOI: 10.1109/taslp.2020.2980152

Prior Knowledge Driven Label Embedding for Slot Filling in Natural Language Understanding

Abstract: Traditional slot filling in natural language understanding (NLU) predicts a one-hot vector for each word. This form of label representation lacks semantic correlation modelling, which leads to a severe data sparsity problem, especially when adapting an NLU model to a new domain. To address this issue, a novel label embedding based slot filling framework is proposed in this paper. Here, distributed label embedding is constructed for each slot using prior knowledge. Three encoding methods are investigated to incor…
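A rough illustration of the contrast the abstract draws: the sketch below compares one-hot slot labels with distributed label embeddings. This is a minimal numpy sketch; the slot inventory is a toy ATIS-style example, and the embeddings are random placeholders standing in for the vectors the paper derives from prior knowledge.

```python
import numpy as np

# Toy ATIS-style slot inventory (hypothetical, for illustration only).
slots = ["O", "B-fromloc.city_name", "B-toloc.city_name", "B-depart_date.day_name"]

# Traditional scheme: every slot label is an independent one-hot vector,
# so fromloc.city_name is as dissimilar to toloc.city_name as it is to
# depart_date.day_name (cosine similarity 0 in both cases).
one_hot = np.eye(len(slots))

# Label-embedding scheme (sketch): each slot gets a dense vector; the paper
# builds these from atomic concepts, slot descriptions, and slot exemplars,
# but here they are random placeholders.
rng = np.random.default_rng(0)
label_emb = rng.normal(size=(len(slots), 16))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(one_hot[1], one_hot[2]))      # exactly 0.0 for distinct one-hots
print(cosine(label_emb[1], label_emb[2]))  # dense vectors can encode relatedness
```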


Cited by 9 publications (7 citation statements)
References 50 publications (84 reference statements)
“…[Zhu et al. 2020] noticed that the data sparsity problem can also occur with labels, because labels are usually encoded as one-hot vectors, so they also proposed a label embedding constructed using prior knowledge, including atomic concepts, slot descriptions, and slot exemplars. The atomic-concept view assumes that each slot can be represented as a set of atoms.…”
Section: Low Resource Data Sets (mentioning)
confidence: 99%
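A minimal sketch of the atomic-concept idea described in the statement above: each slot decomposes into a set of shared atoms, and a slot embedding is built by aggregating atom embeddings (averaging is just one assumed encoding; the paper investigates three encoding methods). The atom vocabulary and decompositions here are illustrative, not the paper's exact sets.

```python
import numpy as np

# Hypothetical atom vocabulary shared across slots.
atoms = ["from", "to", "location", "city", "name", "date", "day"]
atom_index = {a: i for i, a in enumerate(atoms)}

# Assumed decomposition of each slot into atoms (illustrative only).
slot_atoms = {
    "fromloc.city_name":    ["from", "location", "city", "name"],
    "toloc.city_name":      ["to", "location", "city", "name"],
    "depart_date.day_name": ["date", "day", "name"],
}

rng = np.random.default_rng(0)
atom_emb = rng.normal(size=(len(atoms), 8))

def slot_embedding(slot):
    # One simple encoding: average the embeddings of the slot's atoms.
    idx = [atom_index[a] for a in slot_atoms[slot]]
    return atom_emb[idx].mean(axis=0)

# Slots sharing atoms ("location", "city", "name") get correlated embeddings,
# the semantic correlation that independent one-hot labels cannot express.
e_from = slot_embedding("fromloc.city_name")
e_to = slot_embedding("toloc.city_name")
print(e_from @ e_to / (np.linalg.norm(e_from) * np.linalg.norm(e_to)))
```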
“…[259] utilizes an attention mechanism to build a graph generation model based on visual features and knowledge graphs [191], etc. Zhu et al. [280] provide relation propagation features for FSL. [224] regresses the visual features under the knowledge graph representation before performing ZSL classification.…”
Section: Leveraging Structured Data for LSL (mentioning)
confidence: 99%
“…In these studies, some utilize an ontology directly as the model output [117], and some use the similarity [31,85] or matching degree [129] between the standard feature and the knowledge feature as the learning target. [280] leverages structured knowledge to customize slot label embeddings for natural language understanding (NLU) tasks. [228] takes word embeddings and a knowledge graph as inputs and outputs a visual classifier for each category.…”
Section: Leveraging Structured Data for LSL (mentioning)
confidence: 99%
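A sketch of the "matching degree as learning target" idea in the slot-filling setting: token features are scored against label embeddings instead of being fed to a one-hot output layer. The shapes, features, and embeddings below are random placeholders, not the cited models' actual components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: 5 tokens encoded into 16-dim features (e.g. by a BiLSTM
# encoder) and 4 slot labels embedded in the same 16-dim space.
token_feats = rng.normal(size=(5, 16))
label_emb = rng.normal(size=(4, 16))

# Score each token against every label embedding; the matching degree
# plays the role of the classification logit.
logits = token_feats @ label_emb.T                         # shape (5, 4)
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
pred = probs.argmax(-1)                                    # predicted slot id per token
print(pred)
```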
“…Traditional supervised learning methods have made great achievements with a large amount of labeled data (Mesnil et al., 2015; Hakkani-Tür et al., 2016; Kurata et al., 2016; Liu and Lane, 2016; Goo et al., 2018; E et al., 2019). However, since there is little or even no labeled data for a new task, cross-domain slot filling, which uses labeled data from source tasks to train a model for the target task, is gaining increasing attention (Yazdani and Henderson, 2015; Bapna et al., 2017; Zhu and Yu, 2018; Lee and Jha, 2019; Shah et al., 2019; Liu et al., 2020; Zhu et al., 2020). There are mainly two streams of methods in previous work.…”
Section: Related Work (mentioning)
confidence: 99%