2023
DOI: 10.3390/math11234780

REKP: Refined External Knowledge into Prompt-Tuning for Few-Shot Text Classification

Yuzhuo Dang,
Weijie Chen,
Xin Zhang
et al.

Abstract: Text classification is a machine learning technique employed to assign a given text to predefined categories, facilitating the automatic analysis and processing of textual data. However, an important problem is that the number of new text categories grows faster than the amount of human-annotated data, leaving many new categories with little labeled text. As a result, conventional deep neural networks are prone to over-fitting, which hurts their application in the real world. As a solution…

Cited by 2 publications (3 citation statements)
References: 38 publications
“…Then, the one-class support vector machine-based unlabeled data leveraging (OCSVM-UDL) module establishes an OCSVM model in the trained text vector space and selects reasonable pseudo-label data for each category from a large amount of unlabeled data. Dang et al. [22] propose to enhance the verbalizer and construct the Refined External Knowledge into Prompt-tuning (REKP) model. They employ external knowledge bases to increase the mapping space of tagged terms and design three refinement methods to remove noise data.…”
Section: Table QA on General Domains (mentioning)
confidence: 99%
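The excerpt above describes REKP only at a high level: external knowledge bases expand the verbalizer's label-word space, and refinement steps prune noisy candidates. The following minimal Python sketch illustrates that general idea; the in-memory knowledge base, toy embeddings, and single similarity-threshold filter are assumptions for illustration and are not the three refinement methods the paper actually uses.

```python
# Minimal sketch of a knowledge-expanded verbalizer with noise refinement.
# The tiny "knowledge base", toy embeddings, and threshold are illustrative
# placeholders, not the refinement methods used in REKP.
from typing import Dict, List
import math

# Seed label words for each class (the original verbalizer).
SEED_WORDS: Dict[str, List[str]] = {
    "sports": ["football"],
    "science": ["physics"],
}

# Stand-in for an external knowledge base returning related terms.
KNOWLEDGE_BASE: Dict[str, List[str]] = {
    "football": ["soccer", "league", "goal", "banana"],
    "physics": ["quantum", "energy", "particle", "poetry"],
}

# Toy word embeddings; in practice these would come from the PLM.
EMBEDDINGS: Dict[str, List[float]] = {
    "football": [1.0, 0.1], "soccer": [0.9, 0.2], "league": [0.8, 0.3],
    "goal": [0.7, 0.4], "banana": [0.1, 1.0],
    "physics": [0.1, 1.0], "quantum": [0.2, 0.9], "energy": [0.3, 0.8],
    "particle": [0.2, 0.95], "poetry": [1.0, 0.1],
}

def cosine(u: List[float], v: List[float]) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def expand_and_refine(seeds: Dict[str, List[str]],
                      threshold: float = 0.7) -> Dict[str, List[str]]:
    """Expand each class's label words via the knowledge base, then drop
    candidates whose embedding is dissimilar to the seed word (one possible
    noise-removal step)."""
    verbalizer: Dict[str, List[str]] = {}
    for label, words in seeds.items():
        candidates = list(words)
        for w in words:
            candidates += KNOWLEDGE_BASE.get(w, [])
        anchor = EMBEDDINGS[words[0]]
        verbalizer[label] = [
            c for c in candidates
            if c in EMBEDDINGS and cosine(EMBEDDINGS[c], anchor) >= threshold
        ]
    return verbalizer

if __name__ == "__main__":
    print(expand_and_refine(SEED_WORDS))
```

Running this keeps knowledge-base terms that stay close to the seed word in embedding space (e.g. "soccer", "league", "goal" for sports) and drops unrelated ones ("banana", "poetry"), mirroring the noise-removal intent described in the excerpt.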
“…Recently, with the availability of large-scale CLS datasets (Zhu et al., 2019; Ladhak et al., 2020; Perez-Beltrachini and Lapata, 2021; Wang et al., 2022a; Zheng et al., 2022), many researchers shift the research attention to end-to-end CLS models. According to a comprehensive CLS review (Wang et al., 2022b), the end-to-end models involve multi-task learning (Cao et al., 2020; Bai et al., 2021b; Liang et al., 2022b), knowledge distillation (Nguyen and Luu, 2022), resource-enhanced (Jiang et al., 2022) and pre-training (Chi et al., 2021) frameworks. However, none of them explore LLMs' performance on CLS.…”
Section: Cross-lingual Summarization (mentioning)
confidence: 99%
“…Previous works have explored incorporating control signals during pre-training (Keskar et al., 2019), task-specific fine-tuning, and prompt tuning (Zhang et al., 2022b). Meanwhile, the advancements in LLMs have unveiled new paradigms.…”
Section: Introduction (mentioning)
confidence: 99%