2023
DOI: 10.1109/access.2023.3260771
Active Learning Based on Transfer Learning Techniques for Text Classification

Abstract: Text preprocessing is a common task in machine learning applications that involves hand-labeling data sets. Although automatic and semi-automatic annotation of text data is a growing field, researchers need to develop models that use resources as efficiently as possible for a learning task. The goal of this work was to learn faster with fewer resources. In this paper, the combination of active and transfer learning was examined with the purpose of developing an effective text categorization method. These two forms o…

Cited by 12 publications (5 citation statements)
References 30 publications (34 reference statements)
“…To address the problem of scarce training data, Gikunda et al. [21] proposed a model-parameter transfer framework that achieves near-optimal training of classification models from only a small set of samples through a heuristic combination of transfer learning and deep learning methods. Onita [22] investigates the combination of active and transfer learning for text categorization, using three training-point selection criteria to complete the learning task as efficiently as possible with fewer resources. Garcia et al. [23] proposed a multi-source transfer learning method to address the high time cost of model training: a correlation convolution kernel matches the target model to a source model with a similar number of parameters, and pretraining the target model for parameter transfer reduces the cost of training the source model.…”
Section: A. Transfer Learning (mentioning)
confidence: 99%
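The "training point selection criteria" mentioned above are the heart of pool-based active learning: the model scores the unlabeled pool and asks a human to label only the examples it is least sure about. The specific criteria used in the cited work are not detailed here; the following is a minimal sketch of one common criterion, least-confidence sampling, with hypothetical function names and toy probabilities.

```python
# Hypothetical sketch of pool-based active learning with least-confidence
# sampling, one common training-point selection criterion. The criteria in
# Onita [22] may differ; this only illustrates the general mechanism.

def least_confidence_scores(probs):
    """Score each unlabeled example by 1 - max class probability."""
    return [1.0 - max(p) for p in probs]

def select_queries(probs, k):
    """Return indices of the k most uncertain examples to label next."""
    scores = least_confidence_scores(probs)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]

# Example: model probabilities over 3 classes for 4 unlabeled texts.
pool_probs = [
    [0.90, 0.05, 0.05],  # confident -> low labeling priority
    [0.40, 0.35, 0.25],  # uncertain -> high priority
    [0.55, 0.40, 0.05],
    [0.34, 0.33, 0.33],  # most uncertain -> labeled first
]
print(select_queries(pool_probs, 2))  # -> [3, 1]
```

In a full loop, the newly labeled examples are added to the training set, the model is retrained (or fine-tuned from a transferred checkpoint), and the pool is re-scored, so labeling effort goes where it helps most.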
“…CNNs are well suited for multi-task learning, in which a single model is trained to carry out several related applications concurrently [162]- [177]. The need for large amounts of labeled data for each individual task is being reduced as researchers investigate ways to take advantage of shared representations across applications and enhance generalization by transferring knowledge learned from one task to another [147]- [161].…”
Section: E. Multi-task Learning and Transfer Learning (mentioning)
confidence: 99%
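The shared-representation idea in the statement above can be illustrated in miniature: one encoder feeds several task heads, so improving the shared features with data from any task benefits the others. This is a toy sketch with made-up feature and head functions, not the architecture of any cited paper.

```python
# Toy illustration of multi-task learning via shared representations:
# one encoder's output feeds multiple task heads, so each head needs
# less task-specific labeled data. All names here are hypothetical.

def shared_encoder(text):
    # Stand-in for a pretrained CNN/transformer encoder; here just a
    # crude 3-dimensional feature vector over the raw string.
    return [
        sum(c.isalpha() for c in text),  # letter count
        sum(c.isdigit() for c in text),  # digit count
        len(text.split()),               # word count
    ]

def sentiment_head(features):
    # Task A: toy linear head on the shared features.
    weights = [0.1, -0.2, 0.5]
    return sum(w * f for w, f in zip(weights, features))

def topic_head(features):
    # Task B: a different linear head on the *same* features.
    weights = [0.3, 0.1, -0.4]
    return sum(w * f for w, f in zip(weights, features))

feats = shared_encoder("transfer learning reduces labeling cost")
print(sentiment_head(feats), topic_head(feats))
```

In practice the encoder would be a pretrained network whose weights are fine-tuned (or frozen) while the lightweight heads are trained per task, which is exactly the labeled-data saving the cited statement describes.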
“…However, manually labeling this content requires considerable human effort and time. This is where contemporary technology becomes relevant, driving the development of tools and programs that use artificial intelligence to automate the process [2], [3], [4], [5], [6].…”
Section: Introduction (unclassified)