2023
DOI: 10.1016/j.neunet.2023.02.045
A meta-framework for multi-label active learning based on deep reinforcement learning

Cited by 7 publications (6 citation statements)
References 81 publications (78 reference statements)
“…2. With margin sampling [33], unlabeled samples are sorted in ascending order based on the margin between the model's top two predicted class probabilities. For each sample 𝑥ᵢ, the class probability 𝑝(𝑦ᵢ = 𝑗 | 𝑥ᵢ; 𝑊) is computed for every class 𝑐ᵢ, and the unannotated samples are ranked in ascending order of these margin values. During this process, the labels 𝑦ᵢ for unlabeled samples are updated iteratively using 𝑊.…”
Section: Choosing Samples With Uncertainty
confidence: 99%
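The margin-sampling criterion described in the excerpt above can be sketched as follows. This is an illustrative implementation only, assuming class probabilities (e.g., softmax outputs of a CNN with parameters 𝑊) are already computed; the function and variable names are hypothetical, not taken from the cited papers:

```python
import numpy as np

def margin_sampling(probs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k unlabeled samples with the smallest
    margin between the model's top two predicted class probabilities.

    probs: (n_samples, n_classes) array of p(y = j | x; W).
    """
    # Sort each row's class probabilities; the last two columns hold
    # the top-2 probabilities for that sample.
    sorted_probs = np.sort(probs, axis=1)
    margins = sorted_probs[:, -1] - sorted_probs[:, -2]
    # Ascending order of margin: smaller margin = more uncertain sample.
    return np.argsort(margins)[:k]

# Example: three unlabeled samples, three classes.
probs = np.array([[0.50, 0.30, 0.20],
                  [0.40, 0.35, 0.25],
                  [0.90, 0.05, 0.05]])
print(margin_sampling(probs, 2))  # sample 1 (margin 0.05), then sample 0 (margin 0.20)
```

Samples selected this way would then be sent for annotation, after which the model parameters 𝑊 are updated and the procedure repeats, matching the iterative loop the quoted statement describes.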
“…2. With margin sampling [33], unlabeled samples are sorted based on the margin between the model's top two predicted class probabilities. For each sample 𝑥ᵢ, the class probability 𝑝(𝑦ᵢ = 𝑗 | 𝑥ᵢ; 𝑊) is computed for every class 𝑐ᵢ; the margin values are calculated from these probabilities. 𝑊 represents the CNN parameters.…”
Section: Choosing Samples With Uncertainty
confidence: 99%
“…In real-world applications, data with a unique and correct label are often too costly to obtain (Zhou 2018; Li, Guo, and Zhou 2019; Wang, Yang, and Li 2020; Liu et al. 2023; Chen et al. 2023). Instead, users with varying knowledge and cultural backgrounds tend to annotate the same image with different labels.…”
Section: Introduction
confidence: 99%
“…Gao K et al. introduced M-L to improve the classification of spectral images, particularly in scenarios with small sample sets [8]. Chen L et al. analyzed M-L's over-parameterization when learning from limited supervised data, proposing a two-layer structure to address the issue of overfitting [9]. Xu Z's team introduced a general M-L method to adapt to new tasks, simplifying the inequality between various tasks [10].…”
Section: Introduction
confidence: 99%