2020 IEEE International Conference on Big Data and Smart Computing (BigComp)
DOI: 10.1109/bigcomp48618.2020.00012
Analyzing Deep Neural Networks with Noisy Labels

Cited by 3 publications (2 citation statements)
References 6 publications
“…The diagnosis of MDD depends on the subjective evaluation of nine different symptoms and as little as one symptom may overlap between two patients [32]; comorbidity is common, and symptoms may overlap with other disorders [33], leading to low interrater reliability of the diagnosis [34]. Such uncertainty associated with the diagnosis can obscure the relationship between a patient's data and the category it belongs to [35][36][37][38], and thereby decrease accuracy [39]. Data-driven definition of the disorder and the use of biotypes could help arrive at more homogeneous psychiatric groups.…”
Section: Discussion
confidence: 99%
“…A large number of algorithms have been developed for learning with noisy labels. Small loss selection recently achieved great success on noise-robust deep learning following the widely used criterion: DNNs tend to learn simple patterns first, then gradually memorize all samples [11,12,13]. These methods treat samples with small training loss as clean ones.…”
Section: Introduction
confidence: 99%
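The small-loss criterion described in the statement above can be sketched in a few lines: given per-sample training losses and an estimated noise rate, keep the fraction of samples with the smallest loss as the presumed-clean set. This is a minimal illustration, not the cited papers' exact method; the function name, the use of a fixed noise rate, and the example losses are all assumptions for the sketch.

```python
def small_loss_select(losses, noise_rate):
    """Return indices of samples treated as clean under the small-loss
    criterion: the (1 - noise_rate) fraction with the smallest loss.

    losses     -- per-sample training losses from the DNN (list of floats)
    noise_rate -- assumed fraction of noisy labels in the dataset
    """
    n_keep = int((1.0 - noise_rate) * len(losses))
    # Sort sample indices by ascending loss; small loss first.
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return order[:n_keep]

# Example: with an assumed 40% noise rate on 5 samples, the 3 samples
# with the smallest loss are kept as "clean".
clean_idx = small_loss_select([0.1, 2.3, 0.05, 1.8, 0.2], noise_rate=0.4)
```

In practice the selection is typically re-done each epoch, since DNNs fit simple (clean) patterns early in training and only memorize noisy samples later.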