2022
DOI: 10.1016/j.jbi.2022.103996

Evaluating pointwise reliability of machine learning prediction

Cited by 33 publications (10 citation statements)
References: 67 publications
“…Most classification studies using dFNC and sFNC have given insight into model performance for individual classes [9][10] but not subtypes. Discerning from the data of a patient whether a CDSS is likely to be accurate for them will be vital to ensuring that they receive proper care [2].…”
Section: Introduction (mentioning)
confidence: 99%
“…If neuroimaging clinical decision support systems (CDSS) are ever to be implemented in a clinical setting, they must be both robust and reliable [1]. One aspect of this reliability is that clinicians need to know whether there are systematic differences in how the model will perform for different patients [2]. Neurological and neuropsychological disorder subtyping could contribute to more reliable CDSS [3][4].…”
Section: Introduction (mentioning)
confidence: 99%
“…A confusion matrix is a technique for summarizing and describing the performance of classification algorithms on a set of test data for which the true values are known [31]. The accuracy of classification is sometimes misleading due to the difference in the number of observations in each category or the multiplicity of categories in the data set [32]. In our work, we evaluated model performance by measures of accuracy, precision, and recall in ML.…”
Section: Data Preprocessing (mentioning)
confidence: 99%
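The passage above describes evaluating a classifier with a confusion matrix together with accuracy, precision, and recall, noting that accuracy alone can mislead when classes are imbalanced. A minimal sketch of how those quantities are computed with scikit-learn follows; the label vectors are illustrative placeholders, not data from the cited study.

# Minimal sketch: confusion matrix, accuracy, precision, and recall
# computed from true vs. predicted labels. The labels below are
# illustrative placeholders, not results from the cited work.
from sklearn.metrics import confusion_matrix, accuracy_score, precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # ground-truth class labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # classifier predictions

# Rows correspond to true classes, columns to predicted classes.
cm = confusion_matrix(y_true, y_pred)
print("Confusion matrix:\n", cm)

# Accuracy alone can be misleading on imbalanced data, so report
# precision and recall alongside it, as the quoted passage notes.
print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))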
“…simulating dataset shift. These out-of-distribution samples can be exploited to test the robustness, reliability and explainability of ML classifiers [5].…”
Section: Value of the Data (mentioning)
confidence: 99%
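The statement above mentions using out-of-distribution samples to probe classifier robustness and reliability. A minimal sketch of one way to simulate a simple dataset shift is shown below, assuming a synthetic dataset and Gaussian feature noise as the perturbation; none of this is taken from the cited work.

# Minimal sketch (illustrative assumptions): simulate a crude covariate
# shift by adding Gaussian noise to held-out features, then compare
# in-distribution vs. shifted-data accuracy of a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

rng = np.random.default_rng(0)
X_shifted = X_te + rng.normal(scale=1.5, size=X_te.shape)  # perturbed test set

print("In-distribution accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("Shifted-data accuracy   :", accuracy_score(y_te, clf.predict(X_shifted)))

A large gap between the two accuracies is one coarse signal that the model's predictions may be unreliable for inputs unlike its training data.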