2022
DOI: 10.1038/s41398-022-02242-z

Identification of texture MRI brain abnormalities on first-episode psychosis and clinical high-risk subjects using explainable artificial intelligence

Abstract: Structural MRI studies in first-episode psychosis and the clinical high-risk state have consistently shown volumetric abnormalities. The aim of the present study was to introduce radiomics texture features into the identification of psychosis. Radiomics texture features describe the interrelationships between voxel intensities across multiple spatial scales, capturing hidden information about underlying disease dynamics in addition to volumetric changes. Structural MR images were acquired from 77 first-episode psychosis …
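
For readers unfamiliar with radiomics, the snippet below is a minimal sketch of how texture features (here, gray-level co-occurrence matrix statistics) can be extracted from a structural MR image using the open-source pyradiomics library. The file names and the restriction to GLCM features are illustrative assumptions, not the pipeline actually used in the paper.

# Minimal sketch of radiomics texture-feature extraction with pyradiomics.
# NOTE: file names and the choice of GLCM features are illustrative only,
# not taken from the paper's pipeline.
from radiomics import featureextractor

# Configure an extractor limited to gray-level co-occurrence matrix (GLCM)
# features, which quantify how often pairs of voxel intensities co-occur
# at a given spatial offset, i.e. texture rather than volume.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName('glcm')

# 'subject_T1.nii.gz' is a structural MR image and 'roi_mask.nii.gz' a
# binary mask of the region of interest (hypothetical file names).
features = extractor.execute('subject_T1.nii.gz', 'roi_mask.nii.gz')

for name, value in features.items():
    if name.startswith('original_glcm'):
        print(name, value)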

Citations: cited by 12 publications (5 citation statements).
References: 103 publications (129 reference statements).
“…Several diverse fields have embraced the explainable component of AI, prioritising trustworthiness over accuracy. XAI has been applied in drug discovery [99,100], industrial applications [101,102], gaming [103,104], neurological disorders [105,106,107], neuroscience [108,109] and recommender systems [110,111]. This tremendous growth has led to several XAI-based review articles in the healthcare domain in the past years.…”
Section: Introduction. Citation type: mentioning (confidence: 99%).
“…With the availability of post hoc explainable methods [31], explaining black-box DL models has become easier. Studies such as [30, 32-34] use such explainability techniques on ML models. Nevertheless, the use of explainability has been limited to either the extraction of the most influential model features/inputs or providing model-specific improvements to outputs.…”
Section: Introduction. Citation type: mentioning (confidence: 99%).
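
As background for readers new to post hoc explainability, the sketch below applies one widely used technique, SHAP value attribution, to a generic tree-ensemble model. The synthetic data, target, and model are hypothetical stand-ins, not drawn from any of the cited studies.

# Minimal sketch of post hoc explanation of a trained model with SHAP.
# NOTE: the synthetic data and random-forest model are hypothetical.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # 200 samples, 5 features
y = X[:, 0] + 2.0 * X[:, 1]     # target driven only by features 0 and 1

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles; each value
# is one feature's additive contribution to a single prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_samples, n_features)

# Rank features by mean absolute attribution; features 0 and 1 dominate.
importance = np.abs(shap_values).mean(axis=0)
print("mean |SHAP| per feature:", importance)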
“…Studies such as [43-45] use explainability techniques on ML models to obtain insights into the model outputs. Moreover, recent works have begun exploring explainability in mental health settings [24, 46-49].…”
Section: Introduction. Citation type: mentioning (confidence: 99%).