2019
DOI: 10.1016/j.jbi.2019.103248

Justifying diagnosis decisions by deep neural networks

Abstract: An integrated approach is proposed across visual and textual data to both determine and justify a medical diagnosis by a neural network. As deep learning techniques improve, interest grows to apply them in medical applications. To enable a transition to workflows in a medical context that are aided by machine learning, the need exists for such algorithms to help justify the obtained outcome so human clinicians can judge their validity. In this work, deep learning methods are used to map a frontal X-Ray image t…

Cited by 10 publications (29 citation statements)
References 31 publications
“…The proposed approach was beneficial for gaining an in-depth understanding of the sense-making process during this critical task, as well as for identifying design requirements for better sense-making support. In [59], Deep Learning (DL) techniques were exploited to generate a diagnosis as textual representation from a frontal X-Ray image. Moreover, realistic X-Ray images related to the nearest alternative diagnosis were generated.…”
Section: Applications of Medical GUIs
confidence: 99%
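The pipeline summarized above (a learned feature extractor feeding a text decoder that emits a diagnosis) can be sketched as a toy greedy decoder. Everything here is a hypothetical stand-in for the trained model in [59]: the vocabulary, the matrix names `W_enc`/`W_out`, and the sizes are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy vocabulary standing in for the report vocabulary (assumption).
VOCAB = ["no", "finding", "cardiomegaly", "effusion", "<eos>"]

# Random matrices standing in for a trained CNN encoder and text decoder.
W_enc = rng.normal(scale=0.1, size=(16, 64))          # image vector -> feature
W_out = rng.normal(scale=0.1, size=(len(VOCAB), 16))  # feature -> word scores

def diagnose(image_vec, max_len=5):
    """Greedily decode a short textual diagnosis from an image vector."""
    h = np.tanh(W_enc @ image_vec)   # CNN-style feature
    words = []
    for _ in range(max_len):
        word = VOCAB[int(np.argmax(W_out @ h))]
        if word == "<eos>":
            break
        words.append(word)
        h = np.tanh(h + 0.1)         # toy recurrent state update
    return " ".join(words)

report = diagnose(rng.normal(size=64))
```

In the real system the decoder is trained on paired images and reports; the sketch only shows the data flow from image vector to word sequence.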
“…in which O represents the fusion feature, W represents the mapping matrix that is used to map the high-dimensional fusion feature to a low-dimensional probability distribution representing the disease information, and p_i (1 ≤ i ≤ 14) represents the probability of identifying the i-th disease [48].…”
Section: Network Structure
confidence: 99%
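The quoted formula projects a fusion feature through a mapping matrix to 14 per-disease probabilities. A minimal NumPy sketch follows; the feature dimension (256) and the use of a softmax to form the distribution are assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def disease_probabilities(O, W):
    """Map a high-dimensional fusion feature O through the mapping
    matrix W to a 14-way probability distribution p_i over diseases."""
    z = W @ O              # (14,) raw scores
    z = z - z.max()        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()     # p_i, 1 <= i <= 14, summing to 1

rng = np.random.default_rng(0)
O = rng.normal(size=256)            # hypothetical fusion-feature size
W = rng.normal(size=(14, 256))      # hypothetical mapping matrix
p = disease_probabilities(O, W)
```

For multi-label chest X-ray classification a per-label sigmoid is also common; the softmax here matches the quote's "probability distribution" wording.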
“…Used by papers:
DenseNet [59]: [15, 37, 83, 84, 87, 90, 143, 147, 154]
ResNet [51]: [39, 43, 47, 60, 65, 87, 92, 136, 145, 146, 148]
VGG [116]: [7, 35, 39, 50, 66, 74, 85, 93, 95, 149, 150]
Faster R-CNN [106]: [74, 149]
Inception V3 [124]: [117]
GoogLeNet [123]: [114]
MobileNet V2 [58]: [48]
SRN [158]: [43]
U-Net [110]: [122]
EcNet (*): [155]
FCN + shallow CNN (*): [125]
RGAN (*): [46]
StackGAN [151] (slightly modified version) (*): [120]
CNN (*): [120, 126]
CNN (unspecified architecture): [140, 142]
Table 4. Summary of convolutional neural network architectures used in the literature.…”
Section: Architecture
confidence: 99%
“…RGAN, proposed by Han et al [46], is a novel architecture that follows the generative adversarial network (GAN) [40] approach, with a generative module comprising the encoder and decoder parts of an atrous convolution autoencoder (ACAE) with a spatial LSTM between them. Similarly, Spinks and Moens [120] used a slightly modified version of a StackGAN [151] to learn the mapping from report encoding to chest X-ray images, and a custom CNN to learn the inverse mapping. Both are trained together, but only the latter is part of the report generation pipeline during inference.…”
Section: Visual Component
confidence: 99%
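The paired mappings described above (a StackGAN-style generator from report encoding to X-ray image, plus a custom CNN learning the inverse mapping, trained together) can be sketched schematically. All of the following is an illustrative assumption: the dimensions, the tanh image output, and using a cycle-reconstruction error as a proxy for the joint training signal; it is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: a 64-d report encoding and a flattened 32x32 image.
ENC, IMG = 64, 32 * 32

# Toy linear stand-ins for the two learned mappings:
# G: report encoding -> image (generator), F: image -> encoding (inverse CNN).
G = rng.normal(scale=0.1, size=(IMG, ENC))
F = rng.normal(scale=0.1, size=(ENC, IMG))

def generate_image(report_enc):
    """Generator: report encoding -> synthetic X-ray in [-1, 1]."""
    return np.tanh(G @ report_enc)

def encode_image(image):
    """Inverse mapping: image -> recovered report encoding."""
    return F @ image

# Joint training would push F(G(e)) back toward e; here we only
# compute that cycle error to illustrate the coupling between the two.
e = rng.normal(size=ENC)
cycle_error = float(np.mean((encode_image(generate_image(e)) - e) ** 2))
```

At inference only the inverse mapping F participates in report generation, matching the quoted description.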