2020
DOI: 10.1101/2020.03.24.004085
Preprint
Channel Embedding for Informative Protein Identification from Highly Multiplexed Images

Abstract: Interest is growing rapidly in using deep learning to classify biomedical images, and interpreting these deep-learned models is necessary for life-critical decisions and scientific discovery. Effective interpretation techniques accelerate biomarker discovery and provide new insights into the etiology, diagnosis, and treatment of disease. Most interpretation techniques aim to discover spatially-salient regions within images, but few techniques consider imagery with multiple channels of information. For instance…

Cited by 4 publications (7 citation statements)
References 19 publications (54 reference statements)
“…Regarding the technical approach to provide transparency, the incorporation of medical experts motivated designers to incorporate prior knowledge directly into the model structure and/or inference for medical imaging (73%/64% of articles with/without the incorporation of end users do not need a second model to generate transparency). […] enabled the generation of pixel-attribution methods 54 to visualize pixel-level importance for a specific class of interest [55][56][57][58][59][60][61][62][63][64][65][66][67][68][69]. In segmentation tasks, where clinically relevant abnormalities and organs are usually small, features from different resolution levels were aggregated to compute attention and generate more accurate outcomes, as demonstrated in multiple applications, e.g., multi-class segmentation in fetal Magnetic Resonance Imaging (MRI) 58 and multiple sclerosis segmentation in MRI 61.…”
Section: Incorporation
confidence: 99%
“…64 a deletion curve was constructed by plotting the dice score vs. the percentage of pixels removed, and ref. 55 defined a recall rate when the model proposes a certain number of informative channels. Ref. 95 proposed to evaluate the consistency of visualization results and the outputs of a CNN by computing the L1 error between predicted class scores and explanation pixel-attribution maps. In summary, while the methods grouped in this theme are capable of evaluating how well a method aligns with its intended mechanism of transparency, they fall short of capturing any human factors-related aspects of transparency design.…”
Section: Reporting
confidence: 99%
“…In order to quantify how these features may be important, we use a deep learning method that quantifies the channel-wise importance for reconstructing imaging features across all channels. A similar method to the one described here uses the gradient of the model to determine the channel-wise importance for cell type classification [28]. The key difference of our proposed method is the objective of the model (reconstruction instead of classification), which requires a different architecture.…”
Section: Deep Learning Gradient-based Selection
confidence: 99%
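The gradient-based channel-importance idea mentioned in the citation above can be sketched in a few lines: compute the gradient of a target-class score with respect to the input, then aggregate its absolute values per channel to rank channels by informativeness. The sketch below is a minimal illustration, not the cited paper's implementation — the linear classifier, the sizes, and all variable names are hypothetical stand-ins, and for a linear model the gradient reduces to the weight vector itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multiplexed image: C channels of H x W pixels (hypothetical sizes).
C, H, W = 5, 8, 8
image = rng.random((C, H, W))

# Hypothetical linear classifier: class score s_k = <w_k, x> on the flattened input.
n_classes = 3
weights = rng.normal(size=(n_classes, C * H * W))

def channel_importance(image, weights, target_class):
    """Gradient-based channel importance: aggregate |d(score)/d(input)| per
    channel (mean over pixels). For a linear model the gradient of the
    target-class score w.r.t. the input is just that class's weight vector,
    so the input image does not affect the gradient here."""
    grad = weights[target_class].reshape(C, H, W)
    return np.abs(grad).mean(axis=(1, 2))  # one importance value per channel

scores = channel_importance(image, weights, target_class=0)
ranking = np.argsort(scores)[::-1]  # most informative channels first
print(ranking)
```

With a deep network the same recipe applies, except the gradient comes from backpropagation rather than a closed form; the per-channel aggregation step is unchanged.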
“…A measure related to completeness was defined in [43] and aimed to capture the proportion of training images represented by the learned visual concepts, in addition to two other metrics: the inter- and intra-class diversity, and the faithfulness of explanations computed by perturbing relevant patches and measuring the drop in classification confidence. Other articles followed a similar approach to validate relevant pixels or features identified with a transparent method; for example, in [83] a deletion curve was constructed by plotting the dice score vs. the percentage of pixels removed, and [1] defined a recall rate when the model proposes a certain number of informative channels. [111] proposed to evaluate the consistency of visualization results and the outputs of a CNN by computing the L1 error between predicted class scores and explanation heatmaps.…”
Section: Reporting
confidence: 99%
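The deletion-curve evaluation described in the citation above — plot the dice score against the fraction of top-attributed pixels removed — can be sketched as follows. This is a simplified stand-in, not the cited implementation: the saliency map, score map, and ground-truth mask are random placeholders, and "deleting" a pixel is modeled as zeroing its score. A steep drop in the curve indicates the attribution map found pixels the prediction truly depends on.

```python
import numpy as np

rng = np.random.default_rng(1)

def dice(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical saliency map and segmentation scores (random stand-ins).
H, W = 16, 16
saliency = rng.random((H, W))   # pixel-attribution map being evaluated
score_map = rng.random((H, W))  # model's segmentation scores
gt_mask = score_map > 0.5       # pretend ground truth matches the model

def deletion_curve(saliency, score_map, gt_mask, fractions):
    """Zero out the top-saliency pixels at each removal fraction and record
    the dice score of the resulting thresholded mask."""
    order = np.argsort(saliency.ravel())[::-1]  # most salient pixels first
    dices = []
    for f in fractions:
        k = int(f * order.size)
        perturbed = score_map.copy().ravel()
        perturbed[order[:k]] = 0.0              # "delete" the top-k pixels
        dices.append(dice(perturbed.reshape(H, W) > 0.5, gt_mask))
    return dices

curve = deletion_curve(saliency, score_map, gt_mask, [0.0, 0.25, 0.5, 1.0])
print(curve)  # dice is 1.0 at 0% removal and 0.0 once every pixel is removed
```

The area under this curve (lower is better for a faithful attribution map) gives a single scalar for comparing attribution methods, which is how deletion metrics are typically summarized.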
“…The complex nature of both 3D imaging in radiology and pathological images makes image analysis tasks more time-consuming than the 2D image analysis that is more prevalent in other specialties, such as dermatology, which motivates transparency as an alternative to complete human image analysis to save time while retaining trustworthiness. In detail, classification problems in 3D radiological images and pathological images included abnormality detection in computed tomography (CT) scans [3,5,61,47,89,107,111,112], MRIs [34,11,38,51,59,87,85,50,95,77,78,98,100,104], pathology images [1,24,26,27,30,34,37,40,82,84,50,74,76,108,5] and positron emission tomography (PET) images [68]. Mammography dominated the 2D radiology image applications [60,88,44,45,86,96,99,…”
Section: Task
confidence: 99%