2020
DOI: 10.1007/978-3-030-59722-1_38
Self-supervised Nuclei Segmentation in Histopathological Images Using Attention

Abstract: Segmentation and accurate localization of nuclei in histopathological images is a very challenging problem, with most existing approaches adopting a supervised strategy. These methods usually rely on manual annotations that require a lot of time and effort from medical experts. In this study, we present a self-supervised approach for segmentation of nuclei for whole slide histopathology images. Our method works on the assumption that the size and texture of nuclei can determine the magnification at which a pat…
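As a rough illustration of the pretext task the abstract describes, the sketch below trains a small classifier to predict the magnification level at which a histology patch was extracted. The network architecture, patch size, and the four magnification levels are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of a magnification-prediction pretext task (illustrative only;
# the backbone, patch size, and magnification levels are assumptions).
import torch
import torch.nn as nn

MAGNIFICATIONS = [5, 10, 20, 40]  # hypothetical WSI magnification levels


class MagnificationClassifier(nn.Module):
    """Small CNN that predicts the magnification level of a histology patch."""

    def __init__(self, num_levels: int = len(MAGNIFICATIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_levels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


# One training step on patches whose labels are the (freely available)
# magnification indices at which they were extracted from the slide.
model = MagnificationClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(8, 3, 224, 224)                 # stand-in for extracted patches
labels = torch.randint(0, len(MAGNIFICATIONS), (8,))  # magnification indices

optimizer.zero_grad()
loss = criterion(model(patches), labels)
loss.backward()
optimizer.step()
```

The idea, per the abstract, is that solving this auxiliary task forces the network to attend to nuclei size and texture, so the learned features can then be reused for nuclei localization and segmentation without manual masks.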

Cited by 40 publications (32 citation statements) · References 21 publications (31 reference statements)
“…As shown in the table, our method outperforms all other unsupervised ones by a large margin in either testing scenarios. Notably, the gap is large even for the DL-based unsupervised approach denoted by "self-supervised" [14] in the table. As compared with supervised methods, CBM stands at a competitive position.…”
Section: Results (mentioning)
confidence: 99%
“…For example, Qu et al [13] proposed a two-stage learning framework using coarse labels. Other researchers [14,15] investigated the self-supervised DL method to reduce the number of required labeled data by exploiting the observation that nuclei size and texture can determine the magnification scale. Besides, there is a domain shift problem [16] arising from stain and nuclei variations in histology images of different organs, patients and acquisition protocols.…”
Section: Introduction (mentioning)
confidence: 99%
“…Empirical evidence suggests that solving the auxiliary task (e.g., solving a jigsaw) serves as domain‐specific pre‐training by teaching the CNN to extract features that are useful for the main task (e.g., recognizing cancer) as well. Specifically, in cancer image analysis, Self‐Path (Koohbanani, Unnikrishnan, Khurram, Krishnaswamy, & Rajpoot, 2020) used domain specific self‐supervision tasks for effective learning and domain adaptation on histopathology images, while (Sahasrabudhe et al, 2020) used learning to detect patch magnification level as a pretext task for nucleus localization and segmentation.…”
Section: Advances In Deep Learning and Their Applications To Cancer Image Analysis (mentioning)
confidence: 99%
“…Classical pre-text tasks include predicting image orientation [35] or relative position prediction [36], solving jigsaw puzzles [37], image inpainting [38], colorization [39] and many others [40]- [43]. More recently, equivariance has been employed to impose semantic consistency, either at keypoints [44], class activations [45], feature representations [46] or the network outputs [47]. Nevertheless, a main limitation of these works is that equivariance is enforced across affine transformations of the same image [17], [46], [48] or between virtually generated versions [47].…”
Section: B. Self-Training (mentioning)
confidence: 99%
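For context on the "classical pre-text tasks" quoted above, a minimal rotation-prediction example is sketched below; the tiny backbone and the four 90-degree rotation bins are assumptions made purely for illustration, not the setup of any cited work.

```python
# Minimal sketch of a rotation-prediction pretext task (illustrative assumptions
# throughout: toy backbone, 64x64 patches, four 90-degree rotation classes).
import torch
import torch.nn as nn


def rotate_batch(images: torch.Tensor):
    """Rotate each image by a random multiple of 90 degrees and return the
    rotated images with the rotation index used as a free label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack(
        [torch.rot90(img, k=int(k), dims=(1, 2)) for img, k in zip(images, labels)]
    )
    return rotated, labels


backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 4),   # 4 classes: 0, 90, 180, 270 degrees
)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)    # stand-in for unlabeled patches
rotated, labels = rotate_batch(images)
loss = criterion(backbone(rotated), labels)
loss.backward()
```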
“…More recently, equivariance has been employed to impose semantic consistency, either at keypoints [44], class activations [45], feature representations [46] or the network outputs [47]. Nevertheless, a main limitation of these works is that equivariance is enforced across affine transformations of the same image [17], [46], [48] or between virtually generated versions [47]. For example, Hung et al [47] further apply an appearance perturbation (e.g., color jittering) to the input image before the affine transformation.…”
Section: B. Self-Training (mentioning)
confidence: 99%
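The equivariance constraint discussed in the last two excerpts can be written as a simple consistency loss: the network's output for an affinely transformed image should match the affinely transformed output of the original image. The sketch below shows one common way to set this up; the toy segmentation network, the fixed rotation angle, and the MSE penalty are assumptions for illustration, not the formulation of [46] or [47].

```python
# Minimal sketch of an output-equivariance consistency loss under an affine
# transform (illustrative; network and loss choice are assumptions).
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


def affine_grid_for(theta: torch.Tensor, like: torch.Tensor) -> torch.Tensor:
    """Sampling grid for the affine matrix `theta`, sized to match `like`."""
    return F.affine_grid(theta, like.size(), align_corners=False)


seg_net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(8, 2, 1))          # toy 2-class segmenter

image = torch.randn(1, 3, 64, 64)

# A small rotation expressed as a 2x3 affine matrix.
angle = 0.3
theta = torch.tensor([[[math.cos(angle), -math.sin(angle), 0.0],
                       [math.sin(angle),  math.cos(angle), 0.0]]])

# Output of the transformed image ...
out_transformed = seg_net(F.grid_sample(image, affine_grid_for(theta, image),
                                        align_corners=False))
# ... should match the transformed output of the original image.
out_original = seg_net(image)
transformed_out = F.grid_sample(out_original, affine_grid_for(theta, out_original),
                                align_corners=False)

consistency_loss = F.mse_loss(out_transformed, transformed_out)
consistency_loss.backward()
```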