2023
DOI: 10.1109/tmi.2022.3221666
Which Pixel to Annotate: A Label-Efficient Nuclei Segmentation Framework

Cited by 13 publications (4 citation statements)
References 88 publications (56 reference statements)
“…(b) Equispaced sampling is denoted as Equispaced. (c) One-shot active learning methods include K-median Kolluru et al. (2021), K-means Arthur and Vassilvitskii (2007), Farthest Point Sampling (FPS) Moenning and Dodgson (2003), Representative Annotation (RA) Zheng et al. (2019), and Consistency-based Patch Selection (CPS) Lou et al. (2022). For sub-volume selection methods, we compare CGS only with random and equispaced selection, owing to the lack of related research.…”
Section: Results
confidence: 99%
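
To make the comparison concrete, below is a minimal sketch of Farthest Point Sampling (FPS), one of the one-shot selection baselines named in the statement above. The feature matrix and budget are illustrative assumptions, not the exact setup of the cited works: each image patch is assumed to be represented by an embedding vector, and FPS greedily picks patches that are mutually far apart in feature space.

# Minimal FPS sketch over patch embeddings (assumed inputs, not the
# cited papers' exact pipeline).
import numpy as np

def farthest_point_sampling(features: np.ndarray, k: int) -> list[int]:
    """Greedily pick k rows of `features` that are mutually far apart."""
    n = features.shape[0]
    selected = [int(np.random.randint(n))]  # arbitrary seed point
    # Distance from every point to its nearest already-selected point.
    dists = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))  # farthest from the current set
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Example: choose 5 of 100 patches described by 64-d embeddings.
patches = np.random.rand(100, 64)
print(farthest_point_sampling(patches, 5))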
“…However, the handcrafted features limit the representation capabilities of nuclei entities. Recently, nuclei classification models usually infer cell types with CNNs for nucleus segmentation (Zhang et al. 2017; Basha et al. 2018; Lou et al. 2022, 2023b; Ma et al. 2023; Yu et al. 2023) or nucleus centroid detection (Abousamra et al. 2021; Huang et al. 2023b). Graham et al. (2019) propose a CNN with three branches that predicts nucleus types for the segmented nucleus instances.…”
Section: Related Work
confidence: 99%
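
As a rough illustration of the three-branch design mentioned above, here is a minimal PyTorch sketch of a segmentation head with one branch for nucleus-vs-background pixels, one for auxiliary regression maps, and one for per-pixel nucleus type. The channel sizes, branch semantics, and shared backbone are illustrative assumptions, not the published architecture of Graham et al. (2019).

# Minimal three-branch head sketch (assumed channel sizes and branch
# semantics; not the published architecture).
import torch
import torch.nn as nn

class ThreeBranchHead(nn.Module):
    def __init__(self, in_ch: int = 64, n_types: int = 5):
        super().__init__()
        self.np_branch = nn.Conv2d(in_ch, 2, 1)        # nucleus vs. background
        self.aux_branch = nn.Conv2d(in_ch, 2, 1)       # auxiliary regression maps
        self.tp_branch = nn.Conv2d(in_ch, n_types, 1)  # per-pixel nucleus type

    def forward(self, feats: torch.Tensor):
        return self.np_branch(feats), self.aux_branch(feats), self.tp_branch(feats)

feats = torch.randn(1, 64, 128, 128)  # shared decoder features
np_out, aux_out, tp_out = ThreeBranchHead()(feats)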
“…Sharpening the RGB image directly distorts it, because luminance and color are mixed across the RGB channels and a convolution cannot be applied without shifting the colors. The RGB image is therefore first converted to a YUV image, mapping the three RGB channels to one channel representing luminance and two channels representing chrominance [54]. Assume the original image is denoted $D_{rgb}$.…”
Section: System Model Design
confidence: 99%
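
The RGB-to-YUV split described in this statement can be sketched in a few lines. The BT.601 conversion matrix below is a standard choice assumed for illustration; the cited work may use a different variant, and the array `img` stands in for $D_{rgb}$.

# Minimal RGB -> YUV sketch so that sharpening can target the luminance
# channel Y alone (BT.601 matrix assumed; the cited work may differ).
import numpy as np

def rgb_to_yuv(d_rgb: np.ndarray) -> np.ndarray:
    """d_rgb: H x W x 3 float array in [0, 1]; returns H x W x 3 YUV."""
    m = np.array([[ 0.299,  0.587,  0.114],   # Y: luminance
                  [-0.147, -0.289,  0.436],   # U: blue-difference chroma
                  [ 0.615, -0.515, -0.100]])  # V: red-difference chroma
    return d_rgb @ m.T

img = np.random.rand(4, 4, 3)  # stand-in for D_rgb
yuv = rgb_to_yuv(img)
y = yuv[..., 0]                # sharpen this channel only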