2020
DOI: 10.1371/journal.pone.0230605
Semantic segmentation of HeLa cells: An objective comparison between one traditional algorithm and four deep-learning architectures

Abstract: The quantitative study of cell morphology is of great importance, as the structure and condition of cells and their structures can be related to conditions of health or disease. The first step towards that is the accurate segmentation of cell structures. In this work, we compare five approaches, one traditional and four deep-learning, for the semantic segmentation of the nuclear envelope of cervical cancer cells commonly known as HeLa cells. Images of a HeLa cancer cell were semantically segmented with one tra…


Cited by 19 publications (23 citation statements) · References 79 publications
“…For the cell not including the nucleus, AC = 0.9629, JI = 0.8094; for the cell and the nucleus, AC = 0.9655, JI = 0.8711; and for the nucleus alone, AC = 0.9975, JI = 0.9665. The algorithm to segment the nucleus provided excellent results, and it had previously been reported that it outperformed several deep-learning architectures [59]. The small differences between the segmented nucleus and a manual expert segmentation are due mainly to the calculation of the thickness of the NE and to small invaginations (Figures 12, 13, right column).…”
Section: Results (mentioning)
confidence: 84%
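For reference, the AC and JI values quoted above are pixel accuracy and the Jaccard index. Below is a minimal sketch of how these two metrics are typically computed for binary segmentation masks with NumPy; the function and variable names are illustrative assumptions, not taken from the cited paper.

import numpy as np

def segmentation_metrics(pred, truth):
    # Pixel accuracy (AC) and Jaccard index (JI) for binary masks.
    # pred, truth: arrays of identical shape (hypothetical names).
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    ac = np.mean(pred == truth)               # fraction of agreeing pixels
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    ji = intersection / union if union > 0 else 1.0  # intersection over union
    return ac, ji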
“…This paper describes an extension to previous work, which focused on the segmentation of the NE of a cell from a cropped volume [59,60]. In this work, individual HeLa cells and their nuclei are instance-segmented in 3D.…”
Section: Introduction (mentioning)
confidence: 99%
“…Transfer learning, when relevant pre-trained parameters are available, is the default approach for extracting the best performance out of small training datasets (Huh et al., 2016; Devlin et al., 2018). While ImageNet-pretrained models are sometimes used for cellular EM segmentation tasks (Karabağ et al., 2020; Devan et al., 2019), high-level features learned from ImageNet may not be applicable to biological imaging domains (Raghu et al., 2019). Building a more domain-specific annotated dataset large enough for pre-training would be a significant bottleneck, and indeed, it required multiple years to annotate the 3.2 × 10⁶ images that form the basis of ImageNet.…”
Section: Introduction (mentioning)
confidence: 99%
“…Transfer learning, when relevant pre-trained parameters are available, is the default approach for extracting the best performance out of small training datasets [27,28]. While ImageNet-pretrained models are sometimes used for cellular EM segmentation tasks [29,30], high-level features learned from ImageNet may not be applicable to biological imaging domains [31]. Building a more domain-specific annotated dataset large enough for pre-training would be a significant bottleneck, and indeed, it required multiple years to annotate the 3.2 × 10⁶ images that form the basis of ImageNet. Fortunately, recent advances in unsupervised learning algorithms have now enabled effective pre-training and transfer learning without the need for any up-front annotations; in fact, on many tested benchmarks, unsupervised pre-training leads to better transfer-learning performance [32–38].…”
Section: Introduction (mentioning)
confidence: 99%
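As a sketch of the transfer-learning setup these two statements describe, the Python/PyTorch fragment below reuses an ImageNet-pretrained ResNet-18 as a frozen encoder and trains only a small decoder head for binary segmentation. The backbone, decoder shape, and layer choices are illustrative assumptions, not the pipeline of any cited paper.

import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights and keep everything up to the last
# convolutional stage (drop the average pool and classification head).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
encoder = nn.Sequential(*list(backbone.children())[:-2])
for p in encoder.parameters():
    p.requires_grad = False  # freeze the transferred features

# A deliberately small decoder head: the only part trained on the small
# EM dataset. One conv, a 32x bilinear upsample back to input resolution,
# then a 1x1 conv producing single-channel mask logits.
decoder = nn.Sequential(
    nn.Conv2d(512, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
    nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
    nn.Conv2d(64, 1, kernel_size=1),
)

model = nn.Sequential(encoder, decoder)
x = torch.randn(1, 3, 224, 224)  # grayscale EM slices replicated to 3 channels
logits = model(x)                # shape: (1, 1, 224, 224)

Freezing the encoder and training only the head is one common low-data strategy; fine-tuning the whole network with a small learning rate is the usual alternative when slightly more labeled data is available.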