2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018)
DOI: 10.1109/isbi.2018.8363600

SEGMENT3D: A web-based application for collaborative segmentation of 3D images used in the shoot apical meristem

Abstract: The quantitative analysis of 3D confocal microscopy images of the shoot apical meristem helps in understanding the growth process of some plants. Cell segmentation in these images is crucial for computational plant analysis, and many automated methods have been proposed. However, variations in signal intensity across the image reduce the effectiveness of those approaches, with no easy way for users to correct the results. We propose a web-based collaborative 3D image segmentation application, SEGMENT3D, to leverage automatic …

Cited by 10 publications (6 citation statements)
References 18 publications
“…In addition to the 2D projections, the obtained masked raw images can also be used for more extensive cell shape analyses in 3D+t and can directly be used as input for fully-automatic 3D segmentation methods if image quality suffices [8,1,3,15]. In order to measure cell shape properties in image data with limited image quality, where fully automatic methods still fail to provide error-free segmentations, we employed the interactive graphical user interface SEGMENT3D by Spina et al. [18], which allows the user to load small 3D crops, to manually draw scribbles within the cells of interest and to interactively correct remaining errors. A marker-based 3D watershed algorithm then uses the manual annotations to automatically compute the pixel-accurate segmentations.…”
Section: Cell Shape Segmentation in 2D and 3D (mentioning)
confidence: 99%
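The scribble-to-segmentation step described in this statement (a marker-based 3D watershed seeded from user scribbles) can be illustrated with off-the-shelf tools. The sketch below is not SEGMENT3D's actual implementation; it assumes a 3D crop with bright cell walls and a hypothetical marker volume standing in for the scribbles, and uses scikit-image's watershed.

import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

# Hypothetical inputs: a 3D confocal crop (Z, Y, X) with bright cell walls,
# and a marker volume in which each user scribble carries a distinct
# integer label (0 = unlabelled, to be filled by the watershed).
crop = np.random.rand(32, 128, 128).astype(np.float32)   # placeholder image
markers = np.zeros(crop.shape, dtype=np.int32)
markers[16, 40:44, 40:44] = 1    # scribble inside cell 1
markers[16, 90:94, 90:94] = 2    # scribble inside cell 2

# Build a smoothed gradient-magnitude landscape; the watershed floods
# outward from the scribbles and the fronts meet at the cell boundaries.
elevation = ndi.gaussian_gradient_magnitude(crop, sigma=1.0)
labels = watershed(elevation, markers=markers)

print(labels.shape, np.unique(labels))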
“…After all frames are segmented properly, the 2D segments can be tracked over time, e.g., to qualitatively analyze the spatial rearrangement of the cells or to quantify temporal changes of shape features on the single-cell level (data not shown). Moreover, the semi-automatically masked raw images can be further processed by manual [18] or automatic 3D segmentation approaches [15], and the proposed tracking module can be used to establish temporal correspondences between the cells via the largest spatial overlap of successive frames (Fig. 2C).…”
Section: Analyzing Drosophila Gastrulation (mentioning)
confidence: 99%
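The "largest spatial overlap" linking mentioned in this statement is a simple heuristic; a minimal, illustrative reimplementation (not the cited tracking module, with hypothetical array names) could look like this.

import numpy as np

def link_by_largest_overlap(labels_prev, labels_next):
    """Map each segment in labels_next to the segment in labels_prev
    with which it shares the most pixels/voxels (0 = no predecessor)."""
    links = {}
    for lab in np.unique(labels_next):
        if lab == 0:                       # skip background
            continue
        overlap = labels_prev[labels_next == lab]
        overlap = overlap[overlap != 0]    # ignore background in the previous frame
        links[int(lab)] = int(np.bincount(overlap).argmax()) if overlap.size else 0
    return links

# Toy 2D frames: the segment shifts slightly but keeps its identity.
frame_t  = np.zeros((8, 8), dtype=int); frame_t[2:5, 2:5] = 1
frame_t1 = np.zeros((8, 8), dtype=int); frame_t1[3:6, 3:6] = 7
print(link_by_largest_overlap(frame_t, frame_t1))   # -> {7: 1}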
“…To investigate the segmentation quality in deeper layers as well, we densely labeled a 128 × 128 × 200 image region using SEGMENT3D, a new interactive and collaborative 3D segmentation correction tool [12]. Fig.…”
Section: Segmentation of the Shoot Apical Meristem in A. thaliana (mentioning)
confidence: 99%
“…On the other hand, the creation of manually annotated data sets is very time-consuming and tedious, causing those data sets to be rarely available. Although many classical and machine learning-assisted annotation tools are available [6][7][8][9], which reduce the annotation time for biological experts, the dense annotation of 3D data in particular remains difficult. Current approaches propose to reduce annotation efforts and increase the generalizability of machine learning-based approaches by collecting a variety of annotated image data from slightly different domains, creating a highly diverse training data set [10].…”
Section: Introduction (mentioning)
confidence: 99%