2021
DOI: 10.3390/s22010206
Generative Adversarial Networks for Morphological–Temporal Classification of Stem Cell Images

Abstract: Neural network training on biological images frequently suffers from a lack of data, resulting in inefficient network learning. This issue stems from limitations of time and resources and from the difficulty of cellular experimentation and data collection. For example, when performing experimental analysis, a researcher may need to reserve most of their data for testing rather than model training. Therefore, the goal of this paper is to perform dataset augmentation using generative advers…

Cited by 5 publications (6 citation statements). References 49 publications.
“…The high classification accuracy of 98-99% that we have obtained approaches and sometimes exceeds the performance scores of previously reported classification models applied to pluripotent stem cells [11,12,[15][16][17][18][19][20][21][22][23][24]. Morphological parameters of cells and colonies used as predictors in our models are biologically interpretable but require methods for their extraction from the images prior to classification.…”
Section: Discussion
confidence: 54%
“…Methods for automated feature extraction from images and videos of hPSCs with the subsequent application of supervised ML algorithms constitute another approach, with the reported classification accuracy values higher than 87% [19][20][21]24]. DL-based classification models applied directly to the images of hPSCs have been reported to perform at about 90% accuracy [12,23].…”
Section: Discussion
confidence: 99%
“…As shown in Figure 4, the Transformer and Unet fusion model is constructed to predict the optical microscopy images by bridging a CNN for extracting feature representations and an efficient deformable Transformer for modeling the long-range dependency on the extracted feature maps. In our experiment, in the multi-layer perceptron (MLP) layers of the transformer model, the activation function GELU is replaced with ELU, which performs better because in medical images negative values are as important as positive values; ELU is defined as ELU(x) = x for x > 0 and α(e^x − 1) for x ≤ 0 (Witmer and Bhanu, 2022), where the hyperparameter α is set to 1.…”
Section: Methods
confidence: 99%
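The ELU activation discussed in the excerpt above is a standard function; a minimal sketch of it follows, independent of the cited fusion model (whose implementation is not reproduced here), with α defaulting to 1 as the excerpt states:

```python
import math

def elu(x: float, alpha: float = 1.0) -> float:
    """ELU activation: identity for positive inputs; for negative
    inputs, a smooth exponential that saturates at -alpha.
    The excerpt sets the hyperparameter alpha to 1 (the default here)."""
    return x if x > 0 else alpha * (math.exp(x) - 1.0)
```

Unlike ReLU, ELU maps negative inputs to small negative outputs rather than zero, which is the property the excerpt cites as useful when negative values in medical images carry as much information as positive ones.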
“…The learning-enhanced cell optical image-analysis model is capable of acquiring texture details from low-level source images and achieving higher resolution for label-free cell optical-imaging techniques (Chen et al., 2016; Lee et al., 2020; Ullah et al., 2021; Ullah et al., 2022). The deep-learning pipeline for cell optical microscopy imaging can extract complex data representations hierarchically, which helps uncover hidden cell structures in microscope images, such as the size of a single cell, the number of cells in a given area, the thickness of the cell wall, the spatial distribution between cells, and subcellular components and their densities (Boslaugh and Watters, 2008; Donovan-Maiye et al., 2018; Falk et al., 2019; Manifold et al., 2019; Rezatofighi et al., 2019; Yao et al., 2019; Zhang et al., 2019; Lee et al., 2020; Voronin et al., 2020; Zhang et al., 2020; Chen et al., 2021a; Gomariz et al., 2021; Manifold et al., 2021; Wang et al., 2022b; Islam et al., 2022; Kim et al., 2022; Melanthota et al., 2022; Rahman et al., 2022; Ullah et al., 2022; Witmer and Bhanu, 2022).…”
Section: Introduction
confidence: 99%