2020
DOI: 10.1109/access.2020.3001571
Segmentation of Cell Images Based on Improved Deep Learning Approach

Abstract: The improved U-Net algorithm based on mixed convolution blocks (McbUnet), which combines the advantages of U-Net and residual learning, is proposed for cell image segmentation in this paper. The network is mainly composed of two kinds of mixed convolution blocks. There are three main benefits to this algorithm. First, the convolution block can utilize different size kernels to overcome the limitation of a single size convolution kernel in traditional deep convolution. Second, in the mixed convolution blocks, t…
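The abstract is truncated, but it describes a block that applies convolution kernels of different sizes in place of a single fixed kernel. A minimal numpy sketch of one plausible reading — two parallel "same"-padded convolutions (3×3 and 5×5, sizes assumed) whose outputs are stacked as channels; the helper names are hypothetical, not from the paper:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution for a single-channel image."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def mixed_conv_block(x, k3, k5):
    """Hypothetical mixed block: run two kernel sizes in parallel on the
    same input and stack the responses along a channel axis."""
    return np.stack([conv2d_same(x, k3), conv2d_same(x, k5)])

x = np.random.rand(8, 8)
y = mixed_conv_block(x, np.ones((3, 3)) / 9, np.ones((5, 5)) / 25)
print(y.shape)  # (2, 8, 8): two filter responses over the 8x8 input
```

In a real network each branch would be a learned multi-channel convolution (e.g. in PyTorch), but the stacking of multi-scale responses is the part the abstract's first claimed benefit refers to.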

Cited by 12 publications (6 citation statements)
References 36 publications
“…If the traditional FAST9-16 algorithm is used for detection, an edge point p can still meet the requirement that the gray values of more than 9 continuous pixels in the 16-pixel neighborhood differ sufficiently, so the system will identify it as a corner point even though the point p is only an edge point [13,14]. Therefore, in order to exclude the interference of such edge points on the detection results, the FAST algorithm is improved as follows: the 24 pixels around the pixel point p are taken as the detection template, the gray value of the point p is I_p, and a threshold T is set.…”
Section: 1 (mentioning, confidence: 99%)
“…In formula (14), S(x, y) is the image of the object, R(x, y) is the reflection component of the object itself, and L(x, y) is the illumination component.…”
Section: MSRCR (mentioning, confidence: 99%)
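Formula (14) is the standard Retinex decomposition S(x, y) = R(x, y) · L(x, y). A minimal numpy sketch of recovering R by estimating L with a local mean (a stand-in for the Gaussian surround that MSRCR actually uses; the function name and window size are illustrative, not from the cited paper):

```python
import numpy as np

def retinex_decompose(s, k=5, eps=1e-6):
    """Split image S into illumination L and reflectance R under S = R * L.
    L is estimated as a k x k local mean; eps guards against division by zero."""
    ph = k // 2
    sp = np.pad(s, ph, mode='edge')
    l = np.zeros_like(s, dtype=float)
    for i in range(s.shape[0]):
        for j in range(s.shape[1]):
            l[i, j] = sp[i:i + k, j:j + k].mean()
    r = s / (l + eps)
    return r, l

# A smooth left-to-right brightness gradient acts as the illumination component
s = np.outer(np.linspace(1.0, 2.0, 16), np.ones(16))
r, l = retinex_decompose(s)
```

MSRCR extends this by averaging the log-domain decomposition over several Gaussian scales and adding color restoration, but the single-scale split above is the relationship formula (14) states.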
“…As future research, we plan to further improve the segmentation performance using a larger, ImageNet-pre-trained encoder or mixed convolution blocks [35], test-time augmentation [36], and the synthetic generation of new training samples [16]. In addition, studies are needed on how cell features, e.g., size, shape, and texture, influence the generalization ability to new cell types.…”
Section: Discussion (mentioning, confidence: 99%)
“…In addition, a cascaded two-dimensional neural network with an intermediate statistical model is proposed for the segmentation of the knee meniscus, which is used to generate smaller patch input for the three-dimensional neural network model. The authors introduce computational tools for cytological analysis, such as deep learning cell segmentation techniques capable of processing both free-floating and clumped abnormal cells from digitised images of traditional Pap smears with a high overlapping rate. In previous studies, some authors proposed image segmentation for medical images, but none proposed segmentation of drug-treated, diseased, or mitochondria cell images; this gap is why we work in this area and propose a new algorithm for segmenting drug-treated, diseased, and mitochondria cell images [23, 24]. More research is needed in this area, particularly in mitochondrial cells for measuring oxidative stress using machine learning.…”
Section: Introduction (mentioning, confidence: 99%)