2019
DOI: 10.1007/978-3-030-31723-2_4

ESNet: An Efficient Symmetric Network for Real-Time Semantic Segmentation

Abstract: Recent years have witnessed great advances in semantic segmentation using deep convolutional neural networks (DCNNs). However, a large number of convolutional layers and feature channels make semantic segmentation a computationally heavy task, which is a disadvantage in scenarios with limited resources. In this paper, we design an efficient symmetric network, called ESNet, to address this problem. The whole network has a nearly symmetric architecture, which is mainly composed of a series of factori…
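
The abstract breaks off at the factorized convolution units that form the nearly symmetric encoder-decoder. As a rough, hedged sketch of what such a unit generally looks like (this is not ESNet's exact block; the class name, residual connection, and channel layout below are assumptions for illustration), a k x k convolution can be factorized into a 1 x k convolution followed by a k x 1 convolution, cutting parameters and FLOPs:

import torch
import torch.nn as nn

class FactorizedConvUnit(nn.Module):
    """Illustrative factorized (asymmetric) convolution unit: a k x k
    convolution is approximated by a 1 x k convolution followed by a
    k x 1 convolution, reducing parameters and FLOPs for real-time use.
    Hypothetical example, not the exact block defined in the ESNet paper."""

    def __init__(self, channels: int, k: int = 3, dilation: int = 1):
        super().__init__()
        pad = (k // 2) * dilation
        self.conv1xk = nn.Conv2d(channels, channels, kernel_size=(1, k),
                                 padding=(0, pad), dilation=(1, dilation), bias=False)
        self.convkx1 = nn.Conv2d(channels, channels, kernel_size=(k, 1),
                                 padding=(pad, 0), dilation=(dilation, 1), bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.conv1xk(x))
        out = self.bn(self.convkx1(out))
        return self.act(out + x)  # residual connection keeps the unit lightweight

if __name__ == "__main__":
    x = torch.randn(1, 64, 128, 256)        # NCHW feature map
    print(FactorizedConvUnit(64)(x).shape)  # torch.Size([1, 64, 128, 256])

Stacking such units with increasing dilation is one common way real-time segmentation encoders grow the receptive field without additional downsampling.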

Cited by 55 publications (31 citation statements) · References 35 publications (127 reference statements)

“…To segment the COVID-19 infection from lung CT images, LungINFseg is compared with the state-of-the-art segmentation models, such as FCN [15], UNet [16], SegNet [17], FSSNet [18], SQNet [19], ContextNet [20], EDANet [21], CGNet [22], ERFNet [23], ESNet [24], DABNet [25], Inf-Net [12], and MIScnn [26]. All these models are assessed both quantitatively and qualitatively.…”
Section: Results
confidence: 99%
“We propose the RFA module that can enlarge the receptive field of the segmentation models and increase the learning ability of the model without information loss. We present a comprehensive comparison with 13 state-of-the-art segmentation models, namely, FCN [15], UNet [16], SegNet [17], FSSNet [18], SQNet [19], ContextNet [20], EDANet [21], CGNet [22], ERFNet [23], ESNet [24], DABNet [25], Inf-Net [12], and MIScnn [26]. Extensive experiments were performed to provide ablation studies that add a thorough analysis of the proposed LungINFSeg (e.g., the effect of resolution size and variation of the loss function).…”
Section: Introduction
confidence: 99%
“…However, a thorough analysis, as shown in Figure 6, reveals that there are actually two human silhouettes in difficult lighting conditions, so it was a challenge even for the manual labeling process for tag assignments in the ground truth. Figure 5 presents the segmentation results obtained with the ESNet [22], PSP-Net [23], LEDNet [24], ContextNet [16] models and our model, respectively, applied to the same images, allowing a better qualitative comparison of the results. Row (a) shows the original images and Row (b) the ground truth.…”
Section: Evaluation Methods
confidence: 99%
“…However, it still presents some obvious errors in the extremities in all models, particularly the arm of some of the silhouettes framed in a yellow rectangle in our model. Finally, the segmentation results for the original image of the third column are presented: row (c) of the third column shows the segmentation results obtained using the ESNet [22] model; row (d) the results using the LEDNet model [24]; and row (e) the results obtained with our model. It can be observed that, in general terms, a model specifically trained to detect people performs a finer detection of human silhouettes than its counterpart trained with multiple classes.…”
Section: Evaluation Methods
confidence: 99%