2019
DOI: 10.3390/app9194043

MFCSNet: Multi-Scale Deep Features Fusion and Cost-Sensitive Loss Function Based Segmentation Network for Remote Sensing Images

Abstract: Semantic segmentation of remote sensing images is an important technique for spatial analysis and geocomputation. It has important applications in the fields of military reconnaissance, urban planning, resource utilization and environmental monitoring. In order to accurately perform semantic segmentation of remote sensing images, we proposed a novel multi-scale deep features fusion and cost-sensitive loss function based segmentation network, named MFCSNet. To acquire the information of different levels in remo…

Cited by 11 publications (10 citation statements)
References 31 publications
“…However, the MSFFU-Net model proposed in this paper has better classification performance and the ability to detect more tiny vessels. This shows that the feature fusion decoder, which applies max-pooling indices, can record retinal vascular edge and location information more accurately, and that the multi-scale feature extraction encoder based on the Inception module makes thin retinal blood vessel features more discriminative, yielding excellent segmentation performance [35]. It therefore also demonstrates that the proposed model outperforms the U-Net model on retinal blood vessel segmentation.…”
Section: Results
confidence: 99%
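The max-pooling-indices mechanism mentioned in the statement above (the encoder records where each pooled maximum came from, and the decoder places values back at those exact positions) can be sketched in NumPy. This is an illustrative, minimal sketch of the general SegNet-style idea, not the papers' actual implementation; the function names are ours.

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling that records the flat position of each window's
    maximum, so a decoder can later restore values to those locations."""
    h, w = x.shape
    out = np.zeros((h // k, w // k), dtype=x.dtype)
    idx = np.zeros((h // k, w // k), dtype=np.int64)  # flat index into x
    for i in range(h // k):
        for j in range(w // k):
            win = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = np.unravel_index(np.argmax(win), win.shape)
            out[i, j] = win[r, c]
            idx[i, j] = (i * k + r) * w + (j * k + c)
    return out, idx

def max_unpool(pooled, idx, shape):
    """Decoder-side unpooling: place each pooled value back at the exact
    location recorded by the encoder; all other positions stay zero."""
    up = np.zeros(shape, dtype=pooled.dtype).ravel()
    up[idx.ravel()] = pooled.ravel()
    return up.reshape(shape)
```

Because the unpooled map keeps maxima at their original coordinates instead of smearing them over the window, fine edge and location detail (such as thin vessel boundaries) survives the encode-decode round trip better than with plain upsampling.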
“…The cross-entropy loss function is defined as L = −[y log(ŷ) + (1 − y) log(1 − ŷ)], where y and ŷ are the ground truth and the prediction, respectively; y is 0 or 1, and ŷ is between 0 and 1. The cost-sensitive matrix is incorporated, as shown in formula (3); this avoids under-fitting caused by the small number of retinal blood vessel pixels during neural network training [35]. When retinal blood vessels are misclassified, the cost is greater, so attention to retinal blood vessels is increased.…”
Section: Proposed Methods
confidence: 99%
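The cost-sensitive idea in the statement above is to weight the cross-entropy terms so that misclassifying a rare foreground (vessel) pixel costs more than misclassifying background. A minimal NumPy sketch, assuming a simple two-entry cost vector rather than the papers' exact cost matrix (the weights below are illustrative):

```python
import numpy as np

def cost_sensitive_bce(y, p, cost_fg=10.0, cost_bg=1.0, eps=1e-7):
    """Binary cross-entropy where foreground errors are scaled by cost_fg
    and background errors by cost_bg. y is 0/1 ground truth, p is the
    predicted probability in (0, 1)."""
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    per_pixel = -(cost_fg * y * np.log(p)
                  + cost_bg * (1.0 - y) * np.log(1.0 - p))
    return per_pixel.mean()
```

With cost_fg > cost_bg, gradients from missed vessel pixels dominate, which counteracts the class imbalance that would otherwise push the network toward predicting everything as background.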
“…The decoder fused and evaluated image features through scale fusion and scale sensitivity. In 2019, Wang et al. [77] designed multi-scale feature detection methods based on an encoder-decoder structure to fuse feature information from two different stages (such as low-level and high-level).…”
Section: Edge Detection Technology Based on Encoding and Decoding
confidence: 99%
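Fusing low-level (high-resolution) and high-level (coarse, semantic) feature maps, as described above, is commonly done by upsampling the coarse map to the fine map's spatial size and concatenating along the channel axis. A minimal NumPy sketch of that general pattern, not any cited paper's specific architecture; shapes and names are illustrative:

```python
import numpy as np

def fuse_features(low, high):
    """Fuse a (C_lo, H, W) low-level map with a coarser (C_hi, h, w)
    high-level map: nearest-neighbor upsample the coarse map to (H, W),
    then concatenate along the channel axis."""
    c_lo, h, w = low.shape
    c_hi, hh, hw = high.shape
    # nearest-neighbor upsampling via integer repetition along each axis
    up = high.repeat(h // hh, axis=1).repeat(w // hw, axis=2)
    return np.concatenate([low, up], axis=0)  # (C_lo + C_hi, H, W)
```

The fused tensor carries both precise localization (from the low-level branch) and semantic context (from the high-level branch), which a subsequent convolution can weigh against each other.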
“…Two modules, channel feature compression (CFC) and multi-level feature aggregation upsample (MFAU), were designed to reduce the loss of details and sharpen edges. Moreover, Wang et al. [28] defined a cost-sensitive loss function in addition to fusing multi-scale deep features.…”
Section: Introduction
confidence: 99%