2020
DOI: 10.1088/1742-6596/1678/1/012106
Deeplabv3+ semantic segmentation model based on feature cross attention mechanism

Abstract: To address the DeepLabv3+ model's inaccurate segmentation of image target edges, its slow feature fitting, and its failure to use attention information effectively, we propose adding a feature cross attention (FCA) module to the model. The cross-attention network consists of two branches and a feature cross attention module: the shallow branch extracts low-level spatial information, while the deep branch extracts high-level context features…
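The abstract describes a two-branch fusion: low-level spatial features from a shallow branch and high-level context from a deep branch, combined by a feature cross attention module. The exact FCA design is not given in this excerpt, so the following NumPy sketch is an illustrative assumption (the channel/spatial gating scheme and the function name are hypothetical, not the paper's definition):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_cross_attention(shallow, deep):
    """Hypothetical feature cross attention (FCA) fusion sketch.

    shallow: (C, H, W) low-level spatial features from the shallow branch
    deep:    (C, H, W) high-level context features, already upsampled
             to the same spatial size
    """
    # Channel attention derived from the deep branch:
    # global average pooling -> per-channel gate applied to shallow features
    chan_gate = sigmoid(deep.mean(axis=(1, 2), keepdims=True))   # (C, 1, 1)
    shallow_att = shallow * chan_gate

    # Spatial attention derived from the shallow branch:
    # channel-mean map -> per-pixel gate applied to deep features
    spat_gate = sigmoid(shallow.mean(axis=0, keepdims=True))     # (1, H, W)
    deep_att = deep * spat_gate

    # Fuse the cross-attended branches by channel concatenation
    return np.concatenate([shallow_att, deep_att], axis=0)       # (2C, H, W)
```

The point of "cross" attention here is that each branch is reweighted by a signal computed from the *other* branch, so spatial detail and semantic context gate one another before fusion.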

Cited by 13 publications (7 citation statements)
References 4 publications
“…So, there is more room for improving the algorithm. In future research, the classification performance of the algorithm may be further improved by adding further attention mechanisms to the network [45,46] or by replacing the basic structural units of the backbone network [47] to better apply to the identification of mangrove communities.…”
Section: Discussion
confidence: 99%
“…Attention mechanisms have proven to be effective in computer vision, allowing models to focus on relevant parts of the input by assigning different weights to different regions [46]. Several works have demonstrated that incorporating attention can enhance the performance of semantic segmentation models, including DeepLabv3+ [47,48,49]. In our work, we have chosen to use the SimAM (Simple Attention Module) [10].…”
Section: Proposed Methods
confidence: 99%
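SimAM, chosen by the citing work above, is a parameter-free attention module that weights each activation by an energy function computed from per-channel spatial statistics. A minimal NumPy sketch of that weighting (following the published closed-form solution, with `lam` as the regularization constant; the function name is ours) might look like:

```python
import numpy as np

def simam(x, lam=1e-4):
    """Parameter-free SimAM attention over a (C, H, W) feature map.

    Each activation is gated by sigmoid(1/e*), where e* is the minimal
    energy of that neuron relative to its channel's spatial statistics.
    """
    C, H, W = x.shape
    n = H * W - 1                                   # unbiased denominator
    mu = x.mean(axis=(1, 2), keepdims=True)         # per-channel spatial mean
    d = (x - mu) ** 2                               # squared deviation
    var = d.sum(axis=(1, 2), keepdims=True) / n     # per-channel variance
    # Inverse minimal energy: distinctive neurons get larger values
    e_inv = d / (4 * (var + lam)) + 0.5
    # Sigmoid gate, applied elementwise to the input features
    return x * (1.0 / (1.0 + np.exp(-e_inv)))
```

Because the gate is derived entirely from the feature statistics, SimAM adds attention without introducing any learnable parameters, which is why it is attractive as a drop-in addition to DeepLabv3+.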
“…The overall structure of the improved DeepLabv3 plus [8,9,10] model is shown in Figure 1 below. In the encoder section, MobileNetV2 is used as the main part to accelerate prediction speed.…”
Section: Improve DeepLabv3 Plus Network
confidence: 99%
“…We propose an improved algorithm based on DeepLabV3 plus, which can effectively optimize the original network structure. By reconstructing the ASPP [12] module in DeepLabv3, using strip pooling instead of global average pooling, and modifying its dilation (atrous) rate to fuse information from different scales, a denser feature scale range is obtained in a densely connected manner, thereby improving the segmentation accuracy of network recognition.…”
Section: Introduction
confidence: 99%
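The strip pooling substitution described in the excerpt above replaces global average pooling (which collapses a feature map to a single value per channel) with separate row-wise and column-wise averages, preserving long-range structure along each axis. A minimal NumPy sketch (the function name and the additive fusion of the two strips are illustrative assumptions, not the citing paper's exact design):

```python
import numpy as np

def strip_pooling(x):
    """Strip pooling over a (C, H, W) feature map.

    Global average pooling would reduce x to (C, 1, 1); strip pooling
    instead keeps one average per row and one per column, then
    broadcasts the two strips back over the full spatial extent.
    """
    horiz = x.mean(axis=2, keepdims=True)   # (C, H, 1): one value per row
    vert = x.mean(axis=1, keepdims=True)    # (C, 1, W): one value per column
    return horiz + vert                     # broadcasts to (C, H, W)
```

Each output location thus mixes context from its entire row and its entire column, which suits elongated targets that a single global average would blur away.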