2023
DOI: 10.1007/s11432-022-3592-8
Double-branch fusion network with a parallel attention selection mechanism for camouflaged object detection

Cited by 6 publications (3 citation statements)
References 36 publications
“…Attention mechanisms [19,20] have become a widely used technique in various sequence-based tasks. One of the main advantages of attention mechanisms is the ability to handle inputs of variable sizes by focusing on the most relevant parts of the input to make informed decisions.…”
Section: GAT Architecture and Federated Learning (mentioning)
confidence: 99%
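The variable-length property this excerpt highlights can be made concrete with a minimal scaled dot-product attention sketch (all names here are illustrative, not taken from the cited paper):

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Attention pools a variable-length input into a fixed-size output.

    q: (d,) query vector; k, v: (n, d) keys/values for n input elements.
    n may vary per input -- the softmax weights always sum to 1.
    """
    scores = k @ q / (q.shape[-1] ** 0.5)   # (n,) relevance of each element
    weights = F.softmax(scores, dim=-1)     # focus on the most relevant parts
    return weights @ v                      # (d,) fixed-size summary

# The same function works unchanged for sequences of length 5 or 50.
d = 8
q = torch.randn(d)
out_short = scaled_dot_product_attention(q, torch.randn(5, d), torch.randn(5, d))
out_long = scaled_dot_product_attention(q, torch.randn(50, d), torch.randn(50, d))
assert out_short.shape == out_long.shape == (d,)
```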
“…For example, in image classification tasks, channel attention can help the network to better distinguish feature differences between different categories [13-17]. In a target detection task, channel attention improves the network's ability to accurately locate and recognize targets [18-22]. However, current channel attention modules simply and crudely use global pooling in the compression process, which may lose locally important information and lead to relatively limited modeling of complex semantic relationships.…”
Section: Introduction (mentioning)
confidence: 99%
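The global-pooling compression step this excerpt criticizes is easiest to see in a minimal squeeze-and-excitation style channel attention sketch (a sketch of the standard SE design, not the cited paper's module):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze by global pooling, then excite."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Global average pooling collapses each HxW map to one scalar --
        # the compression step the excerpt says can discard local detail.
        squeezed = x.mean(dim=(2, 3))             # (b, c)
        weights = self.fc(squeezed).view(b, c, 1, 1)
        return x * weights                        # reweight the channels

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```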
“…For example, in image classification tasks, channel attention can help the network to better distinguish feature differences between different categories [13-17]. In a target detection task, channel attention improves the network's ability to accurately locate and recognize targets [18-22]. However, current channel attention modules simply and crudely use global pooling in the compression process. Since channel attention also makes use of category semantic information, we introduced the channel attention mechanism in the feature fusion phase, as shown in Figure 2.…”
Section: Introduction (mentioning)
confidence: 99%
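The fusion-phase usage this excerpt describes could look like the following sketch (purely illustrative, not the cited architecture; it reuses the hypothetical ChannelAttention module from the previous block):

```python
import torch
import torch.nn as nn

class FusionWithChannelAttention(nn.Module):
    """Fuse two same-shape feature maps, then reweight the fused channels."""

    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.attn = ChannelAttention(channels)  # defined in the previous sketch

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        fused = self.proj(torch.cat([a, b], dim=1))  # concatenate, then project
        return self.attn(fused)                      # channel attention on the fusion

a, b = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(FusionWithChannelAttention(64)(a, b).shape)  # torch.Size([2, 64, 32, 32])
```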