Proceedings of the 37th ACM/SIGAPP Symposium on Applied Computing 2022
DOI: 10.1145/3477314.3507112
DAM-AL


Cited by 6 publications (2 citation statements)
References 17 publications
“…Third, to prevent overfitting and hasten the training of the suggested model, a sequential spectral-spatial attention module has been integrated into a residual block. DAM-AL [25] proposes a dilated attention mechanism that captures low-level spatial structural features with spatial attention and high-level context features with channel-wise attention. Moreover, the dilated attention network has skip connections and atrous block layers that support capturing both high- and low-level features correlating foreground and background, and recapturing the change of the context relationship around the object boundary through a prototypical network iteratively.…”
Section: Attention Network
mentioning confidence: 99%
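For intuition only, the following is a minimal PyTorch sketch of a dilated-attention block matching the description above: spatial attention applied to low-level structural features, channel-wise attention applied to high-level context features produced by atrous (dilated) convolutions, and a skip connection joining the two paths. All class names, layer sizes, and dilation rates are illustrative assumptions, not the DAM-AL authors' implementation.

# A minimal sketch of a dilated-attention block, assuming the split described
# above: spatial attention on low-level structure, channel-wise attention on
# high-level context, atrous convolutions, and a skip connection.
# Names and hyperparameters are hypothetical, not taken from DAM-AL.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Channel-wise attention for high-level context features."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Re-weight each channel by its globally pooled descriptor.
        return x * self.fc(x)


class SpatialAttention(nn.Module):
    """Spatial attention for low-level structural features."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)        # channel-averaged map
        max_map, _ = x.max(dim=1, keepdim=True)      # channel-max map
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn


class DilatedAttentionBlock(nn.Module):
    """Atrous convolutions plus attention, joined by a skip connection."""
    def __init__(self, low_channels, high_channels, dilations=(1, 2, 4)):
        super().__init__()
        self.atrous = nn.ModuleList([
            nn.Conv2d(high_channels, high_channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(high_channels * len(dilations), high_channels, 1)
        self.channel_attn = ChannelAttention(high_channels)
        self.spatial_attn = SpatialAttention()
        self.project = nn.Conv2d(low_channels, high_channels, 1)  # align channel counts

    def forward(self, low_level, high_level):
        # Multi-dilation context from the high-level features, then channel attention.
        context = self.fuse(torch.cat([conv(high_level) for conv in self.atrous], dim=1))
        context = self.channel_attn(context)
        # Spatial attention on the low-level features, resampled to the high-level grid.
        structure = self.project(self.spatial_attn(low_level))
        structure = F.interpolate(structure, size=high_level.shape[-2:],
                                  mode="bilinear", align_corners=False)
        # Skip connection keeps the original high-level signal.
        return high_level + context + structure

Resampling the attended low-level map onto the high-level grid before the residual sum is one plausible way to let the block correlate foreground structure with the surrounding context; the paper's exact fusion may differ.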
“…Here, the dilated attention mechanism [25] is employed as an attention block to extract multi-scale features. Low-level structural features are extracted by the low-level layers (conv1, conv2) of the attention network, and high-level context is produced by layers conv3, conv4, and conv5.…”
Section: B. Attention Block
mentioning confidence: 99%
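To show how such a block might sit behind a backbone, the sketch below wires a torchvision ResNet-50 so that its early stages (conv1, conv2) supply the low-level input and its deeper stages (conv3 through conv5) supply the high-level input, reusing the DilatedAttentionBlock sketched earlier. The choice of ResNet-50 and the channel counts are assumptions for illustration; the citing paper's backbone and exact wiring may differ.

# A hypothetical usage sketch: feed low-level (conv1-conv2) and high-level
# (conv3-conv5) backbone features into the dilated-attention block above.
import torch
import torchvision

backbone = torchvision.models.resnet50()  # untrained backbone, illustrative only

def extract_features(x):
    # Low-level structural features: stem plus first residual stage (conv1, conv2).
    x = backbone.relu(backbone.bn1(backbone.conv1(x)))
    x = backbone.maxpool(x)
    low = backbone.layer1(x)                                       # conv2 stage, 256 channels
    # High-level context features: deeper stages (conv3, conv4, conv5).
    high = backbone.layer4(backbone.layer3(backbone.layer2(low)))  # conv5 stage, 2048 channels
    return low, high

attention_block = DilatedAttentionBlock(low_channels=256, high_channels=2048)
low, high = extract_features(torch.randn(1, 3, 224, 224))
out = attention_block(low, high)   # multi-scale, attention-refined features
print(out.shape)                   # torch.Size([1, 2048, 7, 7])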