2023
DOI: 10.1109/lgrs.2023.3254049
Remote Sensing Image Fusion With Task-Inspired Multiscale Nonlocal-Attention Network

Cited by 5 publications (2 citation statements)
References 25 publications
“…
Method | Problem addressed | Optimization strategy
FRPNet [65] | Scale-varying | Feature return pyramid structure
SBFPN [66] | Scale-varying | Bidirectional feature pyramid structure
BMFFN [67] | Scale-varying | Bidirectional multiscale feature fusion network
MFPNet [68] | Scale-varying | Receptive field blocks (RFBs)
SSPNet [69] | Conflicting information | CAM + SSM
SMAG [70] | Conflicting information | Multi-scale supervision module
MSAN [71] | Scale-varying | Multi-scale activation feature fusion block (MAFB)
HA-MHGEN [72] | Scale-varying | Extracts explicit and implicit relationships
RRNet [73] | Scale-varying | Parallel multi-scale attention (PMA)
A-MLFFMs [74] | Scale-varying | Adaptively integrates the multi-level outputs
MNAN [75] | Scale-varying | Enhancing multi-scale targets
FE-CenterNet [76] | Scale-varying | FAS + AGS
CDD-Net [77] | Scale-varying | LCFN + HAPN
AdaZoom [78] | Scale-varying | Variable magnification for adaptive multi-scale detection
ZoomInNet [79] | Scale-varying | Adaptive key distillation point (AKDP)
UFPMP-Det [80] | Scale-varying | Unified foreground packing (UFP)
SRAF-Net [81] | Scale-varying | Context-based deformable (CBD) module
GSDet [82] | Scale-varying | Converts GSD regression into a probabilistic estimation process
GFA-Net [83] | Scale-varying | Graph Focusing Process (GFP)

…maps, the attention-guided module enriches the feature representation and enables more comprehensive and discriminative feature learning. A multi-scale attention network [71] is proposed that incorporates the multi-scale activation feature fusion block to achieve multi-scale attention.…”
Section: Problem Solved / Optimization Strategies (mentioning)
Confidence: 99%
“…The attention-based multi-level feature fusion modules provide a flexible and adaptive framework for effectively exploiting the hierarchical representations of the FPN, enabling more accurate and robust target detection across different scales. A multi-scale nonlocal attention-based network [75] is designed to effectively capture and fuse information at multiple scales, allowing comprehensive analysis of the scene and enhancing detection performance for objects of various sizes. Tianjun Shi [76] proposes an anchor-free detector that mines multi-scale contextual information using a feature enhancement module consisting of a feature aggregation structure and an attention generation structure, combined with a coordinate attention mechanism to suppress the interference of false alarms in the scene, thus improving the perception of small objects.…”
Section: Problem Solved / Optimization Strategies (mentioning)
Confidence: 99%
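The multi-scale nonlocal attention described in the citation statements above can be sketched in miniature: apply a non-local (self-attention) block over spatial positions at several scales, then fuse the results. This is a minimal NumPy illustration under stated assumptions, not the paper's actual architecture; the function names `nonlocal_attention` and `multiscale_nonlocal_fusion` are hypothetical, and nearest-neighbour strided resampling stands in for whatever up/downsampling the real network uses.

```python
import numpy as np

def nonlocal_attention(feat):
    """Non-local block: every spatial position attends to all others.
    feat: (C, H, W) feature map. Returns feat plus the attended response
    (residual connection), as in standard non-local network designs."""
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)                  # flatten spatial dims: (C, N)
    affinity = x.T @ x                          # pairwise similarity: (N, N)
    affinity -= affinity.max(axis=1, keepdims=True)   # stabilize softmax
    weights = np.exp(affinity)
    weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
    out = x @ weights.T                         # aggregate features over all positions
    return feat + out.reshape(c, h, w)

def multiscale_nonlocal_fusion(feat, scales=(1, 2)):
    """Run the non-local block at several downsampled scales and average
    the upsampled responses (hypothetical fusion rule; nearest-neighbour
    strided down/upsampling keeps the sketch dependency-free)."""
    c, h, w = feat.shape
    fused = np.zeros_like(feat)
    for s in scales:
        small = feat[:, ::s, ::s]                        # downsample by stride s
        att = nonlocal_attention(small)
        up = att.repeat(s, axis=1).repeat(s, axis=2)     # nearest upsample
        fused += up[:, :h, :w]
    return fused / len(scales)
```

Attending at a coarse scale lets small objects borrow context from the whole scene cheaply (the affinity matrix shrinks quadratically with downsampling), while the fine scale preserves localization, which is the intuition behind fusing the two.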