2022 · DOI: 10.1109/tgrs.2022.3179379
MANet: Multi-Scale Aware-Relation Network for Semantic Segmentation in Aerial Scenes

Cited by 21 publications (8 citation statements) · References 63 publications
“…
Method          Backbone  Complexity (G)  Parameters (M)  Speed (FPS)
SegFormer [5]   MiT-B1    63.3            13.7            31.3
BiSeNet [8]     ResNet18  51.8            12.9            121.9
MANet [9]       ResNet18  51.7            12.0            75.6
UNetformer [6]  ResNet18  46.9            11.7            115.6
GDformer        ResNet18  46.4            11.5            136.0
…”
Section: Methods
confidence: 99%
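The quoted table reports inference speed in frames per second (FPS) alongside complexity and parameter count. As a minimal sketch of how such an FPS figure is typically measured (the benchmarking protocol here is an assumption, not taken from the citing paper), one times repeated forward passes after a warm-up phase and divides run count by elapsed time:

```python
import time

def measure_fps(infer_fn, n_warmup=10, n_runs=100):
    """Estimate inference speed in frames per second (FPS).

    `infer_fn` is a hypothetical zero-argument callable that runs one
    forward pass. Warm-up iterations are excluded from timing, as is
    standard practice when benchmarking segmentation models.
    """
    for _ in range(n_warmup):
        infer_fn()                      # warm caches / JIT / GPU clocks
    start = time.perf_counter()
    for _ in range(n_runs):
        infer_fn()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed             # frames per second
```

For GPU models, one would additionally synchronize the device before reading the clock; this pure-Python version only illustrates the warm-up-then-time pattern behind numbers like the 136.0 FPS reported for GDformer.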
“…To demonstrate the advantages of the proposed method in defending against poisoning attacks and producing accurate semantic segmentation, we compare it with several state-of-the-art methods, both CNN-based and Transformer-based. Among the CNN-based methods, the proposed RIFENet is compared with AFNet [69], SBANet [70], MANet [71], SSAtNet [72], and HFGNet [25]. Among the Transformer-based methods, RIFENet is compared with STUFormer [73], EMRFormer [74], CONFormer [75], ATTFormer [76], and DSegFormer [77].…”
Section: Comparison With State-of-the-art Methods
confidence: 99%
“…MANet [71]: This network uses discriminative feature learning to obtain fine-grained feature information, and a multi-scale feature calibration module to filter redundant features and enhance the feature representation.…”
confidence: 99%
“…It takes into account the effects of the depth, width, and resolution factors and combines them to form the EfficientNet series. We chose the EfficientNet-B1 network [34] as the backbone; the number of output channels at each level is (24, 40, 112, 320), and the number of MBConv basic blocks used at each level is (5, 3, 8, 7).…”
Section: EfficientNet As Encoder
confidence: 99%
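The per-level block counts quoted above can be derived from EfficientNet's published compound scaling. A minimal sketch, assuming the B0 baseline stage settings and the depth-rounding rule from the reference implementation (B1 scales depth by 1.1), and grouping stages at the encoder's four output scales:

```python
import math

# Published EfficientNet-B0 baseline: MBConv repeats and output channels
# per stage (Tan & Le, 2019).
B0_REPEATS  = [1, 2, 2, 3, 3, 4, 1]
B0_CHANNELS = [16, 24, 40, 80, 112, 192, 320]

def round_repeats(repeats, depth_coefficient):
    # The reference implementation rounds scaled repeat counts UP.
    return int(math.ceil(depth_coefficient * repeats))

# EfficientNet-B1 uses depth coefficient 1.1 (width coefficient 1.0).
b1_repeats = [round_repeats(r, 1.1) for r in B0_REPEATS]
print(b1_repeats)  # [2, 3, 3, 4, 4, 5, 2]

# Grouping stages at the four feature scales the encoder taps
# (stages 1-2 -> 24 ch, 3 -> 40 ch, 4-5 -> 112 ch, 6-7 -> 320 ch)
# recovers the quoted block counts per level.
levels = [b1_repeats[0] + b1_repeats[1],
          b1_repeats[2],
          b1_repeats[3] + b1_repeats[4],
          b1_repeats[5] + b1_repeats[6]]
print(levels)  # [5, 3, 8, 7]
```

The stage grouping chosen here is an inference from the quoted numbers, not stated in the citing paper, but it reproduces both the channel list (24, 40, 112, 320) and the block counts (5, 3, 8, 7) exactly.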