2022
DOI: 10.1109/tgrs.2022.3201755
All Grains, One Scheme (AGOS): Learning Multigrain Instance Representation for Aerial Scene Classification

Abstract: Aerial scene classification remains challenging because: 1) the size of the key objects that determine the scene scheme varies greatly; and 2) many objects irrelevant to the scene scheme often flood the image. Hence, how to effectively perceive regions of interest (RoIs) across a variety of sizes and build a more discriminative representation from such a complicated object distribution is vital to understanding an aerial scene. In this paper, we propose a novel all grains, one scheme (AGOS) framework to tackle these ch…
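The abstract describes building one representation from object evidence at multiple grains (sizes). As an illustrative analogue only — not the authors' AGOS module — the sketch below pools a 2-D feature map at several grain sizes (1×1, 2×2, 4×4 cells) and concatenates the pooled values into a single multigrain descriptor; all function names are hypothetical:

```python
# Illustrative multigrain pooling: average-pool a 2-D feature map at several
# grain sizes and concatenate the results into one descriptor. This is a
# generic analogue of a "multigrain" representation, not the exact AGOS module.

def avg_pool_region(fmap, r0, r1, c0, c1):
    """Mean of fmap over rows r0:r1 and columns c0:c1."""
    vals = [fmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    return sum(vals) / len(vals)

def multigrain_descriptor(fmap, grains=(1, 2, 4)):
    """For each grain g, split the map into g x g cells and pool each cell."""
    h, w = len(fmap), len(fmap[0])
    desc = []
    for g in grains:
        for i in range(g):
            for j in range(g):
                r0, r1 = i * h // g, (i + 1) * h // g
                c0, c1 = j * w // g, (j + 1) * w // g
                desc.append(avg_pool_region(fmap, r0, r1, c0, c1))
    return desc
```

For a 4×4 map the descriptor has 1 + 4 + 16 = 21 entries, so small and large objects each contribute cells at their own scale.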

Cited by 20 publications (10 citation statements)
References: 87 publications
“…Medical image segmentation has been developed rapidly owing to the stronger representation from deep learning techniques (Bi et al. 2022; Ji et al. 2022; Li et al. 2021a).…”
Section: Related Work (mentioning)
confidence: 99%
“…Wang and Lan (2021) and Miao et al. (2023) proposed similar single-CNN methods for fusing the deformed or decoupled features. Bi et al. (2022) proposed a single-CNN method by fusing reformulated features through a so-called multigrain-perception module. Lv et al. (2022) proposed a single-ViT method using a channel-attention module to fuse the channel and spatial features.…”
Section: Related Work (mentioning)
confidence: 99%
“…LGRINet (Xu et al., 2022a, 2022b, 2022c, 2022d): 4
(Shen et al., 2022a): 3.8, 95.32 (TR-60%)
TST-Net (Chen et al., 2018), KD: 1.0, 80.00 (TR-60%)
ESD-MBENet (Zhao et al., 2022): 23.9, 93.05 ± 0.18, 95.36 ± 0.14
ET-GSNet (Xu et al., 2022a, 2022b, 2022c, 2022d)
(Bi et al., 2022), feature refining: >12.5, 93.04 ± 0.35, 94.91 ± 0.17
MGS-Net (Guo et al., 2022): 244.2, 91.92 ± 0.12, 94.33 ± 0.08
GSCCTL-Net (Song and Yang, 2022): None, 91.96, None
ViT-Huge (Bazi et al., 2021), single ViT: 86, 93.83 ± 0.46, None
ViT-AEv2 (Wang et al., 2023): 18.8, 94.41 ± 0.11, 95.60 ± 0.06
SC-ViT (Lv et al., 2022): 40.1, 92.72 ± 0.04, 94.66 ± 0.10
DFAGC-Net (Xu et al., 2022a, 2022b, 2022c, 2022d), multiple models: None, None, 89.29 ± 0.28
GRMA-Net (Li et al., 2022): 54.1, 93.67 ± 0.21, 95.32 ± 0.28
ACNet (Tang et al., 2021): >276.6, 91.09 ± 0.13, 92.42 ± 0.16
T-CNN (Wang et al., 2022b): 15.9, 90.25 ± 0.14, 93.05 ± 0.12
GLDBS-Net (Xu et al., 2022a, 2022b, 2022c…”
Section: Efficient Knowledge Distillation (mentioning)
confidence: 99%
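The comparison above groups methods under knowledge distillation (KD). For background only, here is a minimal sketch of the standard KD objective (temperature-scaled KL divergence between teacher and student soft targets, after Hinton et al.); this is the generic formulation, not any cited method's exact loss:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; subtracting the max keeps exp() stable."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher soft targets || student predictions), scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (T ** 2) * kl
```

The loss is zero when the student reproduces the teacher's logits and grows as their soft distributions diverge; in practice it is combined with a cross-entropy term on the ground-truth labels.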
“…Similarly, Shi et al. (2022) and Bai et al. (2022) introduced their methods using variants of standard convolution structures. Moreover, Zhang et al. (2022), Bi et al. (2022), Guo et al. (2022), Song and Yang (2022), and Miao et al. (2023) also proposed CNN feature-refining methods using Laplacian blocks, multigrain formulation, multi-granularity learning, scene clustering, or multi-granularity decoupling, respectively. Given that module insertion can weaken the pre-training effect, these studies should initially pre-train their modified deep models on large-scale data sets to ensure their training begins from a more advantageous point (Song and Zhou, 2023).…”
Section: Related Work (mentioning)
confidence: 99%