2022
DOI: 10.1109/lgrs.2021.3088277
MINet: Multilevel Inheritance Network-Based Aerial Scene Classification

Cited by 6 publications (4 citation statements). References 27 publications.
“…The two branches are merged with depthwise convolutions to decrease the dimensionality. Hu et al [108] introduced a multilevel inheritance network (MINet), where FPN based on ResNet-50 is adopted to acquire multilayer features. Subsequently, an attention mechanism is employed to augment the expressive capacity of features at each level.…”
Section: Pretrained CNNs for Feature Extraction
confidence: 99%
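The citation above describes two ideas: an FPN-style top-down merge of multilevel ResNet-50 features, and an attention mechanism applied at each level. The sketch below is a minimal, purely illustrative pure-Python rendering of those two ideas, NOT the authors' code: feature maps are nested lists `[channel][row][col]`, the attention is a squeeze-and-excitation-style channel reweighting, and all function names are hypothetical.

```python
# Illustrative sketch (assumed structure, not MINet's actual implementation):
# FPN-like top-down merging of multilevel features, with squeeze-and-
# excitation-style channel attention applied at every merged level.
import math

def global_avg_pool(fmap):
    """Squeeze step: average each channel down to one scalar."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]

def channel_attention(fmap):
    """Excite step: reweight each channel by a sigmoid of its pooled value."""
    weights = [1.0 / (1.0 + math.exp(-v)) for v in global_avg_pool(fmap)]
    return [[[w * x for x in row] for row in ch] for ch, w in zip(fmap, weights)]

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling, as in an FPN top-down pathway."""
    out = []
    for ch in fmap:
        rows = []
        for row in ch:
            wide = [x for x in row for _ in (0, 1)]  # duplicate columns
            rows.append(wide)
            rows.append(list(wide))                  # duplicate rows
        out.append(rows)
    return out

def add_maps(a, b):
    """Element-wise sum of two feature maps of identical shape."""
    return [[[x + y for x, y in zip(ra, rb)] for ra, rb in zip(ca, cb)]
            for ca, cb in zip(a, b)]

def fpn_merge(levels):
    """Merge coarse-to-fine: upsample the coarser map, add the lateral map,
    then apply channel attention to the result at every level."""
    levels = sorted(levels, key=lambda f: len(f[0]))  # coarsest (smallest) first
    merged = [channel_attention(levels[0])]
    prev = levels[0]
    for lateral in levels[1:]:
        prev = add_maps(upsample2x(prev), lateral)
        merged.append(channel_attention(prev))
    return merged
```

Calling `fpn_merge` on a list of feature maps at different resolutions (e.g. a 1x1 and a 2x2 single-channel map) returns one attention-weighted map per level, mirroring the coarse-to-fine flow the citation attributes to MINet.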
“…It is also noticed that [83, 106] aims to solve multiple research problems. Research problems and the techniques used to address them include: combining GIST with CNN features [156]; integrating multiple color features via deep color-network fusion [157]; combining mid-level and deep-level information by merging deep-level and mid-level encoder features in the decoder branch [158]; extracting multilevel feature maps with a multilayer network [108]; and improving classifier learning via statistical transfer based on inter-class similarity [159].…”
Section: Research Problem and Utilized Research Techniques
confidence: 99%
“…The results show that our method outperforms fine-tuning VGG-19 on the NaSC-TG2 dataset (training ratio of 20%), with an overall accuracy improvement of 0.89%. Compared with fine-tuning VGG-19, GBNet [27], and MINet [28], the overall accuracy of VGG-SA on the WHU-RS19 dataset with a 40% training ratio improved by 0.52%, 2.32%, and 0.13%, respectively. The proposed method yields higher accuracy than state-of-the-art methods on the AID dataset with a training ratio of 20%, improving by 1.35%, 3.26%, and 0.17% compared with fine-tuning VGG-19, VGG-VD16+SAFF [2], and MG-CAP (Sqrt-E) [29], respectively.…”
Section: Introduction
confidence: 99%