2023
DOI: 10.3390/rs15061546

Credible Remote Sensing Scene Classification Using Evidential Fusion on Aerial-Ground Dual-View Images

Abstract: Due to their ability to offer more comprehensive information than data from a single view, multi-view (e.g., multi-source, multi-modal, multi-perspective) data are being used more frequently in remote sensing tasks. However, as the number of views grows, the issue of data quality is becoming more apparent, limiting the potential benefits of multi-view data. Although recent deep neural network (DNN)-based models can learn the weight of data adaptively, a lack of research on explicitly quantifying the data quali…

Cited by 6 publications (6 citation statements)
References 45 publications
“…Unlike the traditional scene classification task, the loss function of MVEDFN is computed by integrating the Dirichlet distribution. To ensure that both perspectives can offer reasonable guidance for multi-view scene classification, the loss function of MVEDFN takes a hybrid approach, combining the aerial-view Dirichlet distribution integral loss L_loss(α_i^1), the ground-view Dirichlet distribution integral loss L_loss(α_i^2), and the fused multi-view Dirichlet distribution integral loss L_loss(α_i^fusion) during network training [31], as shown in Equations (9) and (10):…”
Section: Loss Function
confidence: 99%
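The hybrid loss described in this citation statement can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: it assumes the common type-II maximum-likelihood form of the Dirichlet integral loss, Σ_j y_j (log S − log α_j) with S = Σ_j α_j, and equal (unweighted) summation of the three view losses; the function names `dirichlet_ml_loss` and `mvedfn_total_loss` are hypothetical.

```python
import numpy as np

def dirichlet_ml_loss(alpha, y):
    # Negative log marginal likelihood of one-hot labels y under a
    # Dirichlet(alpha) opinion: sum_j y_j * (log S - log alpha_j),
    # where S = sum_j alpha_j is the Dirichlet strength.
    # alpha, y: arrays of shape (batch, num_classes); alpha > 0.
    S = alpha.sum(axis=-1, keepdims=True)
    return np.sum(y * (np.log(S) - np.log(alpha)), axis=-1)

def mvedfn_total_loss(alpha_aerial, alpha_ground, alpha_fused, y):
    # Hybrid loss: per-view Dirichlet integral losses for the aerial and
    # ground branches plus the loss on the fused evidence, summed per
    # sample and averaged over the batch (assumed equal weighting).
    per_sample = (dirichlet_ml_loss(alpha_aerial, y)
                  + dirichlet_ml_loss(alpha_ground, y)
                  + dirichlet_ml_loss(alpha_fused, y))
    return per_sample.mean()
```

Because each term penalizes low evidence (small α) on the true class, summing the three terms pushes both single-view branches and the fused opinion toward the correct label, which is the "reasonable guidance from both perspectives" the quoted text refers to.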
“…The existing methods for the fusion classification of multi-view remote sensing scene images can be broadly classified into three levels: data level, feature level, and decision level. Therefore, the MVEDFN method proposed in this paper is compared with a data-level fusion method (the six-channel method [42]), two feature-level fusion methods (CILM [23] and MSAN [24]), and four decision-level fusion methods (SoftMax product [22], SoftMax sum [22], EFN [31], and TMC [41]). The following techniques are briefly outlined.…”
Section: Comparison Experiments
confidence: 99%
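Two of the decision-level baselines named above, the SoftMax sum and SoftMax product rules [22], can be sketched directly: each branch produces per-class probabilities, and the decisions are fused by averaging or by multiplying and renormalizing. This is a generic sketch of those two rules, not the compared papers' code; the function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_sum(p_aerial, p_ground):
    # SoftMax sum rule: average the per-class probabilities of the views.
    return (p_aerial + p_ground) / 2.0

def fuse_product(p_aerial, p_ground):
    # SoftMax product rule: multiply per-class probabilities elementwise,
    # then renormalize so each row sums to 1.
    p = p_aerial * p_ground
    return p / p.sum(axis=-1, keepdims=True)
```

The sum rule is robust to one view being confidently wrong, while the product rule sharpens the decision when both views agree; feature-level (CILM, MSAN) and evidential (EFN, TMC) methods differ in fusing earlier representations or Dirichlet evidence instead of final probabilities.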
“…Deep neural networks (DNNs) have achieved excellent performance in many remote sensing applications, such as object detection [4][5][6][7][8][9], image classification [10][11][12][13][14], and semantic segmentation [15][16][17][18][19][20]. However, DNNs have been shown by Szegedy [21] to be vulnerable to adversarial examples.…”
Section: Introduction
confidence: 99%