2019
DOI: 10.1109/access.2019.2926092
Hierarchical Multimodal Fusion for Ground-Based Cloud Classification in Weather Station Networks

Abstract: Recently, multimodal information has been taken into consideration for ground-based cloud classification in weather station networks, but the intrinsic correlations between the multimodal information and the visual information cannot be mined sufficiently. We propose a novel approach called hierarchical multimodal fusion (HMF) for ground-based cloud classification in weather station networks, which fuses the deep multimodal features and the deep visual features at different levels, i.e., low-level fusion and high-level fusion.

Cited by 22 publications (25 citation statements)
References 27 publications
“…Liu et al. [17] proposed a method that integrates the high-level fusion and the output of the low-level fusion with deep visual features and deep multimodal features. Shi…”
Section: Related Work
confidence: 99%
“…They obtained a classification result of 86.48% over 8,000 ground-based cloud samples. Liu et al. [63] fused deep multimodal and deep visual features in a two-level fashion, i.e., low-level and high-level. The low-level fusion fused the heterogeneous features directly, and its output was regarded as part of the input to the high-level fusion, which also integrates the deep visual and deep multimodal features.…”
Section: Overall Discussion
confidence: 99%
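
The two-level scheme described in this statement can be made concrete with a minimal sketch in PyTorch. This is not the authors' implementation: the layer widths, activations, feature dimensions, and the seven-class output are illustrative assumptions.

```python
# Minimal sketch of a two-level (hierarchical) multimodal fusion head.
# NOT the paper's implementation: layer widths, activations, and the
# concatenation scheme are assumptions for illustration only.
import torch
import torch.nn as nn

class HierarchicalFusion(nn.Module):
    def __init__(self, visual_dim=512, multimodal_dim=64, num_classes=7):
        super().__init__()
        # Low-level fusion: fuse the heterogeneous features directly.
        self.low_level = nn.Sequential(
            nn.Linear(visual_dim + multimodal_dim, 256),
            nn.ReLU(),
        )
        # High-level fusion: takes the low-level output together with the
        # deep visual and deep multimodal features again.
        self.high_level = nn.Sequential(
            nn.Linear(256 + visual_dim + multimodal_dim, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, visual_feat, multimodal_feat):
        low = self.low_level(torch.cat([visual_feat, multimodal_feat], dim=1))
        fused = torch.cat([low, visual_feat, multimodal_feat], dim=1)
        return self.high_level(fused)

# Usage: visual features from a CNN backbone, multimodal features from
# weather-station measurements (e.g., temperature, humidity, pressure).
logits = HierarchicalFusion()(torch.randn(8, 512), torch.randn(8, 64))
```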
“…The comparison results between the proposed MMFN and other methods, such as [32,62,63], are summarized in Table 2. First, most results in the right part of the table are more competitive than those in the left part, which indicates that the multi-modal information contains useful information for ground-based cloud recognition.…”
Section: Comparison With Other Methods
confidence: 99%
“…Recently, plenty of works (Shi et al., 2017; Ye et al., 2017) have obtained encouraging results by extracting the cloud signature from pre-trained CNNs, such as AlexNet (Krizhevsky et al., 2012) and VGGNet (Simonyan and Zisserman, 2015). In addition, attempts have been made to simply exploit end-to-end CNN models for cloud categorization (Li et al., 2020; Liu et al., 2019; Zhang et al., 2018b). However, the insufficiency of labelled samples might make the network hard to converge in the training stage.…”
confidence: 99%
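
The pre-trained-CNN strategy mentioned in this statement can be sketched as follows, assuming torchvision (0.13 or later) and VGG16 as the backbone; the chosen layer and the global average pooling are assumptions, not details taken from the cited works.

```python
# Sketch of extracting a cloud signature from a pre-trained CNN:
# use the convolutional trunk of an ImageNet-pretrained VGG16 as a
# fixed feature extractor. Layer choice and pooling are assumptions.
import torch
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.eval()

with torch.no_grad():
    images = torch.randn(4, 3, 224, 224)   # a batch of cloud images
    feats = vgg.features(images)           # conv activations: (4, 512, 7, 7)
    signature = feats.mean(dim=(2, 3))     # global average pool -> (4, 512)
```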