2016
DOI: 10.1175/jtech-d-15-0015.1
mCLOUD: A Multiview Visual Feature Extraction Mechanism for Ground-Based Cloud Image Categorization

Abstract: In this paper, a novel Multiview CLOUD (mCLOUD) visual feature extraction mechanism is proposed for the task of categorizing clouds based on ground-based images. To completely characterize the different types of clouds, mCLOUD first extracts the raw visual descriptors from the views of texture, structure, and color simultaneously, in a densely sampled way; specifically, the scale-invariant feature transform (SIFT), the census transform histogram (CENTRIST), and the statistical color features are extracted, resp…
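The abstract's densely sampled color view can be sketched as below. This is a minimal illustration only: the function name and the choice of per-patch mean and standard deviation as "statistical color features" are assumptions for demonstration; the paper's exact color statistics may differ.

```python
import numpy as np

def dense_color_stats(img, patch=8):
    """Densely sampled per-patch colour statistics (illustrative sketch).

    Slides a non-overlapping patch grid over an H x W x C image and
    returns, for each patch, the per-channel mean and standard deviation.
    """
    H, W, C = img.shape
    feats = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            p = img[y:y + patch, x:x + patch].reshape(-1, C)
            feats.append(np.concatenate([p.mean(0), p.std(0)]))
    return np.array(feats)  # shape: (n_patches, 2 * C)

img = np.random.default_rng(1).random((32, 32, 3))
print(dense_color_stats(img).shape)  # (16, 6): 4x4 patch grid, mean+std per channel
```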

Cited by 33 publications (19 citation statements)
References 28 publications (65 reference statements)
“…Therefore, our proposed classification approach achieves near-perfect classification accuracy for most categories. Table 3 displays the experimental classification results, giving overall score rates ranging from 87% to 91%, which are superior to the state-of-the-art methods (Xiao et al, 2016;Ye et al, 2017) with average scores of 81% and 87%, respectively. Moreover, the average score is 88%, which indicates the effectiveness and generalization of CloudNet.…”
Section: Experimental Configuration
confidence: 99%
“…(3) LBP [40]: The local binary pattern (LBP) labels each pixel by computing the sign of the difference between the intensity of that pixel and the intensities of its neighboring pixels. In our experiments, we utilize the uniform invariant LBP and set the parameter (P, R) to (8, 1), (16, 2), and (24, 3), respectively. Here, P is the total number of neighbors involved in a circle and R is the radius of the circle.…”
Section: Baselines
confidence: 99%
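The LBP operator described in that citation statement can be sketched in a few lines of numpy. This is the basic (8, 1) variant on the integer pixel grid, hedged simplifications: no circular interpolation of neighbor positions and no uniform/rotation-invariant code mapping, both of which the baseline above does use.

```python
import numpy as np

def lbp_basic(img):
    """Basic (8, 1) LBP: each interior pixel gets an 8-bit code built from
    the signs of the differences between its 8 grid neighbours and itself.
    Sketch only; the paper's baseline uses the uniform, rotation-invariant
    variant with interpolated circular neighbourhoods."""
    img = np.asarray(img, dtype=np.int32)
    c = img[1:-1, 1:-1]  # centre pixels (border rows/cols dropped)
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit  # sign of the difference
    return codes

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]])
print(lbp_basic(img))  # [[120]]: bits set where neighbour >= centre (50)
```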
“…We combine these three components into joint distributions to obtain the completed cloud representation. The parameter (P, R) is also set to (8, 1), (16, 2), and (24, 3), respectively. We concatenate the three scales into one feature vector, resulting in a 2200-dimensional vector.…”
Section: Baselines
confidence: 99%
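The multi-scale concatenation step in that statement can be sketched as follows. The code maps are toy stand-ins for LBP outputs at the three (P, R) scales, and the bin counts are assumptions for illustration; the cited method's joint distributions yield its stated 2200-dimensional vector, which this sketch does not reproduce.

```python
import numpy as np

def multiscale_histogram(code_maps, n_bins):
    """Concatenate per-scale L1-normalised histograms into one feature
    vector. Sketch of the multi-scale fusion step: code_maps would come
    from LBP at (8, 1), (16, 2), (24, 3), n_bins from the LBP variant."""
    feats = []
    for codes, bins in zip(code_maps, n_bins):
        h, _ = np.histogram(codes, bins=bins, range=(0, bins))
        feats.append(h / max(h.sum(), 1))  # normalise each scale separately
    return np.concatenate(feats)

# toy code maps standing in for the three LBP scales
rng = np.random.default_rng(0)
bins = [10, 18, 26]  # e.g. P + 2 bins per scale (illustrative choice)
maps = [rng.integers(0, b, size=(16, 16)) for b in bins]
vec = multiscale_histogram(maps, bins)
print(vec.shape)  # (54,) = 10 + 18 + 26
```

Normalising each scale before concatenation keeps a scale with more bins from dominating the distance computations downstream.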
“…Compared to satellite-based instruments, ground-based remote sensing instruments, for example, the whole-sky imager and the total-sky imager (Long et al, 2006), can capture high-resolution ground-based cloud images, which provides new opportunities for monitoring and understanding regional sky conditions. Benefiting from these ground-based cloud images, many approaches have been proposed to address the ground-based cloud classification task using hand-crafted features such as texture, color, and structure (Kazantzidis et al, 2012; Liu et al, 2011; Xiao et al, 2016; Zhuo et al, 2014). Recently, following the success of deep learning in a variety of research fields (Gao et al, 2018; He et al, 2019; Labati et al, 2019; Milletari et al, 2016; Shi et al, 2016), numerous methods have been proposed to learn robust and discriminative deep features for automatic ground-based cloud classification within the framework of convolutional neural networks (CNNs).…”
Section: Introduction
confidence: 99%