2022
DOI: 10.14569/ijacsa.2022.0131065

Deep Architecture based on DenseNet-121 Model for Weather Image Recognition

Abstract: Weather conditions have a significant effect on humans' daily lives and production, ranging from clothing choices to travel, outdoor sports, and solar energy systems. Recent advances in computer vision based on deep learning methods have shown notable progress in both scene awareness and image processing problems. These results have highlighted network depth as a critical factor, as deeper networks achieve better outcomes. This paper proposes a deep learning model based on DenseNet-121 to effectively recognize…
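The abstract above describes a classifier built on a DenseNet-121 backbone for weather image recognition. Since the paper's exact head, input resolution, class set, and training settings are not reproduced on this page, the Keras-style sketch below is only an illustration of that kind of model; the number of classes, dropout rate, and optimizer are assumptions.

```python
# Minimal sketch of a DenseNet-121-based weather image classifier
# (illustrative only; not the paper's exact architecture or settings).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5          # assumed number of weather categories
IMG_SIZE = (224, 224)    # DenseNet-121's default ImageNet input size

# ImageNet-pretrained DenseNet-121 backbone without its classification head.
base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))

# New classification head on top of the backbone's pooled features.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                               # assumed regularization
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Images are expected to be preprocessed with
# tf.keras.applications.densenet.preprocess_input before training.
```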

Cited by 7 publications (7 citation statements)
References 25 publications
“…This can alleviate overfitting issues and enable training with smaller datasets [24]. In this study, the researcher utilized DenseNet121, a basic variant of the DenseNet architecture consisting of 121 layers, as depicted in Figure 5 [25]. In practice, DenseNet is commonly employed in tasks related to image recognition, such as image classification, object detection, and image segmentation.…”
Section: DenseNet (mentioning)
confidence: 99%
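The quoted passage characterizes DenseNet121 as a 121-layer member of the DenseNet family whose dense connections help mitigate overfitting on smaller datasets. The sketch below illustrates that dense-connectivity idea, in which each layer receives the concatenation of all preceding feature maps; the layer count, growth rate, and input shape are illustrative assumptions rather than values from either paper.

```python
# Minimal sketch of DenseNet-style dense connectivity: every layer takes the
# concatenation of all preceding feature maps as input (illustrative block only).
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=6, growth_rate=32):
    for _ in range(num_layers):
        # BN -> ReLU -> 3x3 conv producing `growth_rate` new feature maps.
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same", use_bias=False)(y)
        # Dense connection: concatenate the new features with all earlier ones.
        x = layers.Concatenate()([x, y])
    return x

inputs = tf.keras.Input(shape=(56, 56, 64))   # assumed feature-map shape
outputs = dense_block(inputs)
tf.keras.Model(inputs, outputs).summary()
```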
“…Currently, several pre-trained DNN models exist that have the potential as baseline models to develop new models utilizing the TL approach, such as MobileNet [28], [37], MobileNetV2 [38], [39], EfficientNetB0, EfficientNetB1, EfficientNetB2 [29], [40], DenseNet121 [41], Xception [42], InceptionV3 [43], ResNet50 [44], and InceptionResNetV2 [45]. Each of these models offers its own set of advantages and limitations.…”
Section: Related Work (mentioning)
confidence: 99%
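The statement above lists ImageNet-pretrained backbones commonly used as transfer-learning baselines. As a hedged illustration, the sketch below instantiates several of the named backbones from tf.keras.applications as frozen feature extractors; the input size, the chosen subset, and the comparison loop are assumptions, not a procedure from the cited works.

```python
# Instantiating several of the pre-trained backbones named above as frozen
# feature extractors for transfer-learning baselines (illustrative only).
import tensorflow as tf

BACKBONES = {
    "MobileNetV2":       tf.keras.applications.MobileNetV2,
    "EfficientNetB0":    tf.keras.applications.EfficientNetB0,
    "DenseNet121":       tf.keras.applications.DenseNet121,
    "Xception":          tf.keras.applications.Xception,
    "InceptionV3":       tf.keras.applications.InceptionV3,
    "ResNet50":          tf.keras.applications.ResNet50,
    "InceptionResNetV2": tf.keras.applications.InceptionResNetV2,
}

for name, ctor in BACKBONES.items():
    base = ctor(include_top=False, weights="imagenet",
                input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False   # frozen feature extractor for the TL baseline
    print(f"{name}: {base.count_params():,} parameters, "
          f"{base.output_shape[-1]}-dim features")
```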
“…Additionally, TL improves generalisation, cuts down on overfitting and training time, and requires less labelled data [Figure 3: Architecture of Baseline EfficientNet-B0 [15]]. Recent years have seen extensive use of TL in computer vision [8], [16], [17].…”
Section: E. Transfer Learning (mentioning)
confidence: 99%
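The final statement summarizes the usual benefits of transfer learning (TL): better generalisation, reduced overfitting, shorter training, and a smaller labelled-data requirement. The sketch below shows a common two-stage TL recipe, training a new head on a frozen ImageNet backbone and then fine-tuning its top layers at a low learning rate. The EfficientNet-B0 backbone (echoing the Figure 3 caption), the unfreezing depth, the learning rates, and the train_ds/val_ds datasets are hypothetical choices for illustration.

```python
# Hedged sketch of a two-stage transfer-learning recipe: train a new head on a
# frozen ImageNet backbone, then fine-tune the top of the backbone gently.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 5  # assumed

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = False                      # stage 1: frozen feature extractor

inputs = tf.keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)            # keep BatchNorm statistics frozen
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # hypothetical datasets

# Stage 2: unfreeze only the top layers and continue at a low learning rate.
base.trainable = True
for layer in base.layers[:-20]:
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)
```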