2022
DOI: 10.3390/f13122072
Transfer Learning for Leaf Small Dataset Using Improved ResNet50 Network with Mixed Activation Functions

Abstract: Taxonomic study of leaves is one of the most effective means of correctly identifying plant species. In this paper, a mixed activation function is used to improve the ResNet50 network in order to further improve the accuracy of leaf recognition. First, leaf images of 15 common tree species in northern China were collected from the Urban Forestry Demonstration Base of Northeast Forestry University (45°43′–45°44′ N, 126°37′–126°38′ E; forest type: artificial forest), and a small leaf dataset was established…
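As a rough illustration of the approach described in the abstract, the sketch below fine-tunes a pretrained ResNet50 after swapping the ReLU activations in its deeper residual stages for LeakyReLU, giving a mixed-activation network with a 15-class head. The choice of LeakyReLU, the stages modified, and the training hyperparameters are assumptions for illustration only, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def mix_activations(module: nn.Module) -> None:
    """Recursively replace ReLU activations with LeakyReLU in `module`."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(0.1, inplace=True))
        else:
            mix_activations(child)

# Pretrained backbone; "DEFAULT" resolves to the current ImageNet weights.
model = models.resnet50(weights="DEFAULT")

# Mixed activations: keep ReLU in the early stages, use LeakyReLU in the
# deeper residual stages (an assumed split, purely for illustration).
mix_activations(model.layer3)
mix_activations(model.layer4)

# Transfer learning for the 15-class leaf dataset: freeze early stages and
# train only the modified stages plus a re-initialised classifier head.
for p in model.parameters():
    p.requires_grad = False
for stage in (model.layer3, model.layer4):
    for p in stage.parameters():
        p.requires_grad = True
model.fc = nn.Linear(model.fc.in_features, 15)

optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()
```

Freezing the earlier stages is the usual way to keep a small leaf dataset from overfitting the full backbone, which is the typical motivation for transfer learning in this setting.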

Cited by 9 publications (8 citation statements)
References 41 publications
“…The pre-processed dataset will be a source for input into each CNN algorithm with various architectures, shown in Figure 4. There is the CNN algorithm with simple architectures [18], the CNN algorithm with a residual network-50 (ResNet50) architecture [19], [20], the CNN algorithm with a visual geometry group (VGG) 16 architecture [21], the CNN algorithm with convolutional neural networks for mobile vision applications (MobileNet) architecture [22] and CNN algorithm with an inception architecture [23]. Comparison graphically, the visualization CNN model architectures are simple architectures shown in Figure 4…”
Section: Modeling
confidence: 99%
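For readers who want to reproduce this kind of architecture comparison, the sketch below instantiates the mentioned backbones via torchvision with a common classification head. torchvision provides MobileNetV2/V3 rather than the original MobileNet, so V2 stands in here, and the "simple architecture" baseline is a hypothetical small CNN; the class count and any training details are assumptions.

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # hypothetical; set to the pre-processed dataset's class count

def simple_cnn(num_classes: int) -> nn.Module:
    """A small baseline network standing in for the 'simple architecture'."""
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
    )

def with_new_head(model: nn.Module, num_classes: int) -> nn.Module:
    """Replace each backbone's ImageNet classifier with a task-specific head."""
    if hasattr(model, "fc"):                      # ResNet50, InceptionV3
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    else:                                         # VGG16, MobileNetV2
        in_features = model.classifier[-1].in_features
        model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model

candidates = {
    "simple":       simple_cnn(NUM_CLASSES),
    "resnet50":     with_new_head(models.resnet50(weights="DEFAULT"), NUM_CLASSES),
    "vgg16":        with_new_head(models.vgg16(weights="DEFAULT"), NUM_CLASSES),
    "mobilenet_v2": with_new_head(models.mobilenet_v2(weights="DEFAULT"), NUM_CLASSES),
    # InceptionV3 expects 299x299 inputs and keeps its auxiliary head here.
    "inception_v3": with_new_head(models.inception_v3(weights="DEFAULT"), NUM_CLASSES),
}
```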
“…IV-A). As our first contribution is to provide a comparison between model complexity and its effect on robustness against attacks, we use five well-known pre-trained networks, such as LeNet5 [65], MobileNetV1 [66], VGG16 [67], ResNet50 [68], and InceptionV3 [69]. The first represents the model with the lowest computational cost compared to the others.…”
Section: Proposed Framework-Based Secure CNN
confidence: 99%
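The complexity ordering mentioned in this excerpt can be checked by counting trainable parameters. The sketch below uses a conventional LeNet-5 definition (it is not bundled with torchvision) alongside unweighted torchvision models, with MobileNetV2 again standing in for MobileNetV1; it is only a ballpark comparison, not the cited study's measurement.

```python
import torch.nn as nn
from torchvision import models

class LeNet5(nn.Module):
    """Classic LeNet-5 layout for 32x32 grayscale inputs."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, 5), nn.Tanh(), nn.AvgPool2d(2),
            nn.Conv2d(6, 16, 5), nn.Tanh(), nn.AvgPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 5 * 5, 120), nn.Tanh(),
            nn.Linear(120, 84), nn.Tanh(),
            nn.Linear(84, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def n_params(m: nn.Module) -> int:
    return sum(p.numel() for p in m.parameters())

nets = {
    "LeNet5": LeNet5(),
    "MobileNetV2": models.mobilenet_v2(),
    "VGG16": models.vgg16(),
    "ResNet50": models.resnet50(),
    "InceptionV3": models.inception_v3(init_weights=False),
}
# Print the networks from lowest to highest parameter count.
for name, net in sorted(nets.items(), key=lambda kv: n_params(kv[1])):
    print(f"{name:12s} {n_params(net) / 1e6:6.1f} M parameters")
```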
“…Figure 3 shows the architectural design of the Deeplabv3+ model, a building segmentation approach, with ResNet50 as the primary backbone, based on prior research for feature extraction [39][40][41].…”
Section: Improved Deeplabv3+ Module in Dual-Channel Generator
confidence: 99%
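As a rough stand-in for the design quoted above, torchvision's DeepLabv3 head over a ResNet50 backbone can be configured for building segmentation. Note that torchvision ships DeepLabv3 rather than the DeepLabv3+ decoder used in the cited work, and the two-class (background/building) setting is an assumption.

```python
import torch
from torchvision.models import ResNet50_Weights
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # background + building (assumed)

# DeepLabv3 head with an ImageNet-pretrained ResNet50 backbone; the
# segmentation head itself is trained from scratch for the new classes.
model = deeplabv3_resnet50(
    weights=None,
    weights_backbone=ResNet50_Weights.IMAGENET1K_V1,
    num_classes=NUM_CLASSES,
)
model.eval()

# Forward pass on a dummy image batch: the model returns a dict whose "out"
# entry holds per-pixel class logits at the input resolution.
x = torch.randn(1, 3, 512, 512)
with torch.no_grad():
    logits = model(x)["out"]      # shape: (1, NUM_CLASSES, 512, 512)
building_mask = logits.argmax(dim=1)
```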