2020
DOI: 10.1049/iet-ipr.2020.0726
Residual‐wider convolutional neural network for image recognition

Abstract: Recent works show that the performance of convolutional neural networks (CNNs) can be improved by making the network wider and introducing residual connections; the Inception architecture is a classical example. However, the structure of the Inception series is complex, with many convolution layers and redundant image feature information. In order to further improve the performance and observability of CNNs, a novel type of network structure based on two modules, wid…
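The two ideas the abstract names, residual connections and wider (multi-branch) modules, can be sketched in a few lines. This is a minimal NumPy illustration under simplifying assumptions (dense layers instead of convolutions, an identity shortcut, and hypothetical function names `residual_block` and `wider_block`), not the paper's actual architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Residual connection: output is F(x) + x, where F is a small
    # two-layer transform and the shortcut is the identity.
    h = relu(w1 @ x)
    return relu(w2 @ h + x)

def wider_block(x, branch_weights):
    # "Wider" (Inception-style) module: several parallel branches are
    # applied to the same input and their outputs are concatenated.
    return np.concatenate([relu(w @ x) for w in branch_weights])
```

With zero weights the residual block reduces to the identity on non-negative inputs, which is the usual intuition for why residual connections ease optimization: the block only has to learn a perturbation of the identity.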

Cited by 7 publications (1 citation statement)
References 16 publications (19 reference statements)
“…Reference [5] proposed a network architecture based on two modules: wider modules and residual modules, which fully exploited image information to learn richer features and achieved better recognition results on four classical open datasets. Reference [6] introduced a multi-branch cross-connection CNN, which effectively fused features from various branches, improving recognition performance.…”
Citation type: mentioning (confidence: 99%)