2022
DOI: 10.3390/app12188972
Deep Residual Learning for Image Recognition: A Survey

Abstract: Deep Residual Networks have recently been shown to significantly improve the performance of neural networks trained on ImageNet, with results beating all previous methods on this dataset by large margins in the image classification task. However, the meaning of these impressive numbers and their implications for future research are not fully understood yet. In this survey, we will try to explain what Deep Residual Networks are, how they achieve their excellent results, and why their successful implementation i…
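The central mechanism the surveyed work covers, an identity shortcut that lets a stack of layers learn a residual mapping and add it back to its input, can be sketched compactly. The block below is a generic basic residual block in PyTorch; the layer sizes, batch-norm placement, and projection shortcut are standard ResNet-family choices used for illustration, not details taken from this survey.

```python
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Minimal residual block: output = ReLU(F(x) + shortcut(x))."""
    def __init__(self, in_channels, out_channels, stride=1):
        super().__init__()
        # Two 3x3 convolutions form the residual mapping F(x).
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        self.relu = nn.ReLU(inplace=True)
        # Projection shortcut when the shape changes, identity otherwise.
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_channels),
            )
        else:
            self.shortcut = nn.Identity()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity (or projected) input is added back before the final ReLU.
        return self.relu(out + self.shortcut(x))
```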


Cited by 289 publications (145 citation statements). References 84 publications.
“…Figure 17 is a demonstration of live testing. Because the object is not moving in this type of static object detection module testing, the system successfully detected the objects and recognized the detected object classes (Shafiq et al., 2020; Shafiq and Gu, 2022; Wahab et al., 2022).…”
Section: Experimental Results and Evaluation
Citation type: mentioning (confidence: 99%)
“…Each Transformer block includes two sub-blocks: multi-head self-attention mechanism and position-wise feed-forward networks. In addition, each sub-block also includes layer normalization modules [39] and residual connectors [40]. Transformer blocks can be used in a different field.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
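The quoted description, a multi-head self-attention sub-block and a position-wise feed-forward sub-block, each wrapped with layer normalization and a residual connector, can be illustrated with a minimal PyTorch sketch. The post-norm ordering, dimensions, and dropout rate below are illustrative assumptions, not details from the cited work.

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """One Transformer block: self-attention and feed-forward sub-blocks,
    each with a residual connection and layer normalization (post-norm)."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        # Sub-block 1: multi-head self-attention + residual connection + layer norm.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = self.norm1(x + self.drop(attn_out))
        # Sub-block 2: position-wise feed-forward + residual connection + layer norm.
        x = self.norm2(x + self.drop(self.ffn(x)))
        return x
```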
“…We also applied GLU to the Attention sub-block, as shown in Figure 4. Here, we used the residual connection [40] to fuse the context information through the self-attention module, which can improve the stability of the model. The fused information is passed through GLU to improve the performance of the Attention sub-block.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
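A rough sketch of the arrangement the citing authors describe: the self-attention output is fused with the input through a residual connection, and the fused representation is then passed through a gated linear unit (GLU). Their Figure 4 is not reproduced here, so the projection width, norm placement, and class name below are illustrative assumptions rather than their exact design.

```python
import torch.nn as nn

class GLUAttentionSubBlock(nn.Module):
    """Attention sub-block sketch: residual fusion of self-attention
    context, followed by a GLU over the fused representation."""
    def __init__(self, d_model=512, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)
        # Project to 2*d_model so GLU (which halves the feature dim) maps back to d_model.
        self.proj = nn.Linear(d_model, 2 * d_model)
        self.glu = nn.GLU(dim=-1)

    def forward(self, x):
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        # Residual connection fuses the attention context with the input.
        fused = self.norm(x + attn_out)
        # The fused information is passed through the GLU gate.
        return self.glu(self.proj(fused))
```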
“…The training process was set up to facilitate comparison between different models after undergoing end-to-end finetuning. Only ResNet50 was used for the backbones, as is standard in self-supervised model evaluation and as was used in both the NNCLR and SimCLR original papers [40, 41, 46]. Two backbones for the end-to-end process were chosen from a pretraining sweep with the mentioned self-supervised contrastive architectures, and one backbone was initialized with ImageNet weights.…”
Section: Methods
Citation type: mentioning (confidence: 99%)
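The described setup, a ResNet50 backbone initialized either from ImageNet weights or from a self-supervised contrastive pretraining run (e.g. SimCLR/NNCLR) and then fine-tuned end to end, might look roughly like the torchvision-based sketch below. The helper name and checkpoint format are assumptions; only the choice of ResNet50 and the two initialization routes come from the quoted text.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

def build_finetune_model(num_classes, ssl_checkpoint=None):
    """Build a ResNet50 backbone for end-to-end fine-tuning.

    If ssl_checkpoint is given, backbone weights come from a self-supervised
    contrastive pretraining run; otherwise torchvision ImageNet weights are
    used. The checkpoint format (a plain backbone state_dict) is an assumption.
    """
    if ssl_checkpoint is None:
        backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)
    else:
        backbone = resnet50(weights=None)
        state = torch.load(ssl_checkpoint, map_location="cpu")
        backbone.load_state_dict(state, strict=False)  # projection-head keys may be absent
    # Replace the classifier head; all parameters remain trainable for end-to-end fine-tuning.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```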