2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.00065
Online Convolutional Reparameterization

Cited by 33 publications (15 citation statements)
References 12 publications
“…This technique enables efficient training and deployment of deep learning models in scenarios with limited computational resources: equivalent parameter transformations simplify the network structure and thereby reduce the model's storage and computational cost. As shown in Figure 5, the earlier RepVGG model uses a simple architecture of stacked 3×3 Conv and ReLU layers and decouples the training-time structure from the inference-time structure: it trains with a multi-branch structure, then applies re-parameterization to equivalently transform the multi-branch architecture into a VGG-like single-path architecture of stacked 3×3 convolutional layers once training is complete. This structural re-parameterization lets RepVGG exceed 80% accuracy on ImageNet while running several times faster (Transactions of the Chinese Society of Agricultural Engineering et al., 2021; Hu et al., 2022).…”
Section: Tests and Methods
mentioning, confidence: 99%
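The excerpt above describes folding RepVGG's training-time branches into one 3×3 kernel. The sketch below illustrates that merge for the simplest case (a 3×3 branch, a 1×1 branch, and an identity branch, without BatchNorm); it is a minimal illustration, not the authors' code, and all names and shapes are made up.

```python
import torch
import torch.nn.functional as F

channels = 4
x = torch.randn(1, channels, 8, 8)

w3 = torch.randn(channels, channels, 3, 3)   # 3x3 branch kernel
w1 = torch.randn(channels, channels, 1, 1)   # 1x1 branch kernel

# Training-time multi-branch output: 3x3 conv + 1x1 conv + identity.
y_branches = (F.conv2d(x, w3, padding=1)
              + F.conv2d(x, w1)
              + x)

# Inference-time merge: fold every branch into one 3x3 kernel.
w_merged = w3.clone()
w_merged += F.pad(w1, [1, 1, 1, 1])          # 1x1 kernel sits at the 3x3 center
for c in range(channels):                    # identity branch as a 3x3 kernel
    w_merged[c, c, 1, 1] += 1.0

y_merged = F.conv2d(x, w_merged, padding=1)
assert torch.allclose(y_branches, y_merged, atol=1e-4)
```

The equivalence holds because convolution is linear in its kernel: branches whose kernels can be zero-padded to the same shape may be summed into a single kernel before the convolution is applied.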
“…Model Re-Parameterization. Model re-parameterization is a technique for improving the efficiency and performance of networks by merging multiple computational modules into a single, fully equivalent module at the inference stage [55]; in the accompanying formulation, ⊙ denotes the Hadamard product, ⊗ denotes the matrix product, and ⊕ denotes matrix addition.…”
Section: Training Optimization
mentioning, confidence: 99%
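A standard instance of the inference-time merging the excerpt names is folding a BatchNorm layer into the preceding convolution. The sketch below shows that fold in PyTorch; it is a generic illustration of the usual conv-BN equivalence, not code from the cited papers.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(4, 8, 3, padding=1, bias=False)
bn = nn.BatchNorm2d(8)

# Give BN non-trivial statistics so the equivalence test is meaningful.
bn.running_mean.uniform_(-1.0, 1.0)
bn.running_var.uniform_(0.5, 2.0)
bn.weight.data.uniform_(0.5, 1.5)
bn.bias.data.uniform_(-1.0, 1.0)
bn.eval()

# Fold: w' = w * gamma / sqrt(var + eps),  b' = beta - mean * gamma / sqrt(var + eps)
fused = nn.Conv2d(4, 8, 3, padding=1, bias=True)
with torch.no_grad():
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.copy_(conv.weight * scale.view(-1, 1, 1, 1))
    fused.bias.copy_(bn.bias - bn.running_mean * scale)

x = torch.randn(1, 4, 8, 8)
assert torch.allclose(bn(conv(x)), fused(x), atol=1e-5)
```

Because BN at inference is an affine transform per channel, it composes with the linear convolution into one convolution with adjusted weights and bias, so the two-module pair and the fused module are exactly equivalent.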
“…Model Re-Parameterization. Model re-parameterization is a technique for improving the efficiency and performance of networks by merging multiple computational modules into a single, fully equivalent module at the inference stage [55]. Dynamic Label Assignment. If we follow the common hand-crafted label-assignment strategy of assigning the optimal anchor to each ground truth, multiple ground truths may end up corresponding to the same anchor, which has a significantly negative impact on network training.…”
Section: Training Optimization
mentioning, confidence: 99%
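The conflict the excerpt describes is easy to reproduce. The toy sketch below (hypothetical coordinates, with nearest-center matching standing in for a hand-crafted rule) shows two ground truths claiming the same anchor, so a static assignment leaves one of them without a dedicated positive sample.

```python
import torch

anchors = torch.tensor([[10., 10.], [50., 50.], [90., 90.]])  # anchor centers
gts = torch.tensor([[48., 52.], [55., 47.]])                  # two ground-truth centers

# Hand-crafted rule: each ground truth takes its nearest anchor.
dists = torch.cdist(gts, anchors)
assigned = dists.argmin(dim=1)
print(assigned)  # tensor([1, 1]) -- both ground truths claim anchor 1
```

A dynamic strategy instead re-scores anchor-ground-truth pairs during training (e.g., by prediction quality), so such collisions can be resolved iteration by iteration rather than fixed in advance.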
“…To solve the above problems, we propose a lightweight ship detection method based on Swin-YOLOFormer. YOLOv7 14 is the latest detection model in the YOLO series; it introduces a re-parameterized 15 module into the network architecture to reduce the model's computation, and it adopts a label-allocation strategy and ELAN. 16 Finally, YOLOv7 proposes an auxiliary-head training method that raises the training cost to improve accuracy without affecting inference time, because the auxiliary head is present only during training.…”
Section: Introduction
mentioning, confidence: 99%