2021
DOI: 10.3390/s21093185
Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches

Abstract: Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN highly depends on both the raw sensor data and their associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as we wish. This data-labeling bottleneck may be intensified due to domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervise…

Cited by 4 publications (3 citation statements)
References 29 publications (67 reference statements)
“…It uses unlabeled samples to improve prediction accuracy. In the cotraining process, random sampling is used to gradually select unlabeled samples to train classifiers [ 39 ]. An algorithm flowchart of cotraining is shown in Figure 2 .…”
Section: Methods
Mentioning confidence: 99%
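The co-training loop quoted above — random sampling of unlabeled data, pseudo-labeling by each view's classifier, then retraining — can be sketched with toy classifiers. Everything below (the `MeanThresholdClassifier`, the `co_train` signature, pool size, round count) is an illustrative assumption for exposition, not the cited paper's implementation:

```python
import random

class MeanThresholdClassifier:
    """Toy 1-D classifier: thresholds at the midpoint between class means.
    Purely illustrative; stands in for a real detector or classifier."""
    def fit(self, xs, ys):
        pos = [x for x, y in zip(xs, ys) if y == 1]
        neg = [x for x, y in zip(xs, ys) if y == 0]
        self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        return self

    def predict(self, x):
        return 1 if x > self.threshold else 0

    def confidence(self, x):
        # distance from the decision boundary stands in for confidence
        return abs(x - self.threshold)

def co_train(lab_a, lab_b, labels, unl_a, unl_b, rounds=3, pool=2, seed=0):
    """Minimal two-view co-training: each round, draw a random pool of
    unlabeled samples; each view's classifier pseudo-labels the sample it
    is most confident about; both classifiers retrain on the grown set."""
    xs_a, xs_b, ys = list(lab_a), list(lab_b), list(labels)
    remaining = list(range(len(unl_a)))
    rng = random.Random(seed)
    for _ in range(rounds):
        clf_a = MeanThresholdClassifier().fit(xs_a, ys)
        clf_b = MeanThresholdClassifier().fit(xs_b, ys)
        if not remaining:
            break
        # random sampling gradually selects unlabeled samples
        sample = rng.sample(remaining, min(pool, len(remaining)))
        for clf, view in ((clf_a, unl_a), (clf_b, unl_b)):
            if not sample:
                break
            best = max(sample, key=lambda i: clf.confidence(view[i]))
            ys.append(clf.predict(view[best]))  # pseudo-label
            xs_a.append(unl_a[best])            # grow both views together
            xs_b.append(unl_b[best])
            sample.remove(best)
            remaining.remove(best)
    # final classifiers trained on labeled + pseudo-labeled data
    return (MeanThresholdClassifier().fit(xs_a, ys),
            MeanThresholdClassifier().fit(xs_b, ys))
```

Here the two "views" are kept as paired feature lists, so a sample pseudo-labeled via one view also enters the other view's training set — the key mechanism by which each classifier teaches the other.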
“…Co-learning can be parallel, non-parallel or hybrid depending on the training resources. In [39], co-training with RGB-D and single-modal inputs are compared. The results show that the multi-modal approach is more effective at producing pseudo-labeled object bounding boxes.…”
Section: B. Multi-Modal Learning
Mentioning confidence: 99%
“…In previous works, we successfully applied a co-training pattern under the synth-to-real UDA setting for deep object detection [ 31 , 32 ]. This encourages us to address the challenging problem of semantic segmentation under the same setting by proposing a new co-training procedure, which is summarized in Figure 1 .…”
Section: Introduction
Mentioning confidence: 99%