Robust fusion for RGB-D tracking using CNN features (2020)
DOI: 10.1016/j.asoc.2020.106302

Cited by 22 publications (12 citation statements)
References 39 publications
“…The hidden layers usually consist of convolutional layers, ReLU layers, pooling layers, and fully connected layers [35], [36]. CNNs represent a huge breakthrough in automatic image classification systems, as they remove the need for the image pre-processing required by traditional machine learning algorithms [37], [38], [39].…”
Section: Introduction
confidence: 99%
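The layer stack described in the statement above (convolution, ReLU, pooling, fully connected) can be sketched as a minimal forward pass. This is an illustrative NumPy toy, not the architecture used in the cited papers; the shapes, random weights, and function names are assumptions for demonstration.

```python
import numpy as np

def conv2d(x, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Element-wise rectified linear unit."""
    return np.maximum(x, 0.0)

def max_pool2d(x, size=2):
    """Non-overlapping max pooling; assumes dimensions divide evenly."""
    h, w = x.shape
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))       # toy single-channel input (hypothetical size)
kernel = rng.standard_normal((3, 3))      # one filter (random here, learned in practice)
fc_weights = rng.standard_normal((9, 2))  # fully connected layer: 9 features -> 2 classes

features = max_pool2d(relu(conv2d(image, kernel)))  # (3, 3) pooled feature map
logits = features.flatten() @ fc_weights            # class scores
print(logits.shape)  # (2,)
```

The point of the sketch is that raw pixels go in directly: the convolution learns its own feature extraction, which is the "no pre-processing" advantage the quoted statement attributes to CNNs.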
“…This has drawn the attention of scholars focusing on how to introduce depth information into a target tracking algorithm and improve its performance. For example, Wang Y. et al. (2020) proposed a robust fusion-based RGB-D tracking method that integrates depth data into a visual object tracker to achieve the robust tracking of a target. In addition, Xiao et al. (2019) proposed a new tracking method that uses a kernel support vector machine (SVM) online learning classifier to detect and track a specific target in a single RGB-D sensor.…”
Section: Introduction
confidence: 99%
“…It should be pointed out that most of the proposals use Red Green Blue Depth (RGB-D) cameras to detect people in the environment. For instance, the authors in [2] propose a solution based on an RGB-D camera combining RGB and depth data to gather input data for a segmentation CNN. Other researchers combine data from several sensors.…”
Section: Introduction
confidence: 99%
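The statement above mentions combining RGB and depth data as input to a segmentation CNN. One common way to do this is early fusion: stacking the depth map as a fourth input channel. The following is a minimal sketch of that pre-processing step under assumed shapes and units; the function name and normalisation scheme are hypothetical, not taken from the cited work.

```python
import numpy as np

def fuse_rgbd(rgb, depth):
    """Early fusion: rgb (H, W, 3) uint8 plus depth (H, W) -> (H, W, 4) float32.

    Both modalities are scaled to [0, 1] so the network sees comparable ranges.
    """
    rgb_n = rgb.astype(np.float32) / 255.0
    d = depth.astype(np.float32)
    d_n = (d - d.min()) / (d.max() - d.min() + 1e-6)  # min-max normalise depth
    return np.concatenate([rgb_n, d_n[..., None]], axis=-1)

# Toy inputs: a black 4x4 RGB frame and a ramp depth map (e.g. millimetres).
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.arange(16, dtype=np.uint16).reshape(4, 4)
fused = fuse_rgbd(rgb, depth)
print(fused.shape)  # (4, 4, 4)
```

The fused tensor can then be fed to a segmentation network whose first convolution accepts four input channels; late-fusion designs, which process each modality in a separate branch, are the main alternative.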