RGB-T object tracking: Benchmark and baseline
2019 | DOI: 10.1016/j.patcog.2019.106977

Cited by 333 publications (224 citation statements)
References 57 publications
“…For the RGB-T target-driven global attention network, we use GTOT-50 [19] as the training dataset and track on the RGBT-234 dataset [3], and we also use the RGBT-234 dataset for training and test on the GTOT-50 dataset. Following [4], the ground-truth binary mask used for training is generated from the corresponding training dataset.…”
Section: Training (mentioning)
confidence: 99%
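The excerpt above describes a cross-dataset protocol (train on GTOT-50, test on RGBT-234, and vice versa) with a ground-truth binary mask derived from the training data. The sketch below only illustrates that idea; the dataset pairing list and the bbox_to_binary_mask helper are hypothetical stand-ins, not code from the cited paper.

```python
# Hypothetical sketch of the cross-dataset protocol quoted above:
# train on one benchmark, evaluate on the other, then swap the roles.
import numpy as np

CROSS_DATASET_SPLITS = [
    {"train": "GTOT-50", "test": "RGBT-234"},
    {"train": "RGBT-234", "test": "GTOT-50"},
]

def bbox_to_binary_mask(bbox, image_size):
    """Rasterize a ground-truth box (x, y, w, h) into a binary target mask.

    Mirrors the idea of deriving the training mask from the ground truth
    of whichever dataset is used for training.
    """
    height, width = image_size
    x, y, w, h = [int(round(v)) for v in bbox]
    mask = np.zeros((height, width), dtype=np.uint8)
    mask[max(y, 0):min(y + h, height), max(x, 0):min(x + w, width)] = 1
    return mask

if __name__ == "__main__":
    for split in CROSS_DATASET_SPLITS:
        print(f"train on {split['train']}, evaluate on {split['test']}")
    print(bbox_to_binary_mask((2, 1, 3, 2), (5, 6)))
```

Swapping the roles of the two datasets keeps training and evaluation data disjoint, which is the point of the protocol described in the excerpt.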
“…Recently, some researchers have resorted to other data to help improve the robustness of visual trackers, such as thermal images [3], natural language descriptions [4] and depth images [5]. Compared to text and depth images, a thermal sensor is not sensitive to lighting conditions and can capture the target object at a far distance, and it still works well at night while RGB, depth or text may fail.…”
Section: Introduction (mentioning)
confidence: 99%
“…We initialize the parameters of our GA using the pre-trained VGG-M model [30], and then fine-tune it using an RGBT dataset. Note that when we conduct testing on the GTOT dataset [17], we fine-tune GA using the RGBT234 dataset [18], and vice versa. We use the stochastic gradient descent (SGD) algorithm [14] to train GA, and set the learning parameters as follows.…”
Section: Progressive Learning Algorithm (mentioning)
confidence: 99%
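This excerpt outlines a common fine-tuning recipe: initialize from pre-trained VGG-M weights and train with SGD, swapping the fine-tuning and test datasets between GTOT and RGBT234. A minimal PyTorch sketch of that pattern follows; the three-layer backbone, checkpoint path, and learning-rate, momentum and weight-decay values are placeholders, since the excerpt does not specify them and this is not the authors' GA implementation.

```python
# Placeholder backbone and SGD fine-tuning step, sketching the quoted recipe.
import torch
import torch.nn as nn

backbone = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=7, stride=2), nn.ReLU(inplace=True),
    nn.Conv2d(96, 256, kernel_size=5, stride=2), nn.ReLU(inplace=True),
    nn.Conv2d(256, 512, kernel_size=3, stride=1), nn.ReLU(inplace=True),
)

# In the quoted setup the convolutional layers would be initialized from a
# pre-trained VGG-M checkpoint; the path below is a placeholder.
pretrained_path = "vgg_m_pretrained.pth"
try:
    backbone.load_state_dict(torch.load(pretrained_path), strict=False)
except FileNotFoundError:
    pass  # no checkpoint available: keep the random initialization

# SGD with momentum and weight decay; the exact learning parameters are
# not given in the excerpt, so these values are illustrative.
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-4,
                            momentum=0.9, weight_decay=5e-4)

# One illustrative update step on dummy data.
criterion = nn.MSELoss()
frames = torch.randn(2, 3, 107, 107)   # dummy input batch
features = backbone(frames)
loss = criterion(features, torch.randn_like(features))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```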
“…In this section, we will compare our MANet with state-of-the-art RGB and RGBT tracking methods on two RGBT tracking benchmark datasets, GTOT [17] and RGBT234 [18], and then evaluate the main components of MANet in detail for a better understanding of our approach.…”
Section: Performance Evaluation (mentioning)
confidence: 99%
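Results on GTOT and RGBT234 are typically reported as a precision rate (center-location error under a pixel threshold) and a success rate (bounding-box overlap above an IoU threshold). The sketch below computes both for paired predicted and ground-truth boxes; the thresholds and toy boxes are illustrative only, and the exact evaluation protocol of the cited work may differ.

```python
# Illustrative precision/success computation for (x, y, w, h) boxes.
import numpy as np

def center_error(box_a, box_b):
    """Euclidean distance between the centers of two boxes."""
    ca = np.array([box_a[0] + box_a[2] / 2, box_a[1] + box_a[3] / 2])
    cb = np.array([box_b[0] + box_b[2] / 2, box_b[1] + box_b[3] / 2])
    return float(np.linalg.norm(ca - cb))

def iou(box_a, box_b):
    """Intersection-over-union of two boxes."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb = min(box_a[0] + box_a[2], box_b[0] + box_b[2])
    yb = min(box_a[1] + box_a[3], box_b[1] + box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0

def precision_and_success(pred_boxes, gt_boxes, dist_thr=20.0, iou_thr=0.5):
    """Fraction of frames within the distance and overlap thresholds."""
    errors = [center_error(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    overlaps = [iou(p, g) for p, g in zip(pred_boxes, gt_boxes)]
    precision = float(np.mean([e <= dist_thr for e in errors]))
    success = float(np.mean([o > iou_thr for o in overlaps]))
    return precision, success

print(precision_and_success([(10, 10, 40, 40)], [(12, 14, 40, 38)]))
```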
“…It was found that thermal infrared sensors provide a more stable signal for these scenarios. Therefore, RGB-T tracking has drawn more research attention recently [31,34,32,35].…”
Section: Introduction (mentioning)
confidence: 99%