2020
DOI: 10.1109/jstars.2020.3002885

Multitask Multisource Deep Correlation Filter for Remote Sensing Data Fusion

Abstract: With the amount of remote sensing data increasing at an extremely fast pace, machine-learning-based techniques have been shown to perform superiorly in many applications. However, most existing methods for real-time applications are based on single-modality image data. Although a few approaches use different source images to represent the object via a fusion scheme, this may not be appropriate for multimodality information processing. In addition, these methods hardly benefit from end-to-end network training …
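The abstract's premise, fusing correlation-filter responses from several sources instead of tracking on one modality, can be illustrated with a minimal sketch. This is not the paper's method: it assumes a classical MOSSE-style filter per modality and fixed fusion weights, whereas the paper trains its components end to end; the function names, weights, and toy data below are hypothetical.

```python
import numpy as np

def train_filter(patch, gaussian_target, lam=1e-2):
    """Learn a MOSSE-style correlation filter in the Fourier domain.

    patch: 2-D feature patch from one modality (e.g., optical or infrared).
    gaussian_target: desired response map peaked at the object centre.
    """
    F = np.fft.fft2(patch)
    G = np.fft.fft2(gaussian_target)
    # Closed-form ridge-regression solution: H* = (G . conj(F)) / (F . conj(F) + lam)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def response(filter_hat, patch):
    """Correlate a new patch with the learned filter."""
    Z = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(filter_hat * Z))

def fused_response(filters, patches, weights):
    """Fuse per-modality response maps by a convex combination.

    Fixed weights stand in for whatever (possibly learned) fusion
    an end-to-end network would apply.
    """
    maps = [w * response(h, p) for h, p, w in zip(filters, patches, weights)]
    return np.sum(maps, axis=0)

# Toy usage: two modalities of the same 64x64 scene.
rng = np.random.default_rng(0)
optical, infrared = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
target = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 2.0 ** 2))
h_opt = train_filter(optical, target)
h_ir = train_filter(infrared, target)
resp = fused_response([h_opt, h_ir], [optical, infrared], weights=[0.6, 0.4])
print(np.unravel_index(resp.argmax(), resp.shape))  # predicted object position
```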

Cited by 7 publications (6 citation statements)
References 61 publications
“…[169] has 2 articles in the medical domain. In total, 60 articles were found in the domain-agnostic domain, where 16 were in classification [170]-[185], 9 in detection [186]-[194], 11 in analysis [195]-[205], 8 in recognition [206]-[213], 9 in prediction [214]-[222], 1 in language processing [223], 2 in image processing [224], [225], 1 in image retrieval [47], 2 in integration [226], [227], and 1 in segmentation [228]. Of 43 articles in the human activity domain, 20 were in recognition [229]-[248], 8 in detection [249]-[256], 5 in classification [257]-[261], 4 in analysis [262]-[265], 3 in identification [266]-[268], and 1 each in comparison [269], monitoring [270], and assessment [271].…”
Section: Inclusion Criteria
confidence: 99%
“…[98], [100], [104], [116], [125], [127], [187], [206], [241], [326], [365], [395], [411], Image & Numerical [62], [75], [119], [126], [167], [313], [331], [353], [405], [410], Audio & Text & Sensor [384], Audio & Text [180], [282], [377], [391], [392], Text & Signal [109], Text & Numerical [304], [349], Sensor & Signal [240], [242], [258], [389], Sensor & Numerical [183], Signal & Numerical [205], [257], [260], [318]. Figure 10 displays the extracted information related to each modality and data type with the links between them.…”
Section: B Task
confidence: 99%
“…Indeed, the huge amount of data makes the use of Deep Neural Network (DNN) models possible. Many effective multi-task approaches have recently been developed to train DNN models on large-scale remote-sensing benchmarks (e.g., Cheng et al. 2020, Carvalho et al. 2019, Chen et al. 2017). The aim of these multi-task methods is to learn an embedding space from the different sensors (i.e., tasks).…”
Section: Previous Work
confidence: 99%
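The shared-embedding idea in the excerpt above, learning one embedding space across sensors treated as tasks, can be sketched as follows. This is a minimal illustration, not the architecture of any cited work: the shared trunk, per-sensor heads, feature dimensions, and the simple alignment loss are all assumptions.

```python
import torch
import torch.nn as nn

class MultiSensorEmbedding(nn.Module):
    """Per-sensor projection heads feeding one shared trunk."""

    def __init__(self, in_dims, embed_dim=128):
        super().__init__()
        # One lightweight projection per sensor into a common width.
        self.heads = nn.ModuleList(nn.Linear(d, embed_dim) for d in in_dims)
        # Shared trunk applied to every sensor's projection.
        self.trunk = nn.Sequential(nn.ReLU(), nn.Linear(embed_dim, embed_dim))

    def forward(self, inputs):
        # inputs: one tensor per sensor, shapes (batch, in_dims[i]).
        return [self.trunk(head(x)) for head, x in zip(self.heads, inputs)]

# Toy multi-task step: pull embeddings of the same scenes from two sensors together.
model = MultiSensorEmbedding(in_dims=[300, 120])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
optical = torch.randn(8, 300)   # e.g., optical features
sar = torch.randn(8, 120)       # e.g., SAR features of the same scenes
z_opt, z_sar = model([optical, sar])
loss = nn.functional.mse_loss(z_opt, z_sar)  # simple cross-sensor alignment objective
loss.backward()
opt.step()
```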
“…Visual object tracking faces multiple difficulties, including blur, partial occlusion, camera motion, scale variation, illumination variation, and background clutter. In unconstrained situations, achieving efficient and reliable tracking remains a difficult task [3].…”
Section: Introduction
confidence: 99%