Artificial Intelligence and Machine Learning in Defense Applications 2019
DOI: 10.1117/12.2532794

Multimodal object detection using unsupervised transfer learning and adaptation techniques

Abstract: Deep neural networks achieve state-of-the-art performance on object detection tasks with RGB data. However, there are many advantages to detection using multi-modal imagery for defence and security operations. For example, the IR modality offers persistent surveillance and is essential in poor lighting conditions and 24-hour operation. It is, therefore, crucial to create an object detection system which can use IR imagery. Collecting and labelling large volumes of thermal imagery is incredibly expensive and time-…


Cited by 3 publications (7 citation statements)

References 21 publications
“…However, fine-tuning is not always possible due to a lack of labelled data. Paper [2] proposes a novel architecture for LWIR detection in an unsupervised manner for the first time. The authors use adaptation techniques (previously applied within the RGB domain for classification tasks) to create modality-invariant features in a Faster R-CNN network, improving LWIR detection.…”
Section: Related Work, 2.1 IR Detection
Confidence: 99%
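As a rough illustration of the adaptation idea this statement describes, the sketch below uses a standard domain-adversarial set-up (a gradient-reversal layer and a small domain discriminator on backbone features) to push an RGB-trained detector towards modality-invariant features. This is only a sketch under that assumption: the module names, discriminator shape, and loss weights are illustrative and are not taken from [2].

```python
# Minimal sketch of unsupervised feature adaptation for an RGB-trained detector,
# assuming a domain-adversarial set-up (gradient reversal on backbone features).
# Module names and hyper-parameters are illustrative, not the authors' code.
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) the gradient flowing back into the backbone.
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    """Predicts whether backbone features come from RGB or IR imagery."""
    def __init__(self, in_channels=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1),
        )

    def forward(self, feats, lam=1.0):
        return self.net(GradientReversal.apply(feats, lam))

def adaptation_loss(disc, rgb_feats, ir_feats, lam=0.1):
    """Domain-confusion loss: labelled RGB keeps the usual detection loss
    elsewhere, while unlabelled IR only contributes here, nudging the
    backbone towards features the discriminator cannot tell apart."""
    bce = nn.BCEWithLogitsLoss()
    rgb_logits = disc(rgb_feats, lam)
    ir_logits = disc(ir_feats, lam)
    return bce(rgb_logits, torch.ones_like(rgb_logits)) + \
           bce(ir_logits, torch.zeros_like(ir_logits))
```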
“…Since we aim to find where the objects are in an unsupervised manner, without requiring any labels, we use the output of the detection algorithm rather than ground truth. The real LWIR and RGB images pass through an RGB-trained Faster R-CNN network adapted for LWIR in the unsupervised manner described in [2]. Although this network is adapted for LWIR imagery, it maintains its RGB performance, so it can be used in both modalities simultaneously.…”
Section: Object-Specific CycleGAN Network
Confidence: 99%
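For concreteness, here is a minimal sketch of the pseudo-labelling step this snippet describes: a single detector is run on both the RGB frame and the corresponding LWIR frame, and its high-confidence detections stand in for ground truth. torchvision's pretrained Faster R-CNN is used here only as a stand-in for the adapted network of [2], and the score threshold is an assumed value.

```python
# Sketch: one detector shared across RGB and LWIR produces pseudo-labels,
# so no ground-truth annotation is needed (stand-in for the adapted network of [2]).
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def pseudo_label(image, score_thresh=0.8):
    """Return high-confidence boxes/labels for one 3xHxW tensor in [0, 1]."""
    out = detector([image])[0]              # dict with boxes, labels, scores
    keep = out["scores"] > score_thresh     # assumed threshold
    return out["boxes"][keep], out["labels"][keep]

# The same call is made on an RGB frame and on the corresponding LWIR frame
# (single-channel 1xHxW tensor replicated to 3 channels), e.g.:
# rgb_boxes, _  = pseudo_label(rgb_tensor)
# lwir_boxes, _ = pseudo_label(lwir_tensor.repeat(3, 1, 1))
```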
“…Abbott et al. [8] use RGB and infrared cameras to recognise pedestrians (Figure 2.44). They employ a loss function that combines the outputs of two neural networks, one per sensor, leading to a better detection rate.…”
Section: Video
Confidence: 99%
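The snippet above only states that outputs from the two sensor networks are combined in one loss; the exact form is not given. Below is one plausible sketch, assuming a per-modality detection loss plus a consistency term between matched class predictions. The weighting and the MSE form are assumptions, not Abbott et al.'s formulation.

```python
# Illustrative two-sensor objective: per-modality detection losses plus a
# consistency term coupling the RGB and IR networks' class predictions.
import torch
import torch.nn.functional as F

def combined_loss(rgb_det_loss, ir_det_loss, rgb_logits, ir_logits, alpha=0.5):
    """rgb_logits / ir_logits: class scores for matched proposals in each modality."""
    consistency = F.mse_loss(rgb_logits.softmax(dim=-1),
                             ir_logits.softmax(dim=-1))
    return rgb_det_loss + ir_det_loss + alpha * consistency
```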
“…This project aims to develop an autonomous car perception system combining low-THz radar, video and LiDAR for adverse weather scenarios. This thesis was developed as part of the Pervasive low-TeraHz and Video Sensing for Car Autonomy and Driver Assistance (PATH CAD) project, a collaboration between Heriot-Watt University and […]. Infrared is an alternative sensor for day and night perception [6][7][8]. Objects emit heat radiation based on their temperature.…”
Section: Chapter 1, Introduction
Confidence: 99%