2022
DOI: 10.1007/s00371-022-02488-0
NIR/RGB image fusion for scene classification using deep neural networks

Cited by 13 publications (5 citation statements)
References 83 publications
“…Due to the success of deep learning models across diverse computer vision tasks, researchers have also employed various deep learning models for indoor scene classification. Soroush et al. [29] introduced a novel fusion method that leverages near-infrared (NIR) and RGB data to enhance scene recognition and classification. Labinghisa et al. [26] presented a scene recognition method based on image-based location awareness (IILAA) and clustering algorithms.…”
Section: Deep Learning Models
confidence: 99%
“…Wozniak et al. [28] introduced a deep neural network algorithm for indoor place recognition, using transfer learning to classify images from a humanoid robot. Soroush et al. [29] presented new fusion techniques for scene recognition and classification, utilizing both NIR and RGB sensor data. Heikel et al. [30] presented a novel approach that employs an object detector (YOLO) to detect indoor objects, which are then used as features for predicting room categories.…”
Section: Introduction
confidence: 99%
“…Fully connected layers further transform and abstract the fused features, leading to an output layer that generates the final fused image. This architecture allows the network to leverage the strengths of each modality and enhance the understanding of the scene, making it a valuable tool in various applications [21].…”
Section: B. Data Preprocessing
confidence: 99%
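The cited architecture is not reproduced in this report; as a minimal sketch of the fusion pattern the quote describes (per-modality features concatenated, then fully connected layers transforming the fused representation into an output image), with all sizes hypothetical and plain NumPy standing in for a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical feature vectors from separate RGB and NIR branches
# (in the real network these would come from convolutional encoders).
rgb_feat = rng.standard_normal(128)
nir_feat = rng.standard_normal(128)

# Fusion by concatenation of the two modality features.
fused = np.concatenate([rgb_feat, nir_feat])   # shape (256,)

# Fully connected layers transform and abstract the fused features.
W1 = rng.standard_normal((64, 256)) * 0.05
hidden = relu(W1 @ fused)                      # shape (64,)

# Output layer producing a toy 32x32 "fused image".
W2 = rng.standard_normal((32 * 32, 64)) * 0.05
fused_image = (W2 @ hidden).reshape(32, 32)    # shape (32, 32)

print(fused.shape, hidden.shape, fused_image.shape)
```

The design point in the quote is that each modality keeps its own feature path until the concatenation, so the fully connected stage can weigh NIR cues against RGB cues per scene.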
“…Literature [22] proposed a multi-feature fusion method based on weighted sequence fusion to obtain fused feature vectors from active and passive millimeter-wave imagery of urban and rural areas, applied to millimeter-wave imaging target recognition; this method was shown to outperform recognition based on the original feature vectors. Literature [23] adopts RGB and NIR image fusion to improve scene classification, and on that basis proposes a technique built on improved visually salient points; simulation experiments show its overall performance improves on the original method. Literature [24] proposed an LSTM network modeling mechanism built on the TensorFlow framework, which is more portable than the AR model, as reflected in faster and simpler data input.…”
Section: Introduction
confidence: 99%
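The LSTM model in literature [24] is not detailed in the quote; purely as an illustration of the recurrence such a network computes, here is a minimal single-cell LSTM step in NumPy (all dimensions and weight initializations hypothetical):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: gate pre-activations from input x and previous hidden h."""
    z = W @ x + U @ h + b          # stacked pre-activations, shape (4H,)
    H = h.shape[0]
    i = sigmoid(z[:H])             # input gate
    f = sigmoid(z[H:2 * H])        # forget gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:])         # candidate cell update
    c_new = f * c + i * g          # new cell state
    h_new = o * np.tanh(c_new)     # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 8, 16                       # toy input and hidden sizes
W = rng.standard_normal((4 * H, D)) * 0.1
U = rng.standard_normal((4 * H, H)) * 0.1
b = np.zeros(4 * H)

h = np.zeros(H)
c = np.zeros(H)
for t in range(5):                 # unroll over a short toy sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, U, b)

print(h.shape, c.shape)
```

In practice this recurrence would be expressed with a framework layer (e.g. a Keras LSTM layer in TensorFlow, as [24] uses), which handles batching and sequence input.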