Multimodal hyperspectral point clouds, which carry both spectral features and three-dimensional spatial features, have shown great potential in various remote sensing applications. However, direct acquisition of this multimodal data is expensive and difficult owing to the rarity of such imaging equipment. Moreover, producing it through heterogeneous fusion of independently collected hyperspectral images and point cloud data poses great challenges. In this study, a multimodal hyperspectral point cloud generation method is proposed that uses a data-driven deep learning approach with only a single RGB image. The whole network unifies spectral super-resolution reconstruction, monocular 3D reconstruction, and data fusion to generate a hyperspectral point cloud. The quality of the generated data is verified through experiments on real plants, and both single-modality and multi-modality data quality are evaluated by estimating plant growth status. The hyperspectral point cloud obtained through low-cost RGB imaging not only avoids the independent acquisition of single-modality data with expensive professional equipment, but also eliminates the challenging fusion of multi-source heterogeneous data. More importantly, it enables simultaneous acquisition of high-resolution multidimensional data covering both the spectral and spatial information of the target. The synchronous acquisition of rich spectral and geometric information sheds light on a comprehensive understanding of the physical and biochemical properties of the target object.
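The pipeline described above couples three stages: spectral super-resolution (RGB to many-band spectra), monocular depth estimation, and fusion into a point cloud whose points carry both coordinates and spectra. The following is a minimal sketch of that data flow only; the two learned networks are replaced by hypothetical placeholder functions (simple interpolation and a constant depth plane), and the camera intrinsics (`fx`, `fy`) are assumed values, not parameters from the paper.

```python
import numpy as np

def spectral_super_resolution(rgb, n_bands=31):
    # Placeholder for a learned RGB-to-hyperspectral network:
    # here the 3 RGB channels are simply interpolated to n_bands.
    h, w, _ = rgb.shape
    xp = np.linspace(0.0, 1.0, 3)
    xq = np.linspace(0.0, 1.0, n_bands)
    flat = rgb.reshape(-1, 3)
    hsi = np.stack([np.interp(xq, xp, px) for px in flat])
    return hsi.reshape(h, w, n_bands)

def monocular_depth(rgb):
    # Placeholder for a learned monocular depth network:
    # returns a constant-depth plane purely for illustration.
    h, w, _ = rgb.shape
    return np.full((h, w), 2.0)

def fuse_to_hyperspectral_point_cloud(rgb, fx=500.0, fy=500.0):
    h, w, _ = rgb.shape
    hsi = spectral_super_resolution(rgb)   # (H, W, B) reconstructed spectra
    depth = monocular_depth(rgb)           # (H, W) per-pixel depth
    cx, cy = w / 2.0, h / 2.0
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Back-project each pixel to 3D using the pinhole camera model.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    xyz = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    spectra = hsi.reshape(-1, hsi.shape[-1])
    # Each output row is one point: XYZ followed by its spectrum.
    return np.concatenate([xyz, spectra], axis=1)

rgb = np.random.rand(4, 4, 3)
cloud = fuse_to_hyperspectral_point_cloud(rgb)
print(cloud.shape)  # (16, 34): 16 points, 3 coordinates + 31 bands
```

The fusion step here is trivially aligned because both modalities are generated from the same image grid, which is exactly the advantage the abstract claims over registering independently captured hyperspectral and LiDAR data.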