Dimensionality reduction is a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as Principal Component Analysis (PCA), are computationally demanding, which makes their implementation on high-performance computing architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely, an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore processor, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms, and hence reducing the time required to process a given hyperspectral image. Moreover, the results obtained with different hyperspectral images have been compared with those obtained with a recently published field programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
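The spectral dimensionality reduction the abstract describes can be sketched as follows. This is a minimal, illustrative NumPy version of PCA applied to a hyperspectral cube, not the parallel implementation discussed in the paper; the function name and cube layout (rows, columns, bands) are assumptions for illustration.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimensionality of a hyperspectral cube.

    cube: array of shape (rows, cols, bands).
    n_components: number of principal components to retain.
    """
    rows, cols, bands = cube.shape
    # Flatten spatial dimensions: one row per pixel, one column per band.
    X = cube.reshape(-1, bands).astype(np.float64)
    X -= X.mean(axis=0)                        # center each spectral band
    cov = (X.T @ X) / (X.shape[0] - 1)         # bands x bands covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]
    proj = X @ eigvecs[:, order]               # project onto leading components
    return proj.reshape(rows, cols, n_components)
```

The covariance computation and the per-pixel projection are the data-parallel hot spots that a GPU or manycore implementation would accelerate.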
Hyperspectral Imaging (HSI) techniques have demonstrated the potential to provide useful information in a broad set of applications across different domains, from precision agriculture to environmental science. A first step in preparing the algorithms to be deployed outdoors takes place at the laboratory level, where a large number of samples is captured, analysed, and processed in order to extract the necessary information about the spectral characteristics of the studied samples as precisely as possible. In this article, a custom-made scanning system for hyperspectral image acquisition is described. Commercially available components have been carefully selected and integrated into a flexible infrastructure able to obtain data from any Generic Interface for Cameras (GenICam)-compliant device over the Gigabit Ethernet interface. The entire setup has been tested using the Specim FX hyperspectral series (FX10 and FX17), and a Graphical User Interface (GUI) has been developed to control the individual components and visualise data. The morphological analysis, spectral response, and optical aberration of these pushbroom-type hyperspectral cameras have been evaluated prior to the validation of the whole system with different plastic samples, for which spectral signatures are extracted and compared with well-known spectral libraries.
Linear spectral unmixing is currently one of the most active research topics within the hyperspectral imaging community, as evidenced by the vast number of papers on this challenging task that can be found in the scientific literature. A subset of these works is devoted to accelerating previously published unmixing algorithms for applications under tight time constraints. For this purpose, hyperspectral unmixing algorithms are typically implemented on high-performance computing architectures in which the operations involved are executed in parallel, which leads to a reduction in the time required to unmix a given hyperspectral image with respect to the sequential versions of these algorithms. The speedup factors that can be achieved by these high-performance computing platforms heavily depend on the inherent level of parallelism of the algorithms executed on them. However, the majority of state-of-the-art unmixing algorithms were not originally conceived to be parallelized at a later stage, which clearly restricts the amount of acceleration that can be reached. As advanced hyperspectral sensors offer increasingly high spatial, spectral, and temporal resolutions, it becomes mandatory to follow a new approach: developing a class of highly parallel unmixing solutions that can take full advantage of the characteristics of today's high-performance computing architectures. This paper represents a step forward in this direction, as it proposes a new parallel algorithm for fully unmixing a hyperspectral image, together with its implementation on two different NVIDIA graphics processing units (GPUs).
The results obtained reveal that our proposal is able to unmix hyperspectral images with very different spatial patterns and sizes better and much faster than the best GPU-based unmixing chains published to date, regardless of the characteristics of the selected GPU.

Index Terms: Compute unified device architecture (CUDA), graphics processing unit (GPU), high-performance computing, hyperspectral unmixing, parallel programming.
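Under the linear mixing model that underpins this abstract, each pixel spectrum is modelled as a non-negative, sum-to-one combination of endmember signatures. The following sketch recovers per-pixel abundances with unconstrained least squares followed by clipping and renormalisation, a simplification of fully constrained unmixing; it is illustrative only and is not the parallel algorithm the paper proposes.

```python
import numpy as np

def unmix_pixel(pixel, endmembers):
    """Estimate fractional abundances under the linear mixing model.

    pixel:      (bands,) measured spectrum.
    endmembers: (bands, p) matrix whose columns are endmember signatures.
    """
    # Unconstrained least-squares fit of pixel ~= endmembers @ abundances.
    ab, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    ab = np.clip(ab, 0.0, None)          # enforce non-negativity
    s = ab.sum()
    return ab / s if s > 0 else ab       # enforce sum-to-one
```

Because every pixel is unmixed independently, this step maps naturally onto the massive data parallelism of a GPU, which is what the proposed implementation exploits.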