Hyperspectral sensors provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not exempt from drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the Earth's surface. In this situation, efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increase in the data rate of new-generation sensors makes higher compression ratios ever more necessary, which in turn requires the use of lossy compression techniques. A new transform-based lossy compression algorithm, the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for the lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.
This paper presents the modeling, design, and implementation of two intellectual property (IP) cores that are compliant with the Consultative Committee for Space Data Systems (CCSDS) 121.0-B-2 and CCSDS 123.0-B-1 lossless satellite image compression standards. CCSDS 121.0-B-2 describes a lossless universal compressor based on adaptive Rice coding. The CCSDS 123.0-B-1 standard describes a lossless algorithm specifically designed for efficient on-board compression of hyperspectral and multispectral images, based on a prediction stage followed by entropy coding. Two options are offered for the latter: the sample-adaptive encoder and the block-adaptive encoder, which corresponds to the CCSDS 121.0-B-2 algorithm. These IP cores have been designed as independent compressors, but they can easily be combined in a plug-and-play fashion thanks to a dedicated interface. Additionally, standard interfaces are provided for configuration and external memory access. The design process encompasses the consideration of several different hardware architectures in order to maximize throughput while optimizing the use of on-board resources. Both IPs support the high degree of configurability defined in the standards. The resulting VHDL code is completely technology-independent, so it can be used to target any field-programmable gate array (FPGA) or ASIC of interest in the space environment, aiming to perform compression efficiently on board satellites despite the inherent
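The Rice coding mentioned above encodes each value as a unary quotient plus a fixed number of binary remainder bits. As a rough, hypothetical illustration only (the CCSDS 121.0-B-2 standard additionally selects the best coding option per block of samples, which is not modeled here), a basic Rice codec for a fixed parameter k can be sketched as:

```python
def rice_encode(value: int, k: int) -> str:
    """Encode a non-negative integer with a Rice code of parameter k.

    The quotient (value >> k) is written in unary (q ones and a zero),
    followed by the k low-order bits of the value in binary.
    """
    q = value >> k
    remainder = format(value & ((1 << k) - 1), f"0{k}b") if k else ""
    return "1" * q + "0" + remainder

def rice_decode(bits: str, k: int) -> int:
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")  # unary part: count of leading ones
    remainder = int(bits[q + 1 : q + 1 + k], 2) if k else 0
    return (q << k) | remainder
```

Small values produce short codewords, which is why Rice codes suit the low-entropy prediction residuals typical of these compressors.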
In this paper, we present the design, implementation and results of a set of IP cores that perform on-board hyperspectral image compression according to the CCSDS 123.0-B-1 lossless standard, specifically designed to be suited for on-board systems and for any kind of hyperspectral sensor. As entropy coder, either the sample-adaptive entropy coder defined in the 123.0-B-1 standard or the low-complexity block-adaptive encoder defined by the CCSDS 121.0-B-2 lossless standard can be used. Both IPs, 123.0-B-1 and 121.0-B-2, are part of SHyLoC 2.0 and can be used together for the compression of hyperspectral images; it is also possible to compress any kind of data using the 121-IP alone. SHyLoC 2.0 improves and extends the capabilities of SHyLoC 1.0, currently available in the ESA IP Cores library, increasing its compression efficiency and throughput without compromising the resources footprint. Moreover, it incorporates new features, such as the unit-delay predictor option defined by the CCSDS 121.0-B-2 standard and burst capabilities in the external memory interface of the CCSDS 123-IP, among others. Dedicated architectures have been designed for all the possible input image sample arrangements, in order to maximise throughput and reduce hardware resource utilization. The design is technology-agnostic, enabling the mapping of the VHDL code to different FPGAs or ASICs. Results are presented for a representative group of well-known space-qualified FPGAs, including the new NanoXplore BRAVE family. A maximum throughput of 150 MSamples/s is obtained for the Xilinx Virtex XQR5VFX130 when the SHyLoC 2.0 CCSDS-123 IP is configured in Band-Interleaved by Pixel (BIP) order, using only 4% of the LUTs and less than 1% of the internal memory.
INDEX TERMS Hyperspectral imaging, compression algorithms, field programmable gate arrays, hardware implementations, space missions, on-board data processing.
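The sample arrangements referenced above determine the order in which the compressor receives samples from the sensor. As a hypothetical illustration (not taken from the SHyLoC design itself), flattening a small cube indexed as band × row × column in Band-SeQuential (BSQ) versus Band-Interleaved by Pixel (BIP) order:

```python
def flatten_bsq(cube):
    """Band-SeQuential: all samples of band 0, then band 1, ...
    cube is indexed as cube[z][y][x] (band, row, column)."""
    return [cube[z][y][x]
            for z in range(len(cube))
            for y in range(len(cube[0]))
            for x in range(len(cube[0][0]))]

def flatten_bip(cube):
    """Band-Interleaved by Pixel: emit every band of a pixel before
    moving to the next pixel."""
    return [cube[z][y][x]
            for y in range(len(cube[0]))
            for x in range(len(cube[0][0]))
            for z in range(len(cube))]
```

In BIP order all spectral neighbors of a pixel arrive consecutively, which is one reason BIP configurations tend to allow the highest hardware throughput.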
Hyperspectral data processing is a computationally intensive task that is usually performed in high-performance computing clusters. However, in remote sensing scenarios, where communications are expensive, a compression stage is required at the edge of data acquisition before transmitting information to ground stations for further processing. Moreover, hyperspectral image compressors need to meet minimum performance and energy-efficiency levels to cope with the real-time requirements imposed by the sensors and the available power budget. Hence, they are usually implemented as dedicated hardware accelerators in expensive space-grade electronic devices. In recent years, though, these devices have started to coexist with low-cost commercial alternatives in which unconventional techniques, such as run-time hardware reconfiguration, are evaluated within research-oriented space missions (e.g., CubeSats). In this paper, a run-time reconfigurable implementation of a low-complexity lossless hyperspectral compressor (i.e., CCSDS 123) on a commercial off-the-shelf device is presented. The proposed approach leverages an FPGA-based on-board processing architecture with a data-parallel execution model to transparently manage a configurable number of resource-efficient hardware cores, dynamically adapting both throughput and energy efficiency. The experimental results show that this solution is competitive when compared with the current state-of-the-art hyperspectral compressors, and that the impact of the parallelization scheme on the compression rate is acceptable when considering the improvements in terms of performance and energy consumption. Moreover, scalability tests prove that run-time adaptation of the compression throughput and energy efficiency can be achieved by modifying the number of hardware accelerators, a feature that can be useful in space scenarios, where requirements change over time (e.g., communication bandwidth or power budget).
INDEX TERMS Data compression, dynamic and partial reconfiguration, FPGAs, high-performance embedded computing, hyperspectral images, on-board processing.
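The data-parallel scheme described above partitions the image into independent chunks, one per hardware core. The following is a software stand-in, not the actual FPGA architecture, with hypothetical names; it also shows why the parallelization affects the compression rate, since each chunk is compressed with its prediction context reset:

```python
from concurrent.futures import ThreadPoolExecutor

def split_rows(image, n_cores):
    """Partition image rows into contiguous chunks, one per core.
    Each chunk is compressed independently, so prediction context is
    reset at chunk boundaries -- the source of the (small)
    compression-rate penalty discussed above."""
    chunk = -(-len(image) // n_cores)  # ceiling division
    return [image[i:i + chunk] for i in range(0, len(image), chunk)]

def compress_parallel(image, n_cores, compress_fn):
    """Dispatch independent chunks to n_cores workers; changing
    n_cores at run time trades throughput against resource usage."""
    chunks = split_rows(image, n_cores)
    with ThreadPoolExecutor(max_workers=n_cores) as pool:
        return list(pool.map(compress_fn, chunks))
```

In the reconfigurable-hardware setting, varying `n_cores` corresponds to loading or unloading accelerator instances via partial reconfiguration.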
One of the traditional issues in space missions is the reliability of the electronic components on board spacecraft. There are numerous techniques to deal with this, from shielding and rad-hard fabrication to ad-hoc fault-tolerant designs. Although many of these solutions have been extensively studied, the recent utilization of FPGAs as the target architecture for many electronic components has opened new possibilities, partly due to the distinct nature of these devices. In this study, we performed fault injection experiments to determine whether a RISC-V soft processor implemented in an FPGA could be used as an on-board computer for space applications, and how the specific nature of FPGAs needs to be tackled differently from how ASICs have traditionally been handled. In particular, in this paper, the classic definition of the cross-section is revisited, putting into perspective the importance of the so-called "critical bits" in an FPGA design.
Electronics 2020, 9, 175
Physically shielding the devices to deflect radiation or fabricating them with so-called rad-hard processes are some alternatives. This type of approach usually implies hefty overheads in terms of cost, area, performance, and power consumption. Another approach consists of protecting the circuits by means of design techniques, usually by adding redundancy [4]. In this case, techniques range from classic schemes such as dual modular redundancy (DMR) or triple modular redundancy (TMR) to ad-hoc techniques that exploit behavioral or structural properties of the circuits to be protected. In any case, the effects of radiation and the most appropriate technique to deal with them strongly depend on the architecture of the circuit. Traditionally, manufacturing an application-specific integrated circuit (ASIC) has been the most usual way of implementing electronic circuits, since they used to provide the best possible performance and power consumption.
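The TMR scheme mentioned above replicates a circuit three times and votes on the outputs. The voter itself reduces to a bitwise majority function; a minimal sketch:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority voter for triple modular redundancy (TMR):
    each output bit takes the value held by at least two of the three
    replicated copies, masking a single upset in any one copy."""
    return (a & b) | (a & c) | (b & c)
```

A single bit flip in any one replica is masked, but two coincident upsets in the same bit position defeat the voter, which is why TMR is usually combined with periodic repair (e.g., configuration scrubbing in FPGAs).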
Errors produced by radiation in ASICs usually come in the shape of bit flips induced in the storage elements, or of transients that propagate through the circuit and can eventually be registered by a storage element. In both cases, errors can be modeled as the propagation of logic values through combinational and/or sequential nets [5]. However, in recent times, field-programmable gate arrays (FPGAs) have steadily become the predominant architecture for implementing digital circuits in space applications, especially those related to low-cost missions such as small satellites. The advantage of FPGAs is that they offer reduced cost together with high flexibility in terms of reconfiguration capability. Besides, the performance of FPGAs has improved enormously, making them appropriate for most kinds of applications. However, SRAM-based FPGA architectures (hereinafter referred to as FPGAs) are quite different from those of ASICs. In these FPGAs, although the user logic is still vulnerable to radiation, the configuration memory is also vulnerable, and may sometimes be the predominant source of errors, mainly due to its size [6]. If this h...
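Fault-injection campaigns of the kind discussed above can be emulated in software: flip one configuration bit at a time, re-evaluate the design, and record the bits whose upset changes the observable output (the "critical bits"). A toy model, with the design abstracted as a function of its configuration word (all names hypothetical):

```python
def find_critical_bits(config: int, n_bits: int, evaluate) -> list:
    """Exhaustive single-bit-flip injection over an n_bits-wide
    configuration word. A bit is critical if flipping it changes the
    design's observable output; the effective cross-section of the
    design scales with the number of critical bits, not with the
    total configuration memory size."""
    golden = evaluate(config)  # fault-free reference output
    return [i for i in range(n_bits)
            if evaluate(config ^ (1 << i)) != golden]
```

This captures the paper's point about revisiting the cross-section: most configuration bits are unused by a given design, so counting raw upsets over the whole configuration memory overstates the error rate the design actually experiences.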