To better inform investment decisions for future space processing, we are equipping an architectures laboratory to investigate the power and computing performance of candidate computing architectures for future space applications. The outlook for future space processing is increasingly complicated by ever-increasing data rates and volumes and by limited communications bandwidth, both of which will require more data processing, in the form of data reduction or compression, to be performed on orbit rather than on the ground. Candidate architectures for the laboratory are drawn from a range of COTS processing architectures, including low-power multicore processors, FPGAs, and GPUs. Because the drivers for these investments are likely to be data-intensive image processing applications, we selected two representative applications, Synthetic Aperture Radar (SAR) and Hyper-Temporal Imaging (HTI), and tested them on a variety of low-power multicore processors and, for comparison, on modern conventional processors. Both applications were parallelized using OpenMP and/or pthreads. The processors employed have four to eight cores, and the selection included examples of both homogeneous and heterogeneous computing architectures. State-of-the-art numerical libraries were used to extract the most performance possible. We also studied the effects of varying parameters such as the amount of memory made available to the processors, which determines how data decomposition is accomplished. In general, homogeneous computing architectures performed better than heterogeneous ones. In some cases, better performance was achieved with a single processor core with large memory than with multiple processors.
These results depend on the employed algorithm's ability to exploit architecture features efficiently and do not generalize to all application/architecture pairings, highlighting the need for a concerted effort to explore processing requirements for future space missions.
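The memory-dependent data decomposition described above can be sketched as follows. This is an illustrative example only, not the authors' code: the block size, the thread-pool parallelism, and the FFT kernel standing in for the SAR/HTI workload are all assumptions made for the sketch.

```python
# Illustrative sketch: decompose an image into row blocks whose size is
# bounded by an available-memory budget, then process blocks in parallel,
# analogous to the OpenMP/pthreads decomposition described in the abstract.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_block(block):
    # Stand-in for per-block SAR/HTI work (e.g., an FFT-based kernel).
    return np.fft.fft2(block)

def parallel_process(image, mem_budget_bytes, workers=4):
    # The memory budget determines how many rows fit in one block,
    # i.e., how the data decomposition is accomplished.
    bytes_per_row = image.shape[1] * image.itemsize
    rows_per_block = max(1, mem_budget_bytes // bytes_per_row)
    blocks = [image[i:i + rows_per_block]
              for i in range(0, image.shape[0], rows_per_block)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(process_block, blocks))
    return np.vstack(results)

img = np.random.rand(64, 32)
out = parallel_process(img, mem_budget_bytes=16 * 32 * 8, workers=4)
```

A smaller memory budget yields more, smaller blocks and hence more parallel tasks per image, which is the parameter trade the abstract refers to.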
Computer vision with a single-pixel camera is currently limited by a trade-off between reconstruction capability and image classification accuracy. If random projections are used to sample the scene, then reconstruction is possible but classification accuracy suffers, especially in cases with significant background signal. If data-driven projections are used, then classification accuracy improves and the effect of the background is diminished, but image recovery is not possible. Here, we employ a shallow neural network to nonlinearly map measurements acquired with random patterns to measurements acquired with data-driven patterns. The results demonstrate that this improves classification accuracy while still allowing for full reconstruction.
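The core idea of mapping one measurement domain to the other can be sketched with a one-hidden-layer network. This is a minimal sketch under stated assumptions, not the paper's model: the pattern matrices, hidden width, activation, and training setup are all placeholders chosen for illustration.

```python
# Sketch (assumptions, not the paper's model): a shallow network that maps
# measurements taken with random patterns (R @ x) toward measurements that
# would have been taken with data-driven patterns (D @ x).
import numpy as np

rng = np.random.default_rng(0)
n_pix, m = 64, 16                      # scene size and measurement count
R = rng.standard_normal((m, n_pix))    # random projection patterns
D = rng.standard_normal((m, n_pix))    # stand-in "data-driven" patterns

# Synthetic training scenes and their two measurement sets.
X = rng.standard_normal((1000, n_pix))
Y_rand, Y_dd = X @ R.T, X @ D.T

# Shallow network: linear -> tanh -> linear, trained by gradient descent.
h, lr = 32, 1e-3
W1 = rng.standard_normal((m, h)) * 0.1; b1 = np.zeros(h)
W2 = rng.standard_normal((h, m)) * 0.1; b2 = np.zeros(m)
losses = []
for _ in range(500):
    A = np.tanh(Y_rand @ W1 + b1)      # hidden activations
    P = A @ W2 + b2                    # predicted data-driven measurements
    err = P - Y_dd
    losses.append((err ** 2).mean())   # mean-squared error
    gW2 = A.T @ err / len(X); gb2 = err.mean(0)
    dA = (err @ W2.T) * (1 - A ** 2)   # backprop through tanh
    gW1 = Y_rand.T @ dA / len(X); gb1 = dA.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1
```

Because the random patterns can still be used to reconstruct the scene, while the network's output feeds the classifier, this mapping is what lets the abstract claim both full reconstruction and improved classification.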
This work addresses image degradation introduced by lossy compression techniques and the effects of such degradation on signal detection statistics for applications in fast-framing (>100 Hz) IR image analysis. As future space systems make use of increasingly higher pixel count IR focal plane arrays, data volumes are anticipated to become too large for continuous download. The prevailing solution to this issue has been to compress image data prior to downlink. While this solution is application-independent for lossless compression, the expected benefits of lossy compression, including higher compression ratio, necessitate several application-specific trades to characterize preservation of critical information within the data. Current analyses via standard statistical image processing techniques following tunably lossy compression algorithms (JPEG2000, JPEG-LS) allow for detection statistics nearly identical to analyses following standard lossless compression techniques, such as Rice and PNG, even at degradation levels offering a greater than twofold increase in compression ratio. Ongoing efforts focus on repeating the analysis for other tunably lossy compression techniques while also assessing the relative computational burden of each algorithm. Current results suggest that lossy compression techniques can preserve critical information in fast-framing IR data while either significantly reducing downlink bandwidth requirements or significantly increasing the usable focal plane array window size.
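The comparison of detection statistics before and after lossy degradation can be illustrated as follows. This is a hedged stand-in, not the study's pipeline: uniform quantization substitutes for JPEG2000/JPEG-LS loss, and the per-pixel temporal z-score is one plausible detection statistic, chosen here for illustration.

```python
# Hedged illustration: compare a per-pixel temporal z-score detection
# statistic on a fast-framing IR image stack before and after lossy
# degradation (uniform quantization standing in for JPEG2000/JPEG-LS loss).
import numpy as np

rng = np.random.default_rng(1)
frames = rng.normal(100.0, 2.0, size=(200, 32, 32))  # background stack
frames[150:, 16, 16] += 15.0                         # injected transient

def zscore_detect(stack):
    # Background statistics estimated from early (pre-transient) frames.
    mu = stack[:100].mean(0)
    sigma = stack[:100].std(0)
    return (stack[-1] - mu) / sigma    # detection statistic on last frame

step = 1.0                             # quantization step (lossy degradation)
lossy = np.round(frames / step) * step

z_ref = zscore_detect(frames)
z_lossy = zscore_detect(lossy)
```

In this toy case the transient pixel's statistic survives the degradation nearly unchanged, which mirrors the abstract's finding that moderately lossy compression can leave detection statistics essentially intact.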