Modern hyperspectral imaging systems produce huge datasets that potentially convey a great abundance of information; such a resource, however, poses many challenges for the analysis and interpretation of these data. Deep learning approaches offer a wide variety of opportunities for solving classical imaging tasks and for approaching stimulating new problems in the spatial–spectral domain. This is fundamental in the driving sector of Remote Sensing, where hyperspectral technology was born and has mostly developed, but it is perhaps even more true in the multitude of current and emerging application sectors that involve these imaging technologies. The present review develops on two fronts: on the one hand, it is aimed at domain professionals who want an updated overview of how hyperspectral acquisition techniques can be combined with deep learning architectures to solve specific tasks in different application fields. On the other hand, it targets machine learning and computer vision experts by giving them a picture of how deep learning technologies are applied to hyperspectral data from a multidisciplinary perspective. The presence of these two viewpoints and the inclusion of application fields other than Remote Sensing are the original contributions of this review, which also highlights some potentialities and critical issues related to the observed development trends.
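As a concrete illustration of the spatial–spectral processing referred to above, the following PyTorch sketch classifies a hyperspectral patch with 3D convolutions that mix neighbouring bands and neighbouring pixels jointly. It is not taken from the review itself: the band count, patch size, and layer widths are arbitrary assumptions chosen only to keep the example self-contained.

```python
# Minimal spectral-spatial 3D CNN sketch (illustrative only; all sizes are assumptions).
import torch
import torch.nn as nn

class SpectralSpatialCNN(nn.Module):
    def __init__(self, num_bands=103, num_classes=9):
        super().__init__()
        # 3D convolutions treat the spectral axis as a third spatial dimension,
        # so each kernel mixes neighbouring bands and neighbouring pixels jointly.
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),   # collapse the spectral and spatial extents
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):              # x: (batch, 1, bands, height, width)
        return self.classifier(self.features(x).flatten(1))

# One 11x11 pixel neighbourhood with 103 bands (toy input, e.g. an airborne scene crop).
patch = torch.randn(1, 1, 103, 11, 11)
logits = SpectralSpatialCNN()(patch)   # (1, num_classes)
```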
The possibility of producing highly customized orthoses is receiving a boost thanks to the widespread diffusion of low-cost 3D printing technologies. However, rapid prototyping (RP) with 3D printers is only the final stage of a patient-personalized orthotics process. A reverse engineering (RE) process is in fact essential before RP, to digitize the 3D anatomy of interest and to process the obtained surface with suitable modeling software, in order to produce the virtual solid model of the orthosis to be printed. In this paper, we focus on the specific and demanding case of the customized production of hand orthoses. We design and test the essential steps of the entire production process, with particular emphasis on the accurate acquisition of the forearm geometry and on the subsequent production of a printable model of the orthosis. The choice of the various hardware and software tools (3D scanner, modeling software, and FDM printer) is aimed at mitigating design and production costs while guaranteeing suitable levels of data accuracy, process efficiency, and design versatility. Finally, the proposed method is critically analyzed so that residual issues and critical aspects are highlighted, in order to discuss possible alternative approaches and to derive insightful observations that could guide future research activities.
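As a rough illustration of the modeling step that turns a digitized forearm surface into a printable shell, the Python sketch below offsets a scanned mesh along its vertex normals to build an outer surface of constant thickness. It is only a simplified sketch under assumed file names and thickness values; the actual pipeline described in the paper relies on dedicated modeling software and includes trimming, ventilation patterns, closures, and a proper watertight solid construction that are omitted here.

```python
# Simplified shell-generation sketch (hypothetical file names; not the paper's actual pipeline).
import trimesh

THICKNESS_MM = 3.0                       # assumed orthosis wall thickness

scan = trimesh.load("forearm_scan.stl")  # digitized forearm surface from the 3D scanner

# Offset every vertex outward along its normal to obtain the outer surface of the shell.
outer_vertices = scan.vertices + scan.vertex_normals * THICKNESS_MM
outer = trimesh.Trimesh(vertices=outer_vertices, faces=scan.faces, process=False)

# The inner surface keeps the anatomical geometry; flipping its faces makes the normals
# point inward so the two surfaces can later be stitched into one watertight solid
# (the stitching/boolean step is omitted here for brevity).
inner = scan.copy()
inner.invert()

shell = trimesh.util.concatenate([outer, inner])
shell.export("orthosis_shell.stl")       # model handed over to the FDM slicer
```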
This work describes the current state of the art in scalable video coding (SVC), focusing on wavelet-based motion-compensated approaches. After recalling the requirements imposed by the multiple forms of video scalability (quality, picture size, frame rate), which typically coexist, it discusses the individual components that have been designed over the years to address the problem. The presentation then shows how such components are typically combined into meaningful architectures for video compression, which differ in the space-time order in which the wavelet transform operates, and discusses the strengths and weaknesses of the resulting implementations. The paper describes the Wavelet Video Reference architecture(s) studied by ISO/MPEG in its exploration of Wavelet Video Compression. It also attempts to draw a list of the major differences between wavelet-based solutions and the emerging SVC standard, jointly targeted by ITU and ISO/MPEG (JVT-SVC) and based on MPEG-4 AVC technologies. Major emphasis is devoted to a wavelet-based SVC (WSVC) solution, named STP-tool, which presents architectural similarities with JVT-SVC. The presentation continues by providing performance comparisons between the different approaches and draws some indications on the future trends being researched by the community to further improve current wavelet video codecs. Finally, insights are provided on application scenarios that could benefit from a wavelet-based approach.
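To make the temporal wavelet idea concrete, the sketch below applies a lifting-based Haar transform along the time axis of a short frame sequence, producing temporal low-pass and high-pass subbands. It is a deliberately minimal illustration: motion compensation, which in a real motion-compensated temporal filtering (MCTF) codec aligns each frame pair before filtering, is omitted, and the frames are random arrays rather than real video.

```python
# Minimal temporal Haar lifting sketch (no motion compensation; illustrative only).
import numpy as np

def temporal_haar_lifting(frames):
    """Split a frame sequence into temporal low-pass and high-pass subbands.

    frames: array of shape (T, H, W) with an even number of frames T.
    In a real MCTF codec the even/odd frames would first be aligned by
    motion-compensated prediction; here they are used as-is.
    """
    even, odd = frames[0::2], frames[1::2]
    high = odd - even                    # predict step: temporal detail
    low = even + 0.5 * high              # update step: temporal average
    return low, high

frames = np.random.rand(8, 144, 176)     # 8 QCIF-sized frames (assumed toy input)
low, high = temporal_haar_lifting(frames)

# Dropping 'high' halves the frame rate (temporal scalability); applying the same
# lifting recursively to 'low' builds further temporal decomposition levels.
low2, high2 = temporal_haar_lifting(low)
```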
Many functional and structural neuroimaging studies call for accurate morphometric segmentation of different brain structures starting from the image intensity values of MRI scans. Current automatic (multi-)atlas-based segmentation strategies often lack accuracy on difficult-to-segment brain structures and, since these methods rely on atlas-to-scan alignment, they may require long processing times. Alternatively, recent methods deploying solutions based on Convolutional Neural Networks (CNNs) are enabling the direct analysis of out-of-the-scanner data. However, current CNN-based solutions partition the test volume into 2D or 3D patches, which are processed independently. This entails a loss of global contextual information, thereby negatively impacting segmentation accuracy. In this work, we design and test an optimised end-to-end CNN architecture that makes the exploitation of global spatial information computationally tractable, allowing a whole MRI volume to be processed at once. We adopt a weakly supervised learning strategy by exploiting a large dataset of 947 out-of-the-scanner MR images (3 Tesla, T1-weighted, 1 mm isotropic, MP-RAGE 3D sequences). The resulting model is able to produce accurate multi-structure segmentation results in only a few seconds. Different quantitative measures demonstrate an improved accuracy of our solution when compared to state-of-the-art techniques. Moreover, through a randomised survey involving expert neuroscientists, we show that subjective judgements favour our solution over widely adopted atlas-based software.
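The sketch below illustrates, in PyTorch, the general idea of a fully convolutional 3D network that ingests an entire MRI volume and emits a per-voxel label map, rather than stitching together independently processed patches. Layer widths, the downsampling factor, the volume size, and the number of structures are arbitrary assumptions; this is not the architecture proposed in the paper.

```python
# Whole-volume 3D segmentation sketch (assumed sizes; not the paper's architecture).
import torch
import torch.nn as nn

class WholeVolumeSegNet(nn.Module):
    def __init__(self, num_structures=7):
        super().__init__()
        # Encoder: strided 3D convolutions shrink the volume so that the
        # receptive field covers a large portion of the head.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Decoder: transposed convolutions restore the original resolution
        # and output one score per voxel per structure.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, 2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, num_structures, 2, stride=2),
        )

    def forward(self, volume):          # volume: (batch, 1, D, H, W)
        return self.decoder(self.encoder(volume))

# A (downsampled) T1-weighted volume; real 1 mm isotropic scans would be larger.
volume = torch.randn(1, 1, 64, 64, 64)
label_scores = WholeVolumeSegNet()(volume)       # (1, num_structures, 64, 64, 64)
segmentation = label_scores.argmax(dim=1)        # per-voxel structure labels
```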
Working with noisy meshes and aiming to provide high-fidelity 3D object models without compromising the metric quality of the acquisitions, we propose a mesh denoising technique that, through a normal-diffusion process guided by a curvature saliency map, is able to preserve and emphasize the natural object features while allowing the introduction of a bound on the maximum distance from the original model. Moreover, both the positions of the mesh vertices and the edge orientations are optimized through a tailored geometric-aliasing correction. Thanks to an efficiently parallelized procedure, we are able to process even large models almost instantly, with a parameter configuration that does not depend on the scale of the object. An essential survey on mesh denoising is also presented, which serves to define a common framework in which to set up our solutions and the related technical and experimental comparisons. The presented results prove the effectiveness of our method, especially on the challenging target application profiles. Where competing techniques tend to inappropriately recover sharp edges while deforming the surrounding geometry or, on the contrary, to oversmooth shallow features, our method protects and enhances the natural object features and effectively reduces scanning noise on the smooth parts, while guaranteeing the prescribed metric fidelity to the input model.
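The sketch below gives a simplified picture of the two-stage idea underlying normal-diffusion denoising: face normals are first diffused over edge-adjacent faces, vertex positions are then updated to agree with the diffused normals, and each vertex displacement from the original model is clamped to a prescribed bound. The curvature-saliency weighting, the geometric-aliasing correction, and the parallelization of the actual method are all omitted; the file name, iteration count, and bound value are assumptions.

```python
# Two-stage normal-diffusion denoising sketch (simplified; not the paper's exact algorithm).
import numpy as np
import trimesh

MAX_DISPLACEMENT = 0.5                           # assumed bound on distance from the original model

mesh = trimesh.load("noisy_scan.ply")            # hypothetical noisy acquisition
original_vertices = np.array(mesh.vertices)

for _ in range(10):                              # a few diffusion/update iterations
    # Stage 1: diffuse face normals by averaging each face normal with those of its
    # edge-adjacent neighbours (uniform weights; saliency-guided weighting omitted).
    normals = np.array(mesh.face_normals)
    adj = mesh.face_adjacency                    # (m, 2) pairs of edge-adjacent faces
    smoothed = normals.copy()
    np.add.at(smoothed, adj[:, 0], normals[adj[:, 1]])
    np.add.at(smoothed, adj[:, 1], normals[adj[:, 0]])
    smoothed /= np.linalg.norm(smoothed, axis=1, keepdims=True)

    # Stage 2: move each vertex so that its incident faces better agree with the
    # diffused normals (normal-driven vertex update, uniform weights).
    centroids = mesh.triangles.mean(axis=1)      # (F, 3) face centroids
    update = np.zeros_like(original_vertices)
    counts = np.zeros(len(original_vertices))
    for face, n, c in zip(mesh.faces, smoothed, centroids):
        for v in face:
            update[v] += n * np.dot(n, c - mesh.vertices[v])
            counts[v] += 1
    new_vertices = np.array(mesh.vertices) + update / np.maximum(counts, 1)[:, None]

    # Enforce the metric-fidelity bound: clamp each vertex displacement measured
    # from the ORIGINAL model, not from the previous iteration.
    offset = new_vertices - original_vertices
    dist = np.linalg.norm(offset, axis=1, keepdims=True)
    scale = np.minimum(1.0, MAX_DISPLACEMENT / np.maximum(dist, 1e-12))
    mesh.vertices = original_vertices + offset * scale
```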