Abstract: Optical coherence tomography (OCT), a recently emerged non-invasive imaging modality, is becoming an increasingly important diagnostic tool in various medical applications. One of its main limitations is the presence of speckle noise, which obscures small and low-intensity features. The use of multiresolution techniques has recently been reported by several authors with promising results. These approaches account for signal and noise properties in different ways: some exploit the global orientation properties of OCT images and accordingly apply different levels of smoothing in different orientation subbands, while others take into account local signal and noise covariances. So far it has been unclear how these approaches compare to each other and to the best available single-resolution despeckling techniques, and the clinical relevance of the denoising results also remains to be determined. In this paper we systematically review recent multiresolution OCT speckle filters and report the results of a comparative experimental study. We use 15 OCT images extracted from five three-dimensional volumes, and we also generate a software phantom with real OCT noise. These test images are processed with the different filters, and the results are evaluated both visually and in terms of several performance measures. The results indicate significant differences in the performance of the analyzed methods: wavelet techniques perform much better than single-resolution ones, and some of the wavelet methods remarkably improve the quality of OCT images.
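The wavelet shrinkage idea underlying many of the reviewed multiresolution filters can be sketched as follows. This is a minimal one-level Haar decomposition with soft thresholding of the detail coefficients in plain Python; the function names and the threshold value are illustrative assumptions, not taken from the paper, and real OCT despeckling would use 2-D multi-level transforms with orientation- or covariance-adaptive thresholds.

```python
import math

def haar_decompose(x):
    # One-level 1-D Haar transform: approximation and detail coefficients.
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def soft_threshold(coeffs, t):
    # Shrink coefficients toward zero; small, noise-like ones vanish.
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def haar_reconstruct(approx, detail):
    # Inverse one-level Haar transform.
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

def denoise(signal, threshold):
    # Suppress small detail coefficients, keep the coarse approximation.
    approx, detail = haar_decompose(signal)
    return haar_reconstruct(approx, soft_threshold(detail, threshold))
```

Smooth regions survive unchanged (their detail coefficients are zero), while small high-frequency fluctuations below the threshold are removed, which is the basic mechanism that multiresolution speckle filters refine with signal- and noise-adaptive thresholds.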
According to the World Health Organisation, 285 million people live with a visual impairment. Despite the fact that many efforts have been made recently, there is still no computer-guided system that is reliable, robust and practical enough to help these people to increase their mobility. Motivated by this shortcoming, we propose a novel obstacle detection system to assist the visually impaired. This work mainly focuses on indoor environments and performs classification of typical obstacles that emerge in these situations, using a 3D sensor. A total of four classes of obstacles are considered: walls, doors, stairs and a residual class (which covers loose obstacles and bumpy parts on the floor). The proposed system is very reliable in terms of the detection accuracy. In a realistic experiment, stairs are detected with 100% true positive rate and 8.6% false positive rate, while doors are detected with 86.4% true positive rate and 0% false positive rate.
Integrating video coding and denoising is a novel processing paradigm, bringing mutual benefits to both video processing tools. In this paper, we propose a novel video denoising approach whose main idea is to reuse motion estimation resources from the video coding module for video denoising. In most cases, the motion fields produced by real-time video codecs cannot be directly employed in video denoising, since codecs, as opposed to noise filters, can tolerate errors in the motion field. To solve this problem, we propose a novel motion-field filtering step that refines the accuracy of the motion estimates to the degree required for denoising. Additionally, a novel temporal filter is proposed that is robust against errors in the estimated motion field. Numerical results demonstrate that the proposed denoising scheme is of low complexity and compares favorably to state-of-the-art video denoising methods.
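A temporal filter that is robust against motion-field errors can be sketched along the following lines. This is a hypothetical illustration in plain Python, not the paper's actual filter: each pixel of the motion-compensated previous frame is blended with the current frame using a weight that decays when the two disagree strongly, so that motion-estimation errors fall back to the unfiltered current pixel.

```python
import math

def temporal_filter(curr, prev_mc, sigma=10.0):
    # Blend the current frame with the motion-compensated previous frame.
    # Pixels where the two frames disagree strongly (likely motion errors)
    # receive a near-zero temporal weight, so the output stays close to
    # the current frame there. sigma is an illustrative noise parameter.
    out = []
    for row_c, row_p in zip(curr, prev_mc):
        out_row = []
        for c, p in zip(row_c, row_p):
            w = math.exp(-((c - p) ** 2) / (2 * sigma ** 2))
            out_row.append((c + w * p) / (1 + w))
        out.append(out_row)
    return out
```

Where the motion estimate is accurate, the filter averages the two frames and reduces noise variance by roughly half; where it is wrong, the weight collapses and no ghosting is introduced.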
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular true-color (red-green-blue, RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea to rely on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and produces more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on pedestrian appearance, which is not guaranteed with methods that optimize only low-level image quality metrics.
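The two-term objective described above can be sketched as a weighted sum of a similarity error and a detection error. This is a schematic illustration only: `mse` stands in for the paper's similarity metric, `detection_error` is assumed to come from an external pedestrian detector, and the weights `alpha` and `beta` are hypothetical, not values from the paper.

```python
def mse(fused, rgb):
    # Mean squared error as a stand-in similarity metric between
    # the fused output and the RGB input (flattened pixel lists).
    return sum((f - r) ** 2 for f, r in zip(fused, rgb)) / len(fused)

def fusion_loss(fused, rgb, detection_error, alpha=1.0, beta=0.5):
    # Combined objective: stay close to the RGB appearance (first term)
    # while keeping the pedestrian detection error low (second term).
    return alpha * mse(fused, rgb) + beta * detection_error
```

During training, the similarity term pulls the fused image toward a natural RGB appearance, while the detection term injects thermal detail wherever it helps the detector find pedestrians.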