Standard edge detectors react to all local luminance changes, irrespective of whether these are caused by the contours of the objects represented in a scene or by natural textures such as grass, foliage, or water. Moreover, edges due to texture are often stronger than edges due to object contours, so further processing is needed to discriminate object contours from texture edges. In this paper, we propose a biologically motivated multiresolution contour detection method using Bayesian denoising and a surround inhibition technique. Specifically, the proposed approach computes the gradient at different resolutions, followed by Bayesian denoising of the edge image. A biologically motivated surround inhibition step is then applied to suppress edges that are due to texture; we propose an improvement of the surround suppression used in previous works. Finally, a contour-oriented binarization algorithm is applied, relying on the observation that object contours form long connected components rather than the short rods produced by textures. Experimental results show that our contour detection method outperforms standard edge detectors as well as other methods that deploy inhibition.
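The core idea of surround inhibition can be sketched in a few lines: compute a gradient magnitude, estimate a local "texturedness" term from the surround of each pixel, and subtract it. This is only an illustrative sketch, not the paper's method: the ring-shaped average below is a crude stand-in for the difference-of-Gaussians surround weighting usually used in this literature, and all function names and parameters (`radius`, `alpha`) are hypothetical.

```python
import numpy as np

def gradient_magnitude(img):
    """Central-difference gradient magnitude of a grayscale image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def surround_inhibition(mag, radius=3, alpha=1.0):
    """Suppress texture by subtracting the average gradient magnitude
    over a square ring surround (a simple stand-in for a DoG surround)."""
    pad = np.pad(mag, radius, mode='edge')
    h, w = mag.shape
    acc = np.zeros_like(mag)
    n = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if max(abs(dy), abs(dx)) < radius:   # keep only the outer ring
                continue
            acc += pad[radius + dy: radius + dy + h,
                       radius + dx: radius + dx + w]
            n += 1
    inhibition = acc / n
    # half-wave rectification: responses cannot go negative
    return np.maximum(mag - alpha * inhibition, 0.0)
```

On a textured patch the surround term is roughly as large as the center response, so most of the response is cancelled, while an isolated contour keeps most of its strength.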
Abstract-Two important visual properties of paintings and painting-like images are the absence of texture details and the increased sharpness of edges compared to photographic images. Painting-like artistic effects can be achieved from photographic images by filters that smooth out texture details while preserving or enhancing edges and corners. However, not all edge-preserving smoothers are suitable for this purpose. We present a simple nonlinear local operator that generalizes both the well-known Kuwahara filter and the more general class of filters known in the literature as the "criterion and value filter structure." This class of operators suffers from intrinsic theoretical limitations that give rise to dramatic instability in the presence of noise, especially in shadowed areas. These limitations are discussed in the paper and overcome by the proposed operator. A large variety of experimental results shows that the output of the proposed operator is visually similar to a painting. Comparisons with existing techniques on a large set of natural images highlight conditions under which traditional edge-preserving smoothers fail, whereas our approach produces good results. In particular, unlike many other well-established approaches, the proposed operator is robust to degradations of the input image such as blurring and noise contamination.
Index Terms-Adaptive filters, edge/corner enhancers, image region analysis, nonlinear filters, painterly image processing, smoothing methods.
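For reference, the classic Kuwahara filter that this operator generalizes works as follows: each pixel looks at four overlapping (r+1)×(r+1) quadrant windows and takes the mean of the one with the smallest variance, which smooths flat regions while never averaging across a sharp edge. A minimal, deliberately unoptimized sketch:

```python
import numpy as np

def kuwahara(img, r=2):
    """Classic Kuwahara filter: replace each pixel with the mean of the
    least-variant of its four (r+1)x(r+1) quadrant windows."""
    img = img.astype(float)
    pad = np.pad(img, r, mode='reflect')
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            py, px = y + r, x + r        # coordinates in the padded image
            quads = [pad[py - r: py + 1, px - r: px + 1],   # NW
                     pad[py - r: py + 1, px: px + r + 1],   # NE
                     pad[py: py + r + 1, px - r: px + 1],   # SW
                     pad[py: py + r + 1, px: px + r + 1]]   # SE
            variances = [q.var() for q in quads]
            out[y, x] = quads[int(np.argmin(variances))].mean()
    return out
```

On a clean step edge every pixel has at least one quadrant lying entirely on its own side of the edge, so the step passes through unchanged; the instability discussed in the abstract arises precisely when noise makes the variance-based selection between quadrants arbitrary.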
Psychophysical and neurophysiological evidence about the human visual system shows the existence of a mechanism, called surround suppression, which inhibits the response to an edge in the presence of other similar edges in its surroundings. We previously proposed a simple computational model of this phenomenon, introducing an inhibition term that is meant to be high on texture and low on isolated edges. While such an approach discriminates between object contours and texture edges better than methods based on gradient magnitude alone, it has two drawbacks. First, a phenomenon called self-inhibition occurs, so that the inhibition term is quite high on isolated contours too; previous attempts to overcome self-inhibition result in slow and inelegant algorithms. Second, an input parameter called the "inhibition level" must be introduced, whose value is left to heuristics. The contribution of this paper is twofold: on one hand, we propose a new model for the inhibition term, based on the theory of steerable filters, that reduces self-inhibition; on the other hand, we introduce a simple method to combine the binary edge maps obtained at different inhibition levels, so that the inhibition level no longer needs to be specified by the user. The proposed approach is validated by a broad range of experimental results.
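The second contribution, combining binary edge maps from several inhibition levels into one parameter-free result, can be illustrated with a pixel-wise vote. The strict-majority rule below is an assumption for illustration only; the abstract does not state the paper's actual combination scheme.

```python
import numpy as np

def combine_edge_maps(maps):
    """Pixel-wise strict-majority vote over binary edge maps computed
    at different inhibition levels. The vote is an assumed combination
    rule used here for illustration; the paper's scheme may differ."""
    stack = np.stack([np.asarray(m, dtype=bool) for m in maps])
    # a pixel is an edge if more than half of the levels mark it
    return 2 * stack.sum(axis=0) > len(maps)
```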
In this paper we propose a multiscale, biologically motivated technique for contour detection by texture suppression. Standard edge detectors react to all local luminance changes, irrespective of whether they are due to the contours of the objects represented in the scene or to natural textures such as grass, foliage, and water. Moreover, edges due to texture are often stronger than edges due to true contours, so further processing is needed to discriminate true contours from texture edges. In this contribution we exploit the fact that, in a multiresolution analysis, only the edges due to object contours are present at coarser scales, while texture edges disappear. This is used in combination with surround inhibition, a biologically motivated technique for texture suppression, to build a contour detector that is insensitive to texture. Experimental results show that our approach is also robust to additive noise.
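The multiresolution observation can be sketched by gating the fine-scale gradient with the coarse-scale one: heavy smoothing wipes out high-frequency texture but leaves object contours detectable. The gating rule and thresholds below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude

def multiscale_contours(img, sigma_fine=1.0, sigma_coarse=4.0, tau=0.2):
    """Keep fine-scale edge responses only where a significant response
    also survives at the coarse scale, where texture has vanished."""
    img = img.astype(float)
    fine = gaussian_gradient_magnitude(img, sigma_fine)
    coarse = gaussian_gradient_magnitude(img, sigma_coarse)
    mask = coarse > tau * coarse.max()   # contours survive coarse smoothing
    return fine * mask
```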
Abstract-We consider the problem of detecting object contours in natural images. In many cases, local luminance changes turn out to be stronger in textured areas than on object contours. Therefore, local edge features, which only look at a small neighborhood of each pixel, cannot be reliable indicators of the presence of a contour, and some global analysis is needed. We introduce a new morphological operator, called adaptive pseudo-dilation (APD), which uses context-dependent structuring elements to identify long curvilinear structures in the edge map. We show that grouping edge pixels as the connected components of the output of APD agrees well with the gestalt law of good continuation. The novelty of this operator is that dilation is limited to the Voronoi cell of each edge pixel. An efficient implementation of APD is presented. The grouping algorithm is then embedded in a multithreshold contour detector: at each threshold level, small groups of edges are removed, and contours are completed by means of a generalized reconstruction from markers. The use of different thresholds makes the algorithm much less sensitive to the values of the input parameters. Both qualitative and quantitative comparisons with existing approaches demonstrate the superiority of the proposed contour detector, in terms of a larger amount of suppressed texture and more effective detection of low-contrast contours.
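The "remove small groups of edges" step can be illustrated with a plain connected-component size filter. This is a simplified stand-in for APD: it uses ordinary 8-connected components rather than the Voronoi-limited adaptive dilation the paper describes, and the names and threshold are hypothetical.

```python
import numpy as np
from scipy.ndimage import label

def keep_long_components(edge_map, min_size=10):
    """Drop short edge groups, keeping long curvilinear chains.
    Simplified stand-in for APD grouping: plain 8-connected
    components instead of Voronoi-limited adaptive dilation."""
    structure = np.ones((3, 3), int)       # 8-connectivity
    labels, _ = label(edge_map, structure=structure)
    sizes = np.bincount(labels.ravel())
    keep = sizes >= min_size
    keep[0] = False                        # label 0 is background
    return keep[labels]
```

Texture tends to produce many small components ("short rods"), which this filter removes, while genuine contours survive as long chains.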
A fast implementation of bilateral filtering is presented, based on an optimal expansion of the filter kernel into a sum of factorized terms. These terms are computed by minimizing the expansion error in the mean-square-error sense, which leads to a simple and elegant solution in terms of the eigenvectors of a square matrix. In this way, the bilateral filter is applied by computing a few Gaussian convolutions, for which very efficient algorithms are readily available. Moreover, the expansion functions are optimized for the histogram of the input image, leading to improved accuracy. It is shown that this further optimization is made possible by removing the commonly deployed constraint of shiftability of the basis functions. Experimental validation is carried out in the context of digital rock imaging: results on large 3D images of rock samples show the superiority of the proposed method with respect to other fast approximations of bilateral filtering.
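The general idea of reducing a bilateral filter to a few Gaussian convolutions can be sketched with the simplest member of this family: evaluate the range kernel at a handful of fixed intensity levels, convolve each weighted image spatially, and interpolate. This piecewise-linear expansion is only a baseline illustration; the paper instead derives an MSE-optimal, histogram-weighted basis via an eigenvector computation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fast_bilateral(img, sigma_s=2.0, sigma_r=0.1, n_levels=8):
    """Approximate bilateral filtering as a small sum of Gaussian
    convolutions, sampling the range kernel at fixed intensity levels
    and interpolating between them."""
    img = img.astype(float)
    levels = np.linspace(img.min(), img.max(), n_levels)
    num = np.zeros((n_levels,) + img.shape)
    den = np.zeros_like(num)
    for j, lam in enumerate(levels):
        w = np.exp(-0.5 * ((img - lam) / sigma_r) ** 2)  # range kernel at lam
        num[j] = gaussian_filter(w * img, sigma_s)       # weighted image
        den[j] = gaussian_filter(w, sigma_s)             # normalization
    # linearly interpolate num/den at each pixel's own intensity
    t = (img - levels[0]) / (levels[-1] - levels[0]) * (n_levels - 1)
    j0 = np.clip(t.astype(int), 0, n_levels - 2)
    a = t - j0
    yy, xx = np.indices(img.shape)
    n_val = (1 - a) * num[j0, yy, xx] + a * num[j0 + 1, yy, xx]
    d_val = (1 - a) * den[j0, yy, xx] + a * den[j0 + 1, yy, xx]
    return n_val / np.maximum(d_val, 1e-12)
```

The cost is `n_levels` Gaussian convolutions instead of a full per-pixel weighted average; the paper's contribution is choosing a better (non-shiftable, histogram-adapted) set of expansion functions so that fewer terms are needed.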
Abstract. We propose an algorithm that groups points similarly to how human observers do. It is simple, fully unsupervised, and able to find clusters of complex and not necessarily convex shape. Groups are identified as the connected components of a Reduced Delaunay Graph (RDG), which we define in this paper. Our method can be seen as an algorithmic equivalent of the gestalt law of perceptual grouping by proximity. We introduce a measure of dissimilarity between two different groupings of a point set and use this measure to compare our algorithm with human visual perception and with the k-means clustering algorithm. Our algorithm mimics human perceptual grouping and outperforms the k-means algorithm in all cases that we studied. We also sketch a potential application to the segmentation of structural textures.
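The overall pipeline, triangulate, reduce the graph, take connected components, can be sketched as follows. The reduction rule used here (drop edges longer than a multiple of the mean edge length) is a common heuristic assumed for illustration; the paper's RDG definition is more refined.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points, factor=2.0):
    """Cluster points as connected components of a reduced Delaunay
    graph: triangulate, drop edges longer than `factor` times the mean
    edge length (an assumed reduction rule), and label components."""
    tri = Delaunay(points)
    edges = set()
    for simplex in tri.simplices:            # collect unique triangle edges
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    edges = np.array(sorted(edges))
    lengths = np.linalg.norm(points[edges[:, 0]] - points[edges[:, 1]], axis=1)
    keep = edges[lengths <= factor * lengths.mean()]
    # union-find over the surviving edges
    parent = np.arange(len(points))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    for a, b in keep:
        parent[find(a)] = find(b)
    roots = np.array([find(i) for i in range(len(points))])
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

Because grouping follows graph connectivity rather than distance to a centroid, arbitrarily shaped (including non-convex) clusters fall out naturally, which is exactly where k-means struggles.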