We propose a nonlinear multiscale decomposition of signals defined on the vertex set of a general weighted graph. This decomposition is inspired by the hierarchical multiscale (BV, L^2) decomposition of Tadmor, Nezzar, and Vese (Multiscale Model. Simul. 2(4), 2004). We find the decomposition by iterative regularization using a graph variant of the classical total variation regularization (Rudin et al., Physica D 60(1-4):259-268, 1992). Using tools from convex analysis, in particular Moreau's identity, we carry out the mathematical study of the proposed method, proving the convergence of the representation and providing an energy decomposition result. The choice of the sequence of scales is also addressed. Our study shows that the initial scale can be related to a discrete version of Meyer's norm (Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, 2001), which we introduce in the present paper. We propose to use the recent primal-dual algorithm of Chambolle and Pock (J. Math. Imaging Vis. 40:120-145, 2011) to compute both the minimizer of the graph total variation and the corresponding dual norm. By applying the graph model to digital images, we investigate the use of nonlocal methods for the multiscale decomposition task. Since the only assumption needed to apply our method is that the input data lives on a graph, we are also able to tackle the task of adaptive multiscale decomposition.
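The hierarchical scheme can be illustrated on a simple 1D signal: each layer is the solution of a ROF-type problem on the current residual, with the regularization weight halved at every level so that successive layers capture ever finer scales. The sketch below is a minimal stand-in, not the paper's method: it uses a basic Chambolle-style dual projection solver in place of the primal-dual algorithm and ordinary 1D total variation in place of graph total variation; `tv_denoise_1d` and `hierarchical_decomposition` are hypothetical names introduced here.

```python
def tv_denoise_1d(f, lam, n_iter=500):
    # Solve min_u 0.5*||u - f||^2 + lam*TV(u) by projected gradient
    # ascent on the dual variable p (one component per edge).
    n = len(f)
    p = [0.0] * (n - 1)
    tau = 0.25  # step size, safe since ||D||^2 <= 4 in 1D
    for _ in range(n_iter):
        # primal from dual: u = f - D^T p, with (D^T p)_i = p_{i-1} - p_i
        u = [f[i] - ((p[i - 1] if i > 0 else 0.0) - (p[i] if i < n - 1 else 0.0))
             for i in range(n)]
        # dual ascent step followed by projection onto [-lam, lam]
        p = [max(-lam, min(lam, p[i] + tau * (u[i + 1] - u[i])))
             for i in range(n - 1)]
    return [f[i] - ((p[i - 1] if i > 0 else 0.0) - (p[i] if i < n - 1 else 0.0))
            for i in range(n)]

def hierarchical_decomposition(f, lam0=2.0, levels=4):
    # Tadmor-Nezzar-Vese style hierarchy: f = u_0 + u_1 + ... + residual,
    # halving the TV weight at each level (equivalently doubling the scale).
    layers, v, lam = [], list(f), lam0
    for _ in range(levels):
        u = tv_denoise_1d(v, lam)
        layers.append(u)
        v = [a - b for a, b in zip(v, u)]  # residual carried to the next level
        lam /= 2.0
    return layers, v  # sum(layers) + residual reconstructs f
```

By construction the layers telescope, so summing them with the final residual recovers the input exactly, which mirrors the energy decomposition result stated in the abstract.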
The classical super-resolution (SR) setting starts with a set of low-resolution (LR) images related by subpixel shifts and tries to reconstruct a single high-resolution (HR) image. In some cases, partial observations of the HR image are also available. Completing the missing HR data without any reference to LR data is an inpainting (or completion) problem. In this paper, we consider the problem of recovering a single HR image from a pair consisting of a complete LR image and an incomplete HR image. This setting arises in particular when one wants to fuse image data captured at two different resolutions. We propose an efficient algorithm that takes advantage of both inputs by first learning nonlocal interactions from an interpolated version of the LR image using patches. Those interactions are then used in a convex energy function whose minimization yields a super-resolved complete image.
We consider the problem of recovering a high-resolution image from a pair consisting of a complete low-resolution image and a high-resolution but incomplete one. We refer to this task as the image zoom completion problem. After discussing possible contexts in which this setting may arise, we introduce a nonlocal regularization strategy, giving full details concerning the numerical optimization of the corresponding energy and discussing its benefits and shortcomings. We also derive two total variation-based algorithms and evaluate the performance of the proposed methods on a set of natural and textured images. We compare the results we obtain with those of two recent state-of-the-art single-image super-resolution algorithms.
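The structure of the energy can be conveyed by a toy 1D analogue, hedged as follows: this is not the paper's nonlocal or TV algorithm but a plain gradient descent on a convex energy with an LR average-pooling data term, a masked HR data term, and a Huber-smoothed total variation penalty; the function name `zoom_complete_1d` and all parameter values are illustrative assumptions.

```python
def zoom_complete_1d(lr, hr_obs, mask, lam=0.1, zoom=2, n_iter=4000, step=0.05):
    # Minimize ||A u - lr||^2 + ||mask*(u - hr_obs)||^2 + lam*TV_eps(u),
    # where A averages consecutive blocks of length `zoom` and TV_eps is
    # a Huber-smoothed total variation.
    n = len(lr) * zoom
    u = [lr[i // zoom] for i in range(n)]   # nearest-neighbor initialization
    eps = 0.1                               # TV smoothing parameter
    for _ in range(n_iter):
        g = [0.0] * n
        for j in range(len(lr)):            # LR data term (average pooling)
            block = sum(u[j * zoom:(j + 1) * zoom]) / zoom
            r = 2.0 * (block - lr[j]) / zoom
            for i in range(j * zoom, (j + 1) * zoom):
                g[i] += r
        for i in range(n):                  # partial HR observations
            if mask[i]:
                g[i] += 2.0 * (u[i] - hr_obs[i])
        for i in range(n - 1):              # smoothed TV gradient
            d = u[i + 1] - u[i]
            w = lam * d / (abs(d) + eps)
            g[i] -= w
            g[i + 1] += w
        u = [u[i] - step * g[i] for i in range(n)]
    return u
```

With a single HR observation disagreeing with the interpolated LR value, the minimizer redistributes intensity within the affected LR block while keeping the block average consistent with the LR datum, which is the fusion behavior the abstract describes.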
Abstract-In this paper, we propose a nonlocal approach based on graphs to segment raw point clouds as a particular class of graph signals. Using the framework of Partial difference Equations (PdEs), we propose a transcription on graphs of recent continuous global active contours along with a minimization algorithm. To apply it to point clouds, we show how to represent a point cloud as a graph weighted with patches. Experiments show the benefits of the approach on raw colored point clouds obtained from real scans.
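The graph construction step can be sketched as follows. This is a minimal assumption-laden illustration, not the paper's exact recipe: `build_patch_graph` is a hypothetical name, spatial k-nearest neighbors stand in for the actual neighborhood choice, and each point's "patch" is abstracted as a precomputed feature vector compared with a Gaussian kernel.

```python
import math

def build_patch_graph(points, feats, k=2, sigma=0.5):
    # Connect each point to its k spatial nearest neighbors; weight each
    # edge by the similarity of the per-point patch features.
    n = len(points)
    edges = {}
    for i in range(n):
        nearest = sorted(
            (sum((a - b) ** 2 for a, b in zip(points[i], points[j])), j)
            for j in range(n) if j != i
        )[:k]
        for _, j in nearest:
            d2 = sum((a - b) ** 2 for a, b in zip(feats[i], feats[j]))
            # symmetric weight in (0, 1], stored once per undirected edge
            edges[(min(i, j), max(i, j))] = math.exp(-d2 / sigma ** 2)
    return edges
```

The resulting weighted graph is the object on which PdE-based active contour energies can then be minimized.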
Abstract. The decomposition of images into their meaningful components is one of the major tasks in computer vision. Tadmor, Nezzar, and Vese [1] have proposed a general approach for multiscale hierarchical decomposition of images. On the basis of this work, we propose a multiscale hierarchical decomposition of functions on graphs. The decomposition is based on a discrete variational framework that makes it possible to process arbitrary discrete data sets with the natural introduction of nonlocal interactions. This leads to an approach that can be used for the decomposition of images, meshes, or arbitrary data sets by taking advantage of the graph structure. To obtain a fully automatic decomposition, the issue of parameter selection is addressed in detail. We illustrate our approach with numerous decomposition results on images, meshes, and point clouds and show its benefits.
Abstract-We propose a new multiscale transform for scalar functions defined on the vertex set of a general undirected weighted graph. The transform is based on an adaptation of the lifting scheme to graphs. One of the difficulties in directly applying the lifting scheme to graphs is the partitioning of the vertex set. We follow a recent greedy approach and extend it to a multilevel transform. We carefully examine each step of the algorithm, in particular its effect on the underlying basis. We finally investigate the use of the proposed transform for image representation by computing M-term nonlinear approximation errors. We provide a comparison with standard orthogonal and biorthogonal wavelet transforms.
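One predict/update lifting step on a graph, given a vertex bipartition into "even" and "odd" sets, can be sketched as below. This is a generic mean-predict lifting step with a standard update, not the specific filters of the paper; `lifting_step` and `inverse_lifting_step` are names introduced here, and `neighbors[v]` is assumed to list the even neighbors of each odd vertex.

```python
def lifting_step(signal, neighbors, evens, odds):
    # Predict: the detail at an odd vertex is its value minus the mean
    # of its even neighbors.
    detail = {v: signal[v] - sum(signal[u] for u in neighbors[v]) / len(neighbors[v])
              for v in odds}
    # Update: push a fraction of each detail back onto the even neighbors
    # so the approximation coefficients stay balanced.
    approx = {u: signal[u] for u in evens}
    for v in odds:
        for u in neighbors[v]:
            approx[u] += detail[v] / (2 * len(neighbors[v]))
    return approx, detail

def inverse_lifting_step(approx, detail, neighbors, evens, odds):
    # Lifting is invertible by undoing the steps in reverse order.
    signal = {u: approx[u] for u in evens}
    for v in odds:                      # undo the update
        for u in neighbors[v]:
            signal[u] -= detail[v] / (2 * len(neighbors[v]))
    for v in odds:                      # undo the prediction
        signal[v] = detail[v] + sum(signal[u] for u in neighbors[v]) / len(neighbors[v])
    return signal
```

Perfect reconstruction holds by construction, whatever the partition, which is the key structural property that the multilevel transform in the paper builds on.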
In this paper we introduce a new unified framework for multiscale detail manipulation of graph signals. The key to this unification is to model any kind of data as signals defined on appropriate weighted graphs. Graph signals are represented as the sum of successive layers, each capturing a given scale of detail. Detail layers are obtained through a series of regularization procedures based on total variation penalization over graphs. Layers are then processed separately before being recombined, thus achieving detail manipulation. The benefit of the approach is shown on images, 3D meshes, and 3D colored point clouds.
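The layer-process-recombine pipeline can be conveyed with a deliberately simplified sketch: here a moving-average filter stands in for the paper's TV-based layering, a single base/detail split replaces the multiscale stack, and `box_smooth`/`boost_details` are hypothetical names. The point is only the pipeline shape: split, scale a detail layer, recombine.

```python
def box_smooth(signal, radius=2):
    # Coarse layer via a simple moving average (stand-in for TV layering).
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def boost_details(signal, gain=2.0, radius=2):
    # Split into base + detail, scale the detail layer, then recombine.
    base = box_smooth(signal, radius)
    return [b + gain * (s - b) for s, b in zip(signal, base)]
```

A gain of 1 reproduces the input exactly; gains above 1 exaggerate detail, gains below 1 attenuate it, which is the manipulation the abstract refers to.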
Fully convolutional networks (FCNs) are well known to provide state-of-the-art results in various medical image segmentation tasks. However, these models usually need a tremendous number of training samples to achieve good performance. Unfortunately, this requirement is often difficult to satisfy in medical imaging, due to the scarcity of labeled images. As a consequence, the common tricks for training FCNs range from data augmentation and transfer learning to patch-based segmentation. In the latter, the segmentation of an image involves patch extraction, patch segmentation, and patch aggregation. This paper presents a framework that takes advantage of all these tricks by starting with a patch-level segmentation, which is then extended to the image level by transfer learning. The proposed framework follows two main steps. Given an image database D, a first network NP is designed and trained using patches extracted from D. Then, NP is used to pre-train an FCN NI, which is trained on the full-sized images of D. Experimental results are presented on the task of retinal blood vessel segmentation using the well-known, publicly available DRIVE database.
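The extraction and aggregation halves of the patch-based pipeline (the network in between is omitted) can be sketched as below. The helper names `extract_patches` and `aggregate_patches` are assumptions, and images are represented as plain nested lists; aggregation averages predictions over overlapping patches.

```python
def extract_patches(img, size, stride):
    # Slide a size x size window over the image with the given stride,
    # recording each patch together with its top-left position.
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            out.append((y, x, [row[x:x + size] for row in img[y:y + size]]))
    return out

def aggregate_patches(preds, h, w, size):
    # Re-assemble per-patch predictions into a full image, averaging
    # wherever patches overlap.
    acc = [[0.0] * w for _ in range(h)]
    cnt = [[0] * w for _ in range(h)]
    for y, x, p in preds:
        for dy in range(size):
            for dx in range(size):
                acc[y + dy][x + dx] += p[dy][dx]
                cnt[y + dy][x + dx] += 1
    return [[acc[i][j] / cnt[i][j] if cnt[i][j] else 0.0 for j in range(w)]
            for i in range(h)]
```

With identity "predictions" (each patch returned unchanged), aggregation reconstructs the input image, which is a convenient sanity check before plugging in an actual patch-segmentation network.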