One approach to easing the construction of frames is to first construct local components and then build a global frame from these. In this paper we show that the study of the relation between a frame and its local components leads to the definition of a frame of subspaces. We introduce this new notion and prove that it provides the link we need. It will also turn out that frames of subspaces behave as a generalization of frames. In particular, we can define an analysis, a synthesis, and a frame operator for a frame of subspaces, which even yield a reconstruction formula. Concepts such as completeness, minimality, and exactness are also introduced and investigated. We further study several constructions of frames of subspaces, as well as of frames and Riesz frames, using the theory of frames of subspaces. An important special case is that of harmonic frames of subspaces, which generalize harmonic frames. We show that wavelet subspaces coming from multiresolution analysis belong to this class.
1991 Mathematics Subject Classification. Primary 42C15; Secondary 46C99.
Key words and phrases. Abstract frame theory, frame, harmonic frame, Hilbert space, resolution of the identity, Riesz basis, Riesz frame.
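To make the frame operator and the reconstruction formula concrete, here is a small numerical sketch in Python (not from the paper; the dimensions, subspaces, and weights below are hypothetical choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Hypothetical data: three 2-dimensional subspaces W_i of R^4, each with a
# weight v_i. Each B is an orthonormal basis of W_i (columns), so B @ B.T is
# the orthogonal projection onto W_i.
bases = [np.linalg.qr(rng.standard_normal((n, 2)))[0] for _ in range(3)]
weights = [1.0, 0.7, 1.3]
projs = [B @ B.T for B in bases]

# Frame operator of the frame of subspaces: S f = sum_i v_i^2 P_{W_i} f.
S = sum(v**2 * P for v, P in zip(weights, projs))

# The frame bounds are the extreme eigenvalues of S; a positive lower bound
# means the weighted subspaces form a frame of subspaces for R^4.
eigvals = np.linalg.eigvalsh(S)
lower, upper = eigvals[0], eigvals[-1]

# Reconstruction formula: f = S^{-1} (sum_i v_i^2 P_{W_i} f).
f = rng.standard_normal(n)
f_rec = np.linalg.solve(S, sum(v**2 * P @ f for v, P in zip(weights, projs)))
```

Since the three random 2-dimensional subspaces generically span R^4, the lower frame bound is positive and `f_rec` recovers `f` exactly.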
Abstract. It is known that the Continuous Wavelet Transform of a distribution f decays rapidly near the points where f is smooth, while it decays slowly near the irregular points. This property allows the identification of the singular support of f. However, the Continuous Wavelet Transform is unable to describe the geometry of the set of singularities of f and, in particular, to identify the wavefront set of a distribution. In this paper, we employ the same framework of affine systems which is at the core of the construction of the wavelet transform to introduce the Continuous Shearlet Transform. This is defined by SH_ψ f(a, s, t) = ⟨f, ψ_{ast}⟩, where the analyzing elements ψ_{ast} are dilated and translated copies of a single generating function ψ. The dilation matrices form a two-parameter matrix group consisting of products of parabolic scaling and shear matrices. We show that the elements {ψ_{ast}} form a system of smooth functions at continuous scales a > 0, locations t ∈ ℝ², oriented along lines of slope s ∈ ℝ in the frequency domain. We then prove that the Continuous Shearlet Transform exactly resolves the wavefront set of a distribution f.
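As a sketch (normalization conventions vary in the literature; the one below is an assumption), the two-parameter dilation group of parabolic scaling and shear matrices, and the resulting analyzing elements ψ_{ast}, can be written in Python as:

```python
import numpy as np

def parabolic_scaling(a):
    # Parabolic scaling matrix A_a = diag(a, sqrt(a)), a > 0.
    return np.array([[a, 0.0], [0.0, np.sqrt(a)]])

def shear(s):
    # Shear matrix S_s for slope parameter s in R.
    return np.array([[1.0, s], [0.0, 1.0]])

def shearlet_element(psi, a, s, t):
    # Sketch of an analyzing element (L2-normalization assumed):
    #   psi_{ast}(x) = |det M|^{-1/2} * psi(M^{-1} (x - t)),  M = S_s A_a,
    # so |det M| = a^{3/2} and the elements are normalized copies of psi.
    M = shear(s) @ parabolic_scaling(a)
    Minv = np.linalg.inv(M)
    norm = abs(np.linalg.det(M)) ** -0.5
    return lambda x: norm * psi(Minv @ (np.asarray(x, dtype=float) - t))
```

The transform value SH_ψ f(a, s, t) is then the inner product of f with such an element; the slope parameter s of the shear controls the orientation of the element in the frequency domain.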
In this paper we describe a new class of multidimensional representation systems, called shearlets. They are obtained by applying the actions of dilation, shear transformation, and translation to a fixed function, and they exhibit the geometric and mathematical properties, e.g., directionality, elongated shapes, scales, oscillations, recently advocated by many authors for sparse image processing applications. These systems can be studied within the framework of a generalized multiresolution analysis. This approach leads to a recursive algorithm for their implementation that generalizes the classical cascade algorithm.
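For orientation, the classical 1D cascade algorithm that the paper's recursive scheme generalizes can be sketched in a few lines of Python (Haar filter; the dyadic sampling used below is an illustrative assumption):

```python
import numpy as np

def cascade(h, levels=6):
    # Classical cascade algorithm: iterate the refinement relation
    #   phi_{n+1}(x) = sum_k sqrt(2) h[k] phi_n(2x - k),
    # starting from the box function, to approximate the refinable
    # function phi on a dyadic grid.
    phi = np.ones(1)
    for _ in range(levels):
        up = np.zeros(2 * len(phi))
        up[::2] = phi                        # upsample by 2
        phi = np.sqrt(2.0) * np.convolve(up, h)
    return phi

# Haar filter: the iteration reproduces the indicator of [0, 1).
haar = np.array([1.0, 1.0]) / np.sqrt(2.0)
phi = cascade(haar, levels=6)
```

With the Haar filter the iterates are exactly samples of the box function: after 6 levels the output consists of 64 ones followed by zeros.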
Let {W_i}_{i∈I} be a (redundant) sequence of subspaces of a Hilbert space, each endowed with a weight v_i, and let H be the closed linear span of the W_i, a composite Hilbert space. {(W_i, v_i)}_{i∈I} is called a fusion frame provided it satisfies a certain property which controls the weighted overlaps of the subspaces. These systems contain conventional frames as a special case; however, they reach far "beyond frame theory." In case each subspace W_i is equipped with a spanning frame system {f_{ij}}_{j∈J_i}, we refer to {(W_i, v_i, {f_{ij}}_{j∈J_i})}_{i∈I} as a fusion frame system. The focus of this article is on computational issues of fusion frame reconstruction, on properties of fusion frames important for applications, with particular attention to those in which they are superior to conventional frames, and on centralized versus distributed reconstruction and their numerical differences. The weighted and distributed processing technique described in this article is not only a natural fit for distributed processing systems such as sensor networks, but also an efficient scheme for parallel processing of very large frame systems. Another important component of this article is an extensive study of the robustness of fusion frame systems.
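The distributed flavor of fusion frame reconstruction can be illustrated with a small Python sketch (not from the paper; the dimensions, weights, and local frames below are hypothetical): each "node" computes only its local projection from local frame coefficients, and a central step combines them.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Hypothetical fusion frame system: each node i sees a subspace W_i of R^6
# through a redundant local spanning frame (columns of F).
local_frames = []
for dim, redundancy in [(3, 5), (2, 4), (4, 6)]:
    Q = np.linalg.qr(rng.standard_normal((n, dim)))[0]   # basis of W_i
    local_frames.append(Q @ rng.standard_normal((dim, redundancy)))
weights = [1.0, 0.5, 1.2]

def local_projection(F, f):
    # Each node recovers P_{W_i} f from its local frame coefficients alone:
    # F @ pinv(F) is the orthogonal projection onto W_i = range(F).
    return F @ (np.linalg.pinv(F) @ f)

# Centralized step: invert the fusion frame operator
#   S f = sum_i v_i^2 P_{W_i} f.
f = rng.standard_normal(n)
S = sum(v**2 * F @ np.linalg.pinv(F) for v, F in zip(weights, local_frames))
f_rec = np.linalg.solve(S, sum(v**2 * local_projection(F, f)
                               for v, F in zip(weights, local_frames)))
```

Since the three random subspaces generically span R^6, the fusion frame operator is invertible and the combined local projections reproduce f exactly.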
Image data are often composed of two or more geometrically distinct constituents; in galaxy catalogs, for instance, one sees a mixture of pointlike structures (galaxy superclusters) and curvelike structures (filaments). It would be ideal to process a single image and extract two geometrically 'pure' images, each one containing features from only one of the two geometric constituents. This seems to be a seriously underdetermined problem, but recent empirical work achieved highly persuasive separations. We present a theoretical analysis showing that accurate geometric separation of point and curve singularities can be achieved by minimizing the ℓ¹ norm of the representing coefficients in two geometrically complementary frames: wavelets and curvelets. Driving our analysis is a specific property of the ideal (but unachievable) representation where each content type is expanded in the frame best adapted to it. This ideal representation has the property that important coefficients are clustered geometrically in phase space, and that at fine scales, there is very little coherence between a cluster of elements in one frame expansion and individual elements in the complementary frame. We formally introduce notions of cluster coherence and clustered sparsity and use this machinery to show that the underdetermined systems of linear equations can be stably solved by ℓ¹ minimization; microlocal phase space helps organize the calculations that cluster coherence requires.
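A drastically simplified 1D analogue (an illustrative assumption, not the paper's wavelet/curvelet setting) shows the separation principle in Python: spikes sparse in the standard basis plus a smooth wave sparse in a DCT basis are separated by ℓ¹-penalized least squares, solved here by plain iterative soft thresholding.

```python
import numpy as np

def soft(x, t):
    # Soft thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def separate(x, D1, D2, lam=0.05, iters=500):
    # Minimize 0.5*||x - D1 c1 - D2 c2||^2 + lam*(||c1||_1 + ||c2||_1) by
    # proximal gradient descent (ISTA). D1, D2 are orthonormal bases, so the
    # gradient is 2-Lipschitz and step = 0.5 guarantees convergence.
    c1, c2 = np.zeros(D1.shape[1]), np.zeros(D2.shape[1])
    step = 0.5
    for _ in range(iters):
        r = x - D1 @ c1 - D2 @ c2            # residual
        c1 = soft(c1 + step * (D1.T @ r), step * lam)
        c2 = soft(c2 + step * (D2.T @ r), step * lam)
    return D1 @ c1, D2 @ c2

# 1D stand-in for "points + curves": spikes (sparse in the standard basis D1)
# plus a smooth wave (sparse in the orthonormal DCT-II basis D2).
n = 128
j, k = np.arange(n)[:, None], np.arange(n)[None, :]
D1 = np.eye(n)
D2 = np.sqrt(2.0 / n) * np.cos(np.pi * (j + 0.5) * k / n)
D2[:, 0] /= np.sqrt(2.0)
spikes = np.zeros(n)
spikes[[10, 70]] = [3.0, -2.0]
wave = 2.0 * D2[:, 3]
part1, part2 = separate(spikes + wave, D1, D2)
```

Because the two bases are mutually incoherent, the ℓ¹ penalty assigns each component to the frame best adapted to it: `part1` recovers the spikes and `part2` the wave, up to the small shrinkage bias of the penalty.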
We derive fundamental lower bounds on the connectivity and the memory requirements of deep neural networks guaranteeing uniform approximation rates for arbitrary function classes in L²(ℝ^d). In other words, we establish a connection between the complexity of a function class and the complexity of deep neural networks approximating functions from this class to within a prescribed accuracy. Additionally, we prove that our lower bounds are achievable for a broad family of function classes. Specifically, all function classes that are optimally approximated by a general class of representation systems, so-called affine systems, can be approximated by deep neural networks with minimal connectivity and memory requirements. Affine systems encompass a wealth of representation systems from applied harmonic analysis such as wavelets, ridgelets, curvelets, shearlets, α-shearlets, and more generally α-molecules. Our central result elucidates a remarkable universality property of neural networks and shows that they achieve the optimum approximation properties of all affine systems combined. As a specific example, we consider the class of α^{-1}-cartoon-like functions, which is approximated optimally by α-shearlets. We also explain how our results can be extended to the case of functions on low-dimensional immersed manifolds. Finally, we present numerical experiments demonstrating that the standard stochastic gradient descent algorithm generates deep neural networks providing close-to-optimal approximation rates. Moreover, these results indicate that stochastic gradient descent can actually learn approximations that are sparse in the representation systems optimally sparsifying the function class the network is trained on.
Throughout the paper, we consider the case Φ : ℝ^d → ℝ, i.e., N_L = 1, which includes situations such as the classification and temperature prediction problems described above. We emphasize, however, that the general results of Sections 3, 4, and 5 are readily generalized to N_L > 1. We denote the class of networks Φ : ℝ^d → ℝ with exactly L layers, connectivity no more than M, and activation function ρ by NN_{L,M,d,ρ}, with the understanding that for L = 1 the set NN_{L,M,d,ρ} is empty. Moreover, we let NN_{∞,M,d,ρ} := ⋃_{L∈ℕ} NN_{L,M,d,ρ}, NN_{L,∞,d,ρ} := ⋃_{M∈ℕ} NN_{L,M,d,ρ}, and NN_{∞,∞,d,ρ} := ⋃_{L∈ℕ} NN_{L,∞,d,ρ}. Now, given a function f : ℝ^d → ℝ, we are interested in the theoretically best possible approximation of f by a network Φ ∈ NN_{∞,M,d,ρ}. Specifically, we want to know how the approximation quality depends on the connectivity M, and what the associated number of bits needed to store the network topology is.
Suppose that ∑_{i=1}^{7} c_i f(· − d_i) is compactly supported, has 7 vanishing moments in the x₁-direction, and that ĝ(ξ) = 0 for all ξ ∈ [−3, 3]² such that ξ₁ = 0. Then, by Theorem 6.4 and Remark 6.7, there exists δ > 0 such that SH_α(f, g, δ; Ω) is optimal for E^{1/α}(Ω; ν). We define […] where we order (A_j)_{j∈ℕ} such that |det(A_j)| ≤ |det(A_{j+1})| for all j ∈ ℕ. This construction implies that the α-shearlet system SH_α(f, g, δ; Ω) is an affine system.
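The network class NN_{L,M,d,ρ} is parametrized by the depth L and the connectivity M, the number of nonzero weights and biases. As a minimal sketch (not from the paper; the `Network` class and the example weights are hypothetical illustrations), the following Python code counts the connectivity of a small ReLU network Φ : ℝ² → ℝ:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class Network:
    # Minimal sketch of a network Phi: R^d -> R with activation rho = ReLU.
    # "Connectivity" counts the nonzero weights and biases, the quantity M
    # in which the approximation-rate bounds are phrased.
    def __init__(self, layers):                  # layers: list of (W, b)
        self.layers = layers

    def connectivity(self):
        return sum(int(np.count_nonzero(W)) + int(np.count_nonzero(b))
                   for W, b in self.layers)

    def __call__(self, x):
        for W, b in self.layers[:-1]:
            x = relu(W @ x + b)
        W, b = self.layers[-1]
        return float((W @ x + b).item())         # N_L = 1: scalar output

# A sparse example with L = 2 layers and d = 2: Phi(x) = |x_1|, realized as
# relu(x_1) + relu(-x_1), with connectivity M = 4.
W1, b1 = np.array([[1.0, 0.0], [-1.0, 0.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.zeros(1)
phi = Network([(W1, b1), (W2, b2)])
```

The second input coordinate is never connected, so it contributes nothing to M; this sparsity in the weight matrices is exactly what the connectivity bounds control.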
Cartoon-like images, i.e., C² functions which are smooth apart from a C² discontinuity curve, have by now become a standard model for measuring sparse (non-linear) approximation properties of directional representation systems. It was already shown that curvelets, contourlets, as well as shearlets exhibit (almost) optimally sparse approximations within this model. However, all those results are only applicable to band-limited generators, whereas, in particular, spatially compactly supported generators are of utmost importance for applications. In this paper, we present the first complete proof of (almost) optimally sparse approximations of cartoon-like images by a particular class of directional representation systems which indeed consists of compactly supported elements. This class will be chosen as a subset of shearlet frames (not necessarily required to be tight) with shearlet generators having compact support and satisfying some weak moment conditions.