Linear canonical transforms (LCTs) are a family of integral transforms with wide application in optical, acoustical, electromagnetic, and other wave propagation problems. The Fourier and fractional Fourier transforms are special cases of LCTs. We present the exact relation between continuous and discrete LCTs (which generalizes the corresponding relation for Fourier transforms), and also express it in terms of a new definition of the discrete LCT (DLCT), which is independent of the sampling interval. This provides the foundation for approximately computing the samples of the LCT of a continuous signal with the DLCT. The DLCT in this letter is analogous to the DFT and approximates the continuous LCT in the same sense that the DFT approximates the continuous Fourier transform. We also define the bicanonical width product, which is a generalization of the time-bandwidth product.

Index Terms: Bicanonical width product, fractional Fourier transform, linear canonical series, linear canonical transform.
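To make concrete the sense in which such a discrete transform approximates its continuous counterpart, the Fourier special case can be sketched numerically: samples of the continuous Fourier transform of a well-concentrated signal are approximated by a scaled, recentered DFT of its samples. The grid size, sampling interval, and Gaussian test signal below are illustrative choices, not values from the letter.

```python
import numpy as np

# Illustrative setup: a Gaussian f(t) = exp(-pi t^2), which is its own
# Fourier transform under the convention F(f) = integral f(t) exp(-2*pi*i*f*t) dt.
N = 256
T = 8.0 / N                       # sampling interval; signal support roughly [-4, 4]
t = (np.arange(N) - N // 2) * T   # centered time grid
f_sig = np.exp(-np.pi * t**2)

# Approximate samples of the continuous FT at frequencies k/(N*T):
# F(k/(N*T)) ~ T * DFT of the samples, with centering handled by fftshift/ifftshift.
F_dft = T * np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f_sig)))
freqs = np.fft.fftshift(np.fft.fftfreq(N, d=T))

F_exact = np.exp(-np.pi * freqs**2)    # analytic FT of the Gaussian
err = np.max(np.abs(F_dft - F_exact))  # small discretization/truncation error
```

Because the Gaussian is well confined in both time and frequency on this grid, the DFT-based samples agree with the continuous transform to near machine precision; the DLCT plays the analogous role for general LCTs.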
Linear canonical transforms (LCTs) form a three-parameter family of integral transforms with wide application in optics. We show that LCT domains correspond to scaled fractional Fourier domains and thus to scaled oblique axes in the space-frequency plane. This allows LCT domains to be labeled and ordered by the corresponding fractional order parameter and provides insight into the evolution of light through an optical system modeled by LCTs. If a set of signals is highly confined to finite intervals in two arbitrary LCT domains, the space-frequency (phase space) support is a parallelogram. The number of degrees of freedom of this set of signals is given by the area of this parallelogram, which is equal to the bicanonical width product but usually smaller than the conventional space-bandwidth product. The bicanonical width product, which is a generalization of the space-bandwidth product, can provide a tighter measure of the actual number of degrees of freedom, and allows us to represent and process signals with fewer samples.
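The parallelogram-shaped support described above can be illustrated numerically. In this sketch (widths and angles are illustrative, not values from the paper), confinement to extent w1 in a fractional domain at angle theta1 and extent w2 at angle theta2 intersects two strips in the space-frequency plane; the resulting parallelogram has area w1*w2/|sin(theta2 - theta1)|, which is the bicanonical width product, while its axis-aligned bounding rectangle corresponds to the conventional space-bandwidth product.

```python
import numpy as np

def phase_space_support(width1, width2, theta1, theta2):
    """Corner points of the parallelogram cut out by confining a signal to
    extent width1 in a fractional domain at angle theta1 and extent width2
    in one at angle theta2 (angles in radians; a sketch, not the paper's code)."""
    n1 = np.array([np.cos(theta1), np.sin(theta1)])  # strip normals
    n2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.array([n1, n2])
    # Intersect the four strip boundaries u.n1 = +/-w1/2, u.n2 = +/-w2/2.
    verts = [np.linalg.solve(A, [s1 * width1 / 2, s2 * width2 / 2])
             for s1 in (-1, 1) for s2 in (-1, 1)]
    return np.array(verts)

w1, w2, th1, th2 = 4.0, 2.0, 0.0, np.pi / 3
v = phase_space_support(w1, w2, th1, th2)

# Parallelogram area = bicanonical width product:
bwp = w1 * w2 / abs(np.sin(th2 - th1))
# Bounding rectangle = conventional space-bandwidth product, never smaller:
sbp = np.ptp(v[:, 0]) * np.ptp(v[:, 1])
print(bwp, sbp)
```

When the two domains are orthogonal (theta2 - theta1 = pi/2), the parallelogram is a rectangle and the two products coincide, recovering the familiar space-bandwidth product.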
We show how to explicitly determine the space-frequency window (phase-space window) for optical systems consisting of an arbitrary sequence of lenses and apertures separated by arbitrary lengths of free space. If the space-frequency support of a signal lies completely within this window, the signal passes without information loss. When it does not, the parts that lie within the window pass and the parts that lie outside of the window are blocked, a result that is valid to a good degree of approximation for many systems of practical interest. Also, the maximum number of degrees of freedom that can pass through the system is given by the area of its space-frequency window. These intuitive results provide insight and guidance into the behavior and design of systems involving multiple apertures and can help minimize information loss.
The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, their reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and superresolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which has little additional computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but is also more robust to different initializations and noise levels.

Keywords: phase retrieval · deep learning · inverse problems · image reconstruction
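For reference, the model-based HIO method that the two DNNs are combined with can be sketched as follows. This is a minimal textbook-style version of Fienup's iteration assuming a known support mask and a nonnegativity constraint; the parameter values and random initialization are illustrative, not those of the paper.

```python
import numpy as np

def hio(magnitude, support, n_iter=200, beta=0.9, seed=None):
    """Minimal sketch of Fienup's hybrid input-output (HIO) iteration for
    recovering an image from its Fourier magnitude, given a known support
    mask (True inside the support)."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape) * support  # random supported initialization
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        # Fourier-domain constraint: keep the phase, impose the measured magnitude.
        Gp = magnitude * np.exp(1j * np.angle(G))
        gp = np.real(np.fft.ifft2(Gp))
        # Object-domain constraint: keep gp where it satisfies support and
        # nonnegativity; elsewhere apply the HIO feedback g - beta * gp.
        violated = (~support) | (gp < 0)
        g = np.where(violated, g - beta * gp, gp)
    return g
```

The iterate is not guaranteed to converge and is sensitive to the random start, which is precisely the behavior the DNN-based refinement in the paper is designed to mitigate.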
Photon sieves, modifications of Fresnel zone plates, are a new class of diffractive image-forming devices that open up new possibilities for high-resolution imaging and spectroscopy, especially in the UV and x-ray regimes. In this paper, we develop a novel computational photon sieve imaging modality that enables high-resolution spectral imaging. For spatially incoherent illumination, we study the problem of recovering the individual spectral images from the superimposed and blurred measurements of the proposed photon sieve system. This inverse problem, which can be viewed as a multiframe deconvolution problem involving multiple objects, is formulated as a maximum a posteriori (MAP) estimation problem and solved using a fixed-point algorithm. The performance of the proposed technique is illustrated for EUV spectral imaging through numerical simulations. The results suggest that higher spatial and spectral resolution can be achieved compared to conventional spectral imagers.
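The fixed-point flavor of such an estimation can be illustrated with a multiframe Richardson-Lucy iteration, a standard fixed-point scheme for Poisson-likelihood deconvolution from several differently blurred frames. This is only a hedged stand-in for the paper's algorithm: the single-object circular-convolution model, the PSFs, and the iteration count below are illustrative assumptions.

```python
import numpy as np

def multiframe_rl(frames, psfs, n_iter=50, eps=1e-12):
    """Multiframe Richardson-Lucy sketch: a fixed-point iteration for
    Poisson maximum-likelihood deconvolution from several frames, each
    blurred by a different normalized PSF (circular convolution via FFT)."""
    otfs = [np.fft.fft2(p) for p in psfs]
    conv = lambda img, otf: np.real(np.fft.ifft2(np.fft.fft2(img) * otf))
    x = np.full(frames[0].shape, np.mean(frames))  # flat positive initialization
    for _ in range(n_iter):
        ratio = np.zeros_like(x)
        for y, otf in zip(frames, otfs):
            # Correlate the data ratio with the PSF (adjoint blur = conj OTF).
            ratio += conv(y / (conv(x, otf) + eps), np.conj(otf))
        x = x * ratio / len(frames)  # multiplicative fixed-point update
    return x
```

A useful sanity property of this update (with PSFs summing to one) is flux conservation: the total intensity of the estimate matches the average total intensity of the frames at every iteration.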
We study the degrees of freedom of optical systems and signals based on space-frequency (phase space) analysis. At the heart of this study is the relationship of the linear canonical transform domains to the space-frequency plane. Based on this relationship, we discuss how to explicitly quantify the degrees of freedom of first-order optical systems with multiple apertures, and give conditions for lossless transfer. Moreover, we focus on the degrees of freedom of signals in relation to the space-frequency support and provide a sub-Nyquist sampling approach to represent signals with arbitrary space-frequency support. Implications for simulating optical systems are also discussed.
Near-field multiple-input multiple-output (MIMO) radar imaging systems are of interest in diverse fields such as medicine, through-wall imaging, airport security, concealed weapon detection, and surveillance. The successful operation of these radar imaging systems highly depends on the quality of the images reconstructed from radar data. Since the underlying scenes can typically be represented sparsely in some transform domain, sparsity priors can effectively regularize the image formation problem and hence enable high-quality reconstructions. In this paper, we develop an efficient three-dimensional image reconstruction method that exploits sparsity in near-field MIMO radar imaging. Sparsity is enforced using total variation regularization, and the reflectivity distribution is reconstructed iteratively without requiring computation with huge matrices. The performance of the developed algorithm is illustrated through numerical simulations. The results demonstrate the effectiveness of the sparsity-based method compared to a classical image reconstruction method in terms of image quality.
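A minimal sketch of such TV-regularized, matrix-free reconstruction follows. The forward operator A, its adjoint At, the smoothed (differentiable) TV penalty, and all parameter values are illustrative stand-ins; the actual near-field MIMO radar observation model and solver are not reproduced here.

```python
import numpy as np

def tv_grad(x, eps=0.05):
    """Gradient of a smoothed isotropic TV penalty
    sum sqrt(dx^2 + dy^2 + eps^2) for a 2-D image (a sketch)."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    # Negative divergence of the normalized gradient field.
    return (np.roll(px, 1, axis=1) - px) + (np.roll(py, 1, axis=0) - py)

def tv_reconstruct(y, A, At, lam=0.1, step=0.05, n_iter=200):
    """Gradient-descent sketch for min_x 0.5*||A x - y||^2 + lam*TV(x).
    A and At are matrix-free function handles for the forward operator
    and its adjoint, so no large system matrix is ever formed."""
    x = At(y)  # adjoint-based initialization
    for _ in range(n_iter):
        x = x - step * (At(A(x) - y) + lam * tv_grad(x))
    return x
```

With the smoothing parameter eps and the small step size chosen here, each iteration is a plain descent step on a differentiable objective; replacing A and At with the radar system's forward model and adjoint keeps the whole reconstruction matrix-free, as the abstract describes.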