We present a new approach to extracting spatially and temporally continuous ground deformation fields from interferometric synthetic aperture radar (InSAR) data. We focus on unwrapped interferograms from a single viewing geometry, estimating ground deformation along the line-of-sight. Our approach is based on a wavelet decomposition in space and a general parametrization in time. We refer to this approach as MInTS (Multiscale InSAR Time Series). The wavelet decomposition efficiently deals with commonly seen spatial covariances in repeat-pass InSAR measurements, since the coefficients of the wavelets are essentially spatially uncorrelated. Our time-dependent parametrization is capable of capturing both recognized and unrecognized processes, and is not arbitrarily tied to the times of the SAR acquisitions. We estimate deformation in the wavelet domain, using a cross-validated, regularized least squares inversion. We include a model-resolution-based regularization, in order to damp the model more heavily during periods of sparse SAR acquisitions than during periods of dense acquisitions. To illustrate the application of MInTS, we consider a catalog of 92 ERS and Envisat interferograms, spanning 16 years, in the Long Valley caldera, CA, region. MInTS analysis captures the ground deformation with high spatial density over the Long Valley region.
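The core of the inversion can be illustrated with a short sketch. The following Python fragment (not the authors' code) wavelet-transforms each interferogram with PyWavelets and solves an independent damped least-squares time-series inversion for every wavelet coefficient; the temporal dictionary (`time_basis`), the fixed damping `lam`, and the wavelet choice are illustrative assumptions standing in for the paper's cross-validated, model-resolution-based regularization.

```python
# Minimal sketch of the MInTS idea: invert for time-series parameters
# independently for each spatial wavelet coefficient. Tikhonov damping is
# used here in place of the paper's cross-validated regularization.
import numpy as np
import pywt  # PyWavelets

def time_basis(t):
    """Hypothetical temporal dictionary: secular rate + annual sinusoid."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([t, np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])

def mints_invert(ifgs, t_start, t_end, lam=0.1, wavelet="db4"):
    # ifgs: sequence of (ny, nx) unwrapped interferograms (LOS phase)
    # Each interferogram sees the difference of the time functions.
    G = time_basis(t_end) - time_basis(t_start)

    # Wavelet-transform every interferogram; coefficients are ~uncorrelated in space.
    coeffs0, slices = pywt.coeffs_to_array(pywt.wavedec2(ifgs[0], wavelet))
    D = np.empty((len(ifgs), coeffs0.size))
    D[0] = coeffs0.ravel()
    for k in range(1, len(ifgs)):
        arr, _ = pywt.coeffs_to_array(pywt.wavedec2(ifgs[k], wavelet))
        D[k] = arr.ravel()

    # Damped least squares, solved once for all wavelet coefficients.
    GtG = G.T @ G + lam**2 * np.eye(G.shape[1])
    M = np.linalg.solve(GtG, G.T @ D)            # (n_params, n_coeffs)

    def deformation_at(t):
        """Reconstruct the LOS deformation field at an arbitrary time t."""
        c = (time_basis(np.atleast_1d(t)) @ M).reshape(coeffs0.shape)
        return pywt.waverec2(
            pywt.array_to_coeffs(c, slices, output_format="wavedec2"), wavelet)

    return deformation_at
```

Because the temporal dictionary is evaluated at arbitrary times, the reconstructed field is not tied to the SAR acquisition epochs.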
When targeting small-amplitude surface deformation, repeat-orbit Interferometric Synthetic Aperture Radar (InSAR) observations can be plagued by propagation delays, some of which correlate with topographic variations. These topographically correlated delays result from temporal variations in the vertical stratification of the troposphere. An approximate model assuming a linear relationship between topography and interferometric phase has been used to correct observations with success in a few studies. Here, we present a robust approach to estimating the transfer function, K, between topography and phase that is relatively insensitive to confounding processes (earthquake deformation, phase ramps from orbital errors, tidal loading, etc.). Our approach takes advantage of a multiscale perspective by using a band-pass decomposition of both topography and observed phase. This decomposition into several spatial scales allows us to determine the bands wherein the correlation between topography and phase is significant and stable. When possible, our approach also takes advantage of any inherent redundancy provided by multiple interferograms constructed with common scenes. We define a unique set of component time intervals for a given suite of interferometric pairs. We estimate an internally consistent transfer function for each component time interval, which can then be recombined to correct any arbitrary interferometric pair. We demonstrate our approach on a synthetic example and on data from two locations: Long Valley Caldera, California, which experienced prolonged periods of surface deformation from pressurization of a deep magma chamber, and one coseismic interferogram from the 2007 Mw 7.7 Tocopilla earthquake in northern Chile. In both examples, the corrected interferograms show improvements in regions of high relief, independent of whether or not we pre-correct the data for a source model. We believe that most of the remaining signals are predominantly due to heterogeneous water vapor distribution that requires more sophisticated correction methods than those described here.
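As a rough illustration of the band-wise estimation, the following Python sketch (not the authors' implementation) band-passes both phase and topography with differences of Gaussians, keeps the bands whose correlation exceeds a threshold, and averages the per-band slopes into a single K. The band limits, the correlation threshold, and the single-interferogram setting (no common-scene redundancy or component time intervals) are simplifying assumptions.

```python
# Sketch of a multiscale estimate of the topography-to-phase transfer function K.
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass(field, s_fine, s_coarse):
    """Difference-of-Gaussians band-pass between two smoothing scales (pixels)."""
    return gaussian_filter(field, s_fine) - gaussian_filter(field, s_coarse)

def estimate_K(phase, dem, scales=((2, 8), (8, 32), (32, 128)), min_corr=0.7):
    ks, ws = [], []
    for s_fine, s_coarse in scales:
        p = bandpass(phase, s_fine, s_coarse).ravel()
        z = bandpass(dem, s_fine, s_coarse).ravel()
        r = np.corrcoef(p, z)[0, 1]
        if abs(r) >= min_corr:                  # keep stable, well-correlated bands
            ks.append(np.dot(z, p) / np.dot(z, z))  # LS slope: phase = K * topography
            ws.append(abs(r))
    return np.average(ks, weights=ws) if ks else None

# Correction of an interferogram (a constant offset can be fit afterwards):
# phase_corrected = phase - K * dem
```

Restricting the fit to well-correlated bands is what makes the estimate relatively insensitive to long-wavelength ramps and localized deformation.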
Since its beginnings, Mathematical Morphology has proposed extracting shapes from images as connected components of level sets. These methods have proved very efficient in shape recognition and shape analysis. In this paper, we present an improved method to select the most meaningful level lines (boundaries of level sets) from an image. This extraction can be based on statistical arguments, leading to a parameter-free algorithm. It allows one to roughly extract all pieces of level lines of an image that coincide with pieces of edges. By this method, the number of encoded level lines is reduced by a factor of 100, without any loss of shape content. In contrast to edge detection algorithms or snake methods, such a level line selection method delivers accurate shape elements without user parameters: no smoothing is involved, and the selection thresholds can be computed from the Helmholtz principle.
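A highly simplified Python sketch of this kind of selection, under assumptions stated in the comments, is given below: level lines are extracted as contours at sampled grey levels, and a line is kept when its number of false alarms (NFA), computed from the image's gradient-magnitude histogram in the spirit of the Helmholtz principle, falls below 1. The grey-level sampling and the spacing used to approximate independent samples are illustrative choices, not taken from the paper.

```python
# Sketch of "meaningful" level-line selection via an NFA criterion.
import numpy as np
from skimage import measure

def meaningful_level_lines(img, levels=32, spacing=2):
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    sorted_g = np.sort(grad.ravel())

    def tail(mu):
        # Empirical probability P(|grad| >= mu) over the whole image.
        return 1.0 - np.searchsorted(sorted_g, mu) / sorted_g.size

    candidates = []
    for lev in np.linspace(img.min(), img.max(), levels):
        for c in measure.find_contours(img, lev):
            # Gradient magnitude sampled at (approximately) independent points.
            idx = c[::max(1, spacing)].astype(int)
            g_on_line = grad[idx[:, 0], idx[:, 1]]
            candidates.append((c, g_on_line.min(), len(g_on_line)))

    n_lines = len(candidates)
    kept = []
    for c, mu_min, l in candidates:
        nfa = n_lines * tail(mu_min) ** l
        if nfa < 1.0:          # meaningful: unlikely under the background noise model
            kept.append(c)
    return kept
```

The threshold NFA < 1 is what removes the user parameter: it bounds the expected number of contours selected by chance alone.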
SUMMARY: We present a spherical wavelet-based multiscale approach for estimating a spatial velocity field on the sphere from a set of irregularly spaced geodetic displacement observations. Because the adopted spherical wavelets are analytically differentiable, spatial gradient tensor quantities such as dilatation rate, strain rate and rotation rate can be computed directly from the same coefficients. In a series of synthetic and real examples, we illustrate the benefit of the multiscale approach, in particular the inherent ability of the method to localize a given deformation field in space and scale, as well as to detect outliers in the set of observations. This approach has the added benefit of being able to locally match the smallest resolved process to the local spatial density of observations, thereby maximizing the amount of derived information while also allowing the comparison of derived quantities at the same scale but in different regions. We also consider the vertical component of the velocity field in our synthetic and real examples, showing that in some cases the spatial gradients of the vertical velocity field may constitute a significant part of the deformation. This formulation may be applied easily either regionally or globally and is ideally suited as the spatial parametrization used in any automatic time-dependent geodetic transient detector.
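To make the "differentiable basis" idea concrete, here is a flat-Earth Python sketch using Gaussian radial basis functions in place of the paper's spherical wavelets: the same coefficients that fit the observed velocities also yield analytic spatial derivatives, from which dilatation, shear, and rotation rates follow directly. The basis choice, the centre and scale selection, and the damping are all assumptions for illustration.

```python
# Sketch: fit scattered horizontal velocities with differentiable basis functions,
# then obtain strain-rate quantities analytically from the same coefficients.
import numpy as np

def gaussian_basis(x, y, cx, cy, s):
    """Basis values and their analytic x/y derivatives at points (x, y)."""
    dx, dy = x[:, None] - cx[None, :], y[:, None] - cy[None, :]
    g = np.exp(-(dx**2 + dy**2) / (2 * s[None, :] ** 2))
    return g, -dx / s[None, :] ** 2 * g, -dy / s[None, :] ** 2 * g

def fit_velocity_field(x, y, ve, vn, cx, cy, s, damp=1e-3):
    G, _, _ = gaussian_basis(x, y, cx, cy, s)
    A = G.T @ G + damp * np.eye(G.shape[1])
    me = np.linalg.solve(A, G.T @ ve)      # coefficients, east component
    mn = np.linalg.solve(A, G.T @ vn)      # coefficients, north component

    def evaluate(px, py):
        """Velocity and gradient quantities at evaluation points (px, py)."""
        g, gx, gy = gaussian_basis(px, py, cx, cy, s)
        dve_dx, dve_dy = gx @ me, gy @ me
        dvn_dx, dvn_dy = gx @ mn, gy @ mn
        dilatation = dve_dx + dvn_dy
        shear = 0.5 * (dve_dy + dvn_dx)
        rotation = 0.5 * (dve_dy - dvn_dx)
        return g @ me, g @ mn, dilatation, shear, rotation

    return evaluate
```

In the multiscale setting, basis functions of several scales would be included and the smallest scales retained only where the observation density supports them.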
Abstract: We describe a method that allows for accurate in-flight calibration of the interior orientation of any pushbroom camera and that in particular solves the problem of modeling the distortions induced by charge-coupled device (CCD) misalignments. The distortion induced on the ground by each CCD is measured using subpixel correlation between the orthorectified image to be calibrated and an orthorectified reference image that is assumed distortion free. Distortions are modeled as camera defects, which are assumed constant over time. Our results show that in-flight interior orientation calibration reduces internal camera biases by one order of magnitude. In particular, we fully characterize and model the Satellite Pour l'Observation de la Terre (SPOT) 4-HRV1 sensor, and we conjecture that distortions mostly result from the mechanical strain produced when the satellite was launched rather than from effects of on-orbit thermal variations or aging. The derived calibration models have been integrated into the software package Coregistration of Optically Sensed Images and Correlation (COSI-Corr), freely available from the Caltech Tectonics Observatory website. Such calibration models are particularly useful in reducing biases in digital elevation models (DEMs) generated from stereo matching and in improving the accuracy of change detection algorithms.
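The measurement principle can be caricatured in a few lines of Python. Assuming subpixel correlation has already produced disparity maps between the orthorectified image and the reference, the time-invariant CCD-induced distortion is isolated by stacking the disparities along-track; the median stack and the function name below are illustrative and are not part of COSI-Corr.

```python
# Schematic sketch: recover the static, per-column (per-detector) bias profile
# from disparity maps produced by subpixel image correlation.
import numpy as np

def ccd_distortion_profile(dx, dy, valid=None):
    """dx, dy: disparity maps (rows = along-track, cols = across-track), in pixels."""
    if valid is not None:
        dx = np.where(valid, dx, np.nan)
        dy = np.where(valid, dy, np.nan)
    # Distortions from CCD misalignment are assumed constant in time, hence
    # constant along-track: stacking rows isolates them from random noise
    # and from along-track attitude variations.
    profile_x = np.nanmedian(dx, axis=0)
    profile_y = np.nanmedian(dy, axis=0)
    return profile_x, profile_y
```

The resulting profiles can then be folded back into the camera's interior-orientation model as a per-detector look-direction correction.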
Deep neural networks trained using a softmax layer at the top and the cross-entropy loss are ubiquitous tools for image classification. Yet, this does not naturally enforce intra-class similarity or an inter-class margin on the learned deep representations. To achieve these two goals simultaneously, different solutions have been proposed in the literature, such as the pairwise or triplet losses. However, such solutions carry the extra task of selecting pairs or triplets, and the extra computational burden of computing and learning for many combinations of them. In this paper, we propose a plug-and-play loss term for deep networks that explicitly reduces intra-class variance and enforces an inter-class margin simultaneously, in a simple and elegant geometric manner. For each class, the deep features are collapsed into a learned linear subspace, or union of them, and inter-class subspaces are pushed to be as orthogonal as possible. Our proposed Orthogonal Low-rank Embedding (OLÉ) does not require carefully crafting pairs or triplets of samples for training, and works standalone as a classification loss, being the first reported deep metric learning framework of its kind. Because of the improved margin between features of different classes, the resulting deep networks generalize better, are more discriminative, and are more robust. We demonstrate improved classification performance in general object recognition, plugging the proposed loss term into existing off-the-shelf architectures. In particular, we show the advantage of the proposed loss in the small data/model scenario, and we significantly advance the state of the art on the Stanford STL-10 benchmark.
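A hedged PyTorch sketch of an OLÉ-style loss term is given below: per-class feature matrices are pushed toward low nuclear norm (low-rank, collapsed classes) while the full feature matrix keeps a high nuclear norm (classes spread into near-orthogonal subspaces). The margin value and the exact clamping follow a common simplification rather than the paper's precise formulation.

```python
# Sketch of an OLÉ-style low-rank / orthogonality loss term.
import torch

def ole_loss(features, labels, margin=1.0):
    """features: (N, D) deep features; labels: (N,) integer class labels."""
    total = torch.linalg.matrix_norm(features, ord="nuc")
    per_class = features.new_zeros(())
    for c in labels.unique():
        Xc = features[labels == c]
        # Clamp below by a margin so already-collapsed classes stop contributing.
        per_class = per_class + torch.clamp(
            torch.linalg.matrix_norm(Xc, ord="nuc"), min=margin)
    return per_class - total

# Typical use, added to the standard objective with a small weight lam:
# loss = cross_entropy(logits, labels) + lam * ole_loss(feats, labels)
```

Because the term operates on whole mini-batch feature matrices, no pair or triplet mining is needed.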
Abstract. Since the seminal work of Mann and Picard in 1994, the standard way to build high dynamic range (HDR) images from regular cameras has been to combine a reduced number of photographs captured with different exposure times. The algorithms proposed in the literature differ in the strategy used to combine these frames. Several experimental studies comparing their performances have been reported, showing in particular that a maximum likelihood estimation yields the best results in terms of mean squared error. However, no theoretical study aiming at establishing the performance limits of the HDR estimation problem has been conducted. Another common aspect of all HDR estimation approaches is that they discard saturated values. In this paper, we address these two issues. More precisely, we derive theoretical bounds for the HDR estimation problem, and we show that, even with a small number of photographs, the maximum likelihood estimator performs extremely close to these bounds. As a second contribution, we propose a general strategy to integrate the information provided by saturated pixels into the estimation process, hence improving the estimation results. Finally, we analyze the sensitivity of the HDR estimation process to camera parameters, and we show that small errors in the camera calibration process may severely degrade the estimation results.

Key words. high dynamic range imaging, irradiance estimation, exposure bracketing, multi-exposure fusion, camera acquisition model, noise modeling, censored data, exposure saturation, Cramér-Rao lower bound.

1. Introduction. The human eye has the ability to capture scenes of very high dynamic range, retaining details in both dark and bright regions. This is not the case for current standard digital cameras. Indeed, the limited capacity of the sensor cells makes it impossible to record the irradiance from very bright regions for long exposures. Pixels saturate, incurring information loss in the form of censored data. On the other hand, if the exposure time is reduced in order to avoid saturation, very few photons will be captured in the dark regions and the result will be masked by the acquisition noise. Therefore, the result of a single-shot picture of a high dynamic range scene, taken with a regular digital camera, contains pixels which are either overexposed or too noisy. High dynamic range imaging (HDR for short) is the field of imaging that seeks to accurately capture and represent scenes with the largest possible irradiance range. The representation problem of how to display an HDR image or irradiance map in a lower range image (for computer monitors or photographic prints) while retaining localized contrast, known as tone mapping, will not be addressed here. Due to technological and physical limitations of current optical sensors, nowadays the most common way to reach high irradiance dynamic ranges is by combining multiple low dynamic range photographs, acquired with different exposure times τ_1, τ_2, ..., τ_T. Indeed, for a given irradiance C and expos...
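For concreteness, the following Python sketch shows the standard maximum-likelihood fusion that the paper takes as its baseline, under a pure photon-counting (Poisson) model with unit gain and no read noise, and with saturated samples simply discarded; the paper's contribution of exploiting the censored (saturated) data is not reproduced here, and the saturation threshold is an assumption.

```python
# Sketch of baseline maximum-likelihood HDR fusion with saturation masking.
import numpy as np

def hdr_ml_estimate(frames, exposures, sat_level=0.98):
    """frames: (T, H, W) raw linear images scaled to [0, 1]; exposures: (T,) times."""
    frames = np.asarray(frames, dtype=float)
    taus = np.asarray(exposures, dtype=float).reshape(-1, 1, 1)
    unsat = frames < sat_level                      # discard censored (saturated) samples
    num = np.where(unsat, frames, 0.0).sum(axis=0)  # total collected signal
    den = np.where(unsat, taus, 0.0).sum(axis=0)    # total useful exposure time
    irradiance = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    # Pixels saturated in every frame get no estimate here (zero); a censored-data
    # approach would instead turn such samples into an informative lower bound.
    return irradiance
```

Under the Poisson model, dividing the summed counts by the summed exposure times is exactly the maximum-likelihood irradiance estimate for the unsaturated samples.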