Phase unwrapping is a critical step in synthetic aperture radar interferometry (InSAR) data processing chains. In almost all phase unwrapping methods, estimating the phase gradient under the phase continuity assumption (PGE-PCA) is an essential step. However, the phase continuity assumption is not always satisfied, owing to noise and abrupt terrain changes, so it is difficult to obtain the correct phase gradient. In this paper, we propose a robust least squares phase unwrapping method for InSAR based on a phase gradient estimation network with an encoder–decoder architecture (PGENet). Trained on a large number of wrapped phase images with topographic features and different noise levels, the deep convolutional neural network learns global phase features and the phase gradient between adjacent pixels, and therefore predicts a more accurate and robust phase gradient than PGE-PCA. To obtain the unwrapped phase, we use a traditional least squares solver to minimize the difference between the gradient predicted by PGENet and the gradient of the unwrapped phase. Experiments on simulated and real InSAR data demonstrate that the proposed method outperforms five other well-established phase unwrapping methods and is robust to noise.
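The least squares step described above can be sketched with the classical DCT-based solver for the discrete Poisson equation (Ghiglia–Romero). In this minimal sketch the gradients come from the conventional wrapped-difference rule (PGE-PCA) rather than from PGENet, whose trained weights are not available here; swapping in network-predicted `dx`, `dy` is the paper's contribution:

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    # wrap values into (-pi, pi]
    return (p + np.pi) % (2 * np.pi) - np.pi

def ls_unwrap(psi):
    # phase gradients from wrapped differences (the PGE-PCA step
    # that PGENet replaces with a learned estimate)
    dx = wrap(np.diff(psi, axis=1))
    dy = wrap(np.diff(psi, axis=0))
    # divergence of the estimated gradient field
    rho = np.zeros_like(psi)
    rho[:, :-1] += dx
    rho[:, 1:] -= dx
    rho[:-1, :] += dy
    rho[1:, :] -= dy
    # solve the discrete Poisson equation (Neumann boundaries) via DCT
    M, N = psi.shape
    dct_rho = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0  # DC term is arbitrary (phase known up to a constant)
    return idctn(dct_rho / denom, norm='ortho')
```

For a noise-free phase whose true gradient stays below pi per pixel, this recovers the absolute phase up to a constant offset.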
Phase filtering is a key issue in interferometric synthetic aperture radar (InSAR) applications, such as deformation monitoring and topographic mapping, and the accuracy of the derived deformation and terrain height depends heavily on its quality. Researchers are committed to continuously improving the accuracy and efficiency of phase filtering. Inspired by the successful application of neural networks to SAR image denoising, in this paper we propose a deep learning-based phase filtering method that efficiently removes noise from the interferometric phase. In this method, the real and imaginary parts of the interferometric phase are filtered using a scale-recurrent network composed of three single-scale subnetworks based on the encoder-decoder architecture. Because RNN units connect the three different-scale subnetworks and pass current state information between them, the network can exploit the global structural phase information contained in the different-scale feature maps. The encoder extracts the phase features, while the decoder restores detailed information from the encoded feature maps and brings the output back to the size of the input image. Experiments on simulated and real InSAR data show, through qualitative and quantitative comparisons, that the proposed method is superior to three widely used phase filtering methods. In addition, on the same simulated data set, the overall performance of the proposed method is better than that of another deep learning-based method (DeepInSAR). The runtime of the proposed method is only about 0.043 s for a 1024×1024-pixel image, a significant computational advantage in practical applications that require real-time processing.
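The real/imaginary decomposition described above can be illustrated with a toy stand-in: here a simple box filter (scipy's `uniform_filter`) plays the role of the scale-recurrent network, which is the paper's actual contribution; only the decomposition-and-recombination pattern is the point of the sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def filter_phase(noisy_phase, size=5):
    # decompose the wrapped phase into real and imaginary parts,
    # filter each part separately (a box filter stands in for the
    # scale-recurrent network), then recombine with arctan2 so the
    # result stays a valid wrapped phase
    z = np.exp(1j * noisy_phase)
    re = uniform_filter(z.real, size)
    im = uniform_filter(z.imag, size)
    return np.arctan2(im, re)
```

Filtering the real and imaginary parts instead of the raw phase avoids artifacts at the ±π wrap discontinuities, which is why the decomposition is used in the first place.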
Because of the sparsity of the three-dimensional (3D) imaging scene, compressed sensing (CS) algorithms can be used for linear array synthetic aperture radar (LASAR) 3D sparse imaging. However, CS algorithms usually achieve high-quality sparse imaging at the expense of computational efficiency. To solve this problem, a fast Bayesian compressed sensing algorithm via the relevance vector machine (FBCS–RVM) is proposed in this paper. The proposed method maximizes the marginal likelihood function under the RVM framework to obtain the optimal hyper-parameters; the scattering units corresponding to the non-zero optimal hyper-parameters are extracted as the target-areas in the imaging scene. Then, based on the target-areas, we simplify the measurement matrix and conduct sparse imaging. However, under low signal-to-noise ratio (SNR), low sampling rate, or high sparsity, the target-areas cannot always be extracted accurately: they may contain elements whose scattering coefficients are very small, close to zero relative to the other elements. Such elements can make the diagonal matrix singular and non-invertible, so the scattering coefficients cannot be estimated correctly. To solve this problem, the inverse of the singular matrix is replaced with the generalized inverse obtained by the truncated singular value decomposition (TSVD) algorithm so that the scattering coefficients can be estimated correctly. Based on the rank of the singular matrix, the elements with small scattering coefficients are identified and eliminated to obtain more accurate target-areas. Both simulation and experimental results show that the proposed method improves the computational efficiency and imaging quality of LASAR 3D imaging compared with state-of-the-art CS-based methods.
Multichannel signal processing in azimuth is a vital technique for enabling a wide-swath Synthetic Aperture Radar (SAR) with high azimuth resolution. However, when the multichannel high-resolution and wide-swath (HRWS) SAR system does not satisfy the uniform sampling condition, azimuth nonuniform sampling leads to image ambiguity. In this paper, to suppress the azimuth image ambiguity, we propose a novel unambiguous reconstruction method based on image fusion. In this reconstruction, the Back Projection (BP) algorithm is first used for SAR imaging to obtain the designed sub-images. Then, the reconstruction expression is derived as the summation of the sub-images weighted by the interpolation coefficients. This method integrates reconstruction into the imaging process, and the image fusion keeps the procedure simple. In addition, the interpolation period, which affects reconstruction image quality and efficiency, is further analyzed. Moreover, because a curved-trajectory platform poses additional challenges for unambiguous reconstruction, the performance of the proposed method on a curved-trajectory platform is studied. Finally, experimental results verify the effectiveness of the proposed method for ambiguity suppression and demonstrate its applicability to the curved trajectory.

Index Terms: High resolution and wide swath, nonuniform sampling, signal reconstruction, synthetic aperture radar (SAR).
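Once the BP sub-images and the interpolation coefficients are available, the fusion step described above reduces to a weighted sum. Both inputs are placeholders in this sketch, since producing them requires the full BP imaging chain and the coefficient derivation from the paper:

```python
import numpy as np

def fuse_subimages(sub_images, weights):
    # unambiguous image as the weighted sum of BP sub-images;
    # `weights` stand in for the interpolation coefficients
    # derived in the paper (hypothetical values in this sketch)
    sub = np.asarray(sub_images)          # shape: (K, rows, cols)
    w = np.asarray(weights).reshape(-1, 1, 1)
    return np.sum(w * sub, axis=0)
```

Because the sum is linear and per-pixel, the fusion adds negligible cost on top of the BP imaging itself, which is what makes folding reconstruction into the imaging process attractive.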
Background: Endplate morphology is considered one of the factors influencing cage subsidence after lumbar interbody fusion (LIF). Previous radiographic evaluations of the endplate have mostly used sagittal X-ray or MRI. However, there are few studies on CT evaluation of the endplate and intervertebral space (IVS), especially of their coronal morphology and its influence on subsidence and fusion after LIF. We aimed to measure and classify the shapes of the endplate and IVS using coronal CT imaging and to evaluate the radiographic and clinical outcomes of different endplate/IVS shapes following oblique lateral lumbar interbody fusion (OLIF).

Methods: A total of 137 patients (average age 59.1 years; 75 males and 62 females) who underwent L4-5 OLIF combined with anterolateral fixation from June 2018 to June 2020 were included. The endplate concavity depth (ECD) was measured on preoperative coronal CT images. According to ECD, the endplate was classified as flat (< 2 mm), shallow (2–4 mm), or deep (> 4 mm). The L4-5 IVS was further classified according to endplate type. Disc height (DH), DH changes, subsidence rate, fusion rate, and the Oswestry Disability Index (ODI) for the different endplate/IVS shapes were evaluated during 1-year follow-up.

Results: The ECD of the L4 inferior endplate (IEP) was significantly deeper than that of the L5 superior endplate (SEP) (4.2 ± 1.1 vs 1.6 ± 0.8 mm, P < 0.01). Four types of L4-5 IVS were identified: shallow-shallow (16, 11.7%), shallow-flat (45, 32.9%), deep-shallow (32, 23.4%), and deep-flat (44, 32.1%). A total of 45 (32.9%) cases of cage subsidence were observed. Only one (6.3%) subsidence event occurred in the shallow-shallow group, significantly fewer than in the other three groups (19 shallow-flat, 6 deep-shallow, and 19 deep-flat) (P < 0.05). Meanwhile, the shallow-shallow group had the highest fusion rate (15, 93.8%) and the highest rate of reaching the minimal clinically important difference (MCID) in ODI among the four types. For a single endplate, the shape of the L4 IEP was the main factor influencing the final interbody fusion rate, and a shallow L4 IEP facilitated fusion (OR = 2.85, p = 0.03). On the other hand, a flat L5 SEP was the main risk factor for cage subsidence (OR = 4.36, p < 0.01).

Conclusion: The L4-5 IVS is asymmetrical on the coronal CT view and tends to be fornix-above and flat-down. The shallow-shallow IVS has the lowest subsidence rate and the best fusion outcome, possibly because it matches both the upper and lower cage-endplate interfaces relatively well. These findings provide a basis for further improvements in the design of OLIF cages.