Abstract: In this paper, to overcome the limitations of traditional linear models for synthetic aperture radar (SAR) image despeckling, we propose a novel deep learning approach that learns a non-linear end-to-end mapping between noisy and clean SAR images with a dilated residual network (SAR-DRN). SAR-DRN is based on dilated convolutions, which enlarge the receptive field while maintaining the filter size and layer depth, yielding a lightweight structure. In addition, skip connections and a residual learning strategy are added to the despeckling model to preserve image details and mitigate the vanishing gradient problem. In both quantitative and visual assessments, the proposed method outperforms the traditional and state-of-the-art despeckling methods, especially under strong speckle noise.
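The receptive-field claim above can be made concrete with a short sketch (not the authors' code; the 7-layer dilation schedule below is a hypothetical example): for stride-1 convolutions, each layer with kernel size $k$ and dilation $d$ adds $d(k-1)$ to the receptive field.

```python
# Illustrative sketch: receptive-field growth of stacked dilated convolutions,
# showing why dilation enlarges the receptive field without increasing the
# filter size or the number of layers.

def receptive_field(dilations, kernel_size=3):
    """Receptive field of a stack of stride-1 dilated convolutions."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)  # each layer adds d*(k-1) pixels
    return rf

# A hypothetical 7-layer dilation schedule (1-2-3-4-3-2-1) versus plain 3x3 layers:
print(receptive_field([1, 2, 3, 4, 3, 2, 1]))  # 33
print(receptive_field([1] * 7))                # 15
```

With the same depth and 3x3 filters, the dilated stack more than doubles the receptive field, which is the lightweight-structure trade-off the abstract refers to.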
We investigate the statistical properties of the kinetic ($\epsilon_u$) and thermal ($\epsilon_\theta$) energy dissipation rates in two-dimensional (2-D) turbulent Rayleigh–Bénard (RB) convection. Direct numerical simulations were carried out in a box with unit aspect ratio over the Rayleigh number range $10^{6}\leqslant Ra\leqslant 10^{10}$ for Prandtl numbers $Pr=0.7$ and $5.3$. The probability density functions (PDFs) of both dissipation rates are found to deviate significantly from a log-normal distribution. The PDF tails are well described by a stretched exponential function and become broader for higher Rayleigh number and lower Prandtl number, indicating an increasing degree of small-scale intermittency with increasing Reynolds number. Our results show that the ensemble averages $\langle \epsilon_u\rangle_{V,t}$ and $\langle \epsilon_\theta\rangle_{V,t}$ scale as $Ra^{-0.18\sim -0.20}$, in excellent agreement with the scaling estimated from the two global exact relations for the dissipation rates. By separating the bulk and boundary-layer contributions to the total dissipation, our results further reveal that $\langle \epsilon_u\rangle_{V,t}$ and $\langle \epsilon_\theta\rangle_{V,t}$ are both dominated by the boundary layers, corresponding to regimes $I_l$ and $I_u$ in the Grossmann–Lohse (GL) theory (J. Fluid Mech., vol. 407, 2000, pp. 27–56). To include the effects of thermal plumes, a plume–background partition is also considered, and $\langle \epsilon_\theta\rangle_{V,t}$ is found to be plume-dominated. Moreover, the boundary-layer/plume contributions scale as predicted by the GL theory, whereas deviations from the GL predictions are observed for the bulk/background contributions. Possible reasons for these deviations are discussed.
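The two global exact relations referred to above are the standard RB balances (a sketch, with $L$ the cell height, $\Delta$ the applied temperature difference, and $\nu$, $\kappa$ the kinematic viscosity and thermal diffusivity):

```latex
\langle \epsilon_u \rangle_{V,t}
  = \frac{\nu^{3}}{L^{4}}\,(Nu-1)\,Ra\,Pr^{-2},
\qquad
\langle \epsilon_\theta \rangle_{V,t}
  = \kappa\,\frac{\Delta^{2}}{L^{2}}\,Nu .
```

In free-fall units both averages reduce to $\sim Nu\,Ra^{-1/2}Pr^{-1/2}$, so a Nusselt-number scaling of roughly $Nu \sim Ra^{0.30\text{--}0.32}$ reproduces the observed $Ra^{-0.18\sim -0.20}$ behaviour.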
Graph representation learning has been extensively studied in recent years, and sampling is a critical component of it. Prior work usually focuses on sampling positive node pairs, while the strategy for negative sampling has been left insufficiently explored. To bridge the gap, we systematically analyze the role of negative sampling from the perspectives of both objective and risk, theoretically demonstrating that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance. To the best of our knowledge, we are the first to derive the theory and quantify that a good negative sampling distribution is $p_n(u|v) \propto p_d(u|v)^{\alpha}$, $0 < \alpha < 1$. Guided by this theory, we propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via Metropolis-Hastings. We evaluate our method on 5 datasets covering extensive downstream graph learning tasks, including link prediction, node classification, and recommendation, across a total of 19 experimental settings. These comprehensive experimental results demonstrate the robustness and superiority of MCNS.
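A minimal sketch of the Metropolis-Hastings idea (not the authors' MCNS implementation): draw negatives from the unnormalized target $p_d(u|v)^{\alpha}$ with a symmetric uniform proposal, so only ratios of $p_d^{\alpha}$ are needed and the normalizing constant never has to be computed. Here `p_d` is a hypothetical, given positive distribution over candidate nodes.

```python
import random

def mh_negative_sample(p_d, alpha=0.75, n_samples=1000, burn_in=100, seed=0):
    """Draw negative samples from p_n ∝ p_d^alpha via Metropolis-Hastings.

    p_d: list of positive (nonzero) probabilities over candidate nodes.
    """
    rng = random.Random(seed)
    n = len(p_d)
    u = rng.randrange(n)                       # initial state of the chain
    samples = []
    for step in range(burn_in + n_samples):
        u_new = rng.randrange(n)               # symmetric uniform proposal
        # Acceptance ratio uses only the unnormalized target p_d^alpha.
        accept = (p_d[u_new] ** alpha) / (p_d[u] ** alpha)
        if rng.random() < accept:
            u = u_new
        if step >= burn_in:
            samples.append(u)
    return samples

# Toy usage: three candidate nodes with skewed positive probabilities.
p_d = [0.7, 0.2, 0.1]
negs = mh_negative_sample(p_d, alpha=0.75)
```

Because $0 < \alpha < 1$ flattens the distribution, the sampled negatives still favor high-$p_d$ nodes but less sharply than $p_d$ itself.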
CCS CONCEPTS: • Mathematics of computing → Graph algorithms; • Computing methodologies → Learning latent representations.