In an era when big data are becoming the norm, the concern is less with the quantity of data than with its quality and completeness. In many disciplines, data are collected from heterogeneous sources, resulting in multi-view or multi-modal datasets. The missing data problem has been challenging to address in multi-view data analysis. In particular, when certain samples miss an entire view of data, the missing view problem arises. Classic multiple imputation or matrix completion methods are hardly effective here, because there is no information in the specific view on which to base imputation for such samples. The commonly used expedient of removing samples with a missing view can dramatically reduce the sample size, thus diminishing the statistical power of any subsequent analysis. In this paper, we propose a novel approach for view imputation via generative adversarial networks (GANs), which we name VIGAN. The approach first treats each view as a separate domain and identifies domain-to-domain mappings via a GAN using randomly sampled data from each view, and then employs a multi-modal denoising autoencoder (DAE) to reconstruct the missing view from the GAN outputs based on paired data across the views. By optimizing the GAN and DAE jointly, our model integrates the knowledge of domain mappings and view correspondences to effectively recover the missing view. Empirical results on benchmark datasets validate the VIGAN approach against the state of the art. An evaluation of VIGAN in a genetic study of substance use disorders further demonstrates the effectiveness and usability of the approach in the life sciences.
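The DAE half of the pipeline described above can be illustrated at a toy scale. The sketch below is hypothetical code, not the authors' implementation: it trains only a small multi-modal denoising autoencoder in NumPy, where two paired views are concatenated, the second view is randomly zeroed during training to mimic a missing view, and the network learns to reconstruct the full pair. The GAN-based domain mapping and the joint GAN–DAE optimization are omitted, and all sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired two-view data: view B is a noisy nonlinear function of view A.
n, da, db, h = 200, 5, 4, 16
A = rng.normal(size=(n, da))
B = np.tanh(A @ rng.normal(size=(da, db))) + 0.05 * rng.normal(size=(n, db))
X = np.hstack([A, B])                      # paired multi-view input
d = da + db

# One-hidden-layer multi-modal denoising autoencoder.
W1 = 0.1 * rng.normal(size=(d, h)); b1 = np.zeros(h)
W2 = 0.1 * rng.normal(size=(h, d)); b2 = np.zeros(d)

def forward(Xc):
    H = np.tanh(Xc @ W1 + b1)
    return H, H @ W2 + b2

lr = 0.05
losses = []
for step in range(500):
    # "Denoising" corruption: zero out the B view for half the samples,
    # mimicking samples whose B view is entirely missing.
    Xc = X.copy()
    miss = rng.random(n) < 0.5
    Xc[miss, da:] = 0.0
    H, Xhat = forward(Xc)
    err = Xhat - X                         # reconstruct the clean, full input
    losses.append(float(np.mean(err ** 2)))
    # Plain backpropagation / gradient descent.
    gW2 = H.T @ err / n; gb2 = err.mean(0)
    dH = (err @ W2.T) * (1.0 - H ** 2)
    gW1 = Xc.T @ dH / n; gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Impute view B for samples that arrive with it missing.
_, Xhat_miss = forward(np.hstack([A, np.zeros((n, db))]))
B_imputed = Xhat_miss[:, da:]
```

After training, feeding a sample with its B view zeroed yields an imputed B from the decoder output; in the full VIGAN pipeline that decoder input would instead be the GAN's domain-translated estimate rather than zeros.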
Let $\mathcal{L} = \mathrm{div}(A(x)\nabla)$ be a uniformly elliptic operator in divergence form in a bounded open subset $\Omega$ of $\mathbb{R}^n$. We study the effect of the operator $\mathcal{L}$ on the existence and nonexistence of positive solutions of the nonlocal Brezis–Nirenberg problem
$$\begin{cases} (-\mathcal{L})^s u = u^{\frac{n+2s}{n-2s}} + \lambda u & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega, \end{cases}$$
where $(-\mathcal{L})^s$ denotes the fractional power of $-\mathcal{L}$ with zero Dirichlet boundary values on $\partial\Omega$, $0 < s < 1$, and $\lambda$ is a real parameter. By assuming $A(x) \geq A(x_0)$ for all $x \in \overline{\Omega}$ and $A(x) \leq A(x_0) + |x - x_0|^{\sigma} I_n$ near some point $x_0 \in \overline{\Omega}$, we prove existence theorems for any $\lambda \in (0, \lambda_{1,s}(-\mathcal{L}))$, where $\lambda_{1,s}(-\mathcal{L})$ denotes the first Dirichlet eigenvalue of $(-\mathcal{L})^s$. Our existence result holds for $\sigma > 2s$ and $n \geq 4s$ in the interior case ($x_0 \in \Omega$) and for $\sigma > \frac{2s(n-2s)}{n-4s}$ and $n > 4s$ in the boundary case ($x_0 \in \partial\Omega$). Nonexistence for star-shaped domains is obtained for any $\lambda \leq 0$.
We study layered solutions in a one-dimensional version of the scalar Ginzburg-Landau equation that involves a mixture of a second spatial derivative and a fractional half-derivative, together with a periodically modulated nonlinearity. This equation appears as the Euler-Lagrange equation of a suitably renormalized fractional Ginzburg-Landau energy with a double-well potential that is multiplied by a 1-periodically varying nonnegative factor $ g(x) $ with $ \int_0^1 \frac{1}{g(x)} dx < \infty $. A priori this energy is not bounded below due to the presence of a nonlocal term in the energy. Nevertheless, through a careful analysis of a minimizing sequence we prove existence of global energy minimizers that connect the two wells at infinity. These minimizers are shown to be the classical solutions of the associated nonlocal Ginzburg-Landau type equation.
We propose a tensor-based approach to analyzing multi-dimensional data describing sample subjects. It simultaneously discovers patterns in features and reveals the past temporal points that impact current outcomes. The model coefficient, a k-mode tensor, is decomposed into a sum of k tensors of the same dimension. To accomplish feature selection, we introduce the tensor "latent $L_{F,1}$ norm" as a grouped penalty in our formulation. Furthermore, the proposed model accounts for within-subject correlations by developing a tensor-based quadratic inference function. We provide an asymptotic analysis of our model as the sample size approaches infinity. To solve the corresponding optimization problem, we develop a linearized block coordinate descent algorithm and prove its convergence for a fixed sample size. Computational results on synthetic datasets and real-life fMRI and EEG problems demonstrate the superior performance of the proposed approach over existing techniques.
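The flavor of a linearized step combined with a grouped sparsity penalty can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: a plain proximal-gradient (ISTA) loop replaces linearized block coordinate descent, an ordinary least-squares loss replaces the quadratic inference function, and a row-wise group lasso on a 2-mode coefficient replaces the latent $L_{F,1}$ norm; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: W is a (features x time-lags) coefficient matrix; each row
# is a feature group. Only the first 3 features truly matter.
p, t, n = 10, 4, 300
W_true = np.zeros((p, t))
W_true[:3] = rng.normal(size=(3, t))
X = rng.normal(size=(n, p, t))
y = np.einsum('npt,pt->n', X, W_true) + 0.1 * rng.normal(size=n)

lam = 0.1
# Lipschitz constant of the gradient of the least-squares loss.
Xmat = X.reshape(n, -1)
L = np.linalg.eigvalsh(Xmat.T @ Xmat / n).max()

def group_prox(W, thr):
    """Row-wise soft-thresholding: shrink entire feature groups to zero."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thr / np.maximum(norms, 1e-12))
    return W * scale

W = np.zeros((p, t))
for _ in range(300):
    r = np.einsum('npt,pt->n', X, W) - y          # residuals
    grad = np.einsum('n,npt->pt', r, X) / n       # least-squares gradient
    W = group_prox(W - grad / L, lam / L)         # linearized (proximal) step
```

The grouped prox zeroes out whole rows at once, which is the mechanism by which such penalties select entire features rather than individual coefficients.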
We study the generalized point-vortex problem and the Gross-Pitaevskii equation on a surface of revolution. We find rotating periodic solutions to the generalized point-vortex problem that have two rings of n equally spaced vortices with degrees ±1. In particular, we prove the existence of such solutions when the surface is longitudinally symmetric. We then seek a rotating solution to the Gross-Pitaevskii equation whose vortices follow those of the point-vortex flow for ε sufficiently small.