A central topic in functional data analysis is how to design an optimal decision rule, based on training samples, to classify a data function. We study the optimal classification problem when the data functions are Gaussian processes. Sharp convergence rates for the minimax excess misclassification risk are derived in both settings where the data functions are fully observed and where they are discretely observed. We explore two easily implementable classifiers, based on discriminant analysis and on deep neural networks respectively, both of which are proven to achieve optimality in the Gaussian setting. Our deep neural network classifier is new to the literature and demonstrates outstanding performance even when the data functions are non-Gaussian. In the case of discretely observed data, we discover a novel critical sampling frequency that governs the sharp convergence rates. The proposed classifiers perform favorably in finite samples, as we demonstrate through comparisons with other functional classifiers in simulations and in one real-data application.
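As a concrete illustration of the discriminant-analysis route, the following minimal Python sketch classifies discretized curves by projecting them onto estimated functional principal components and applying quadratic discriminant analysis to the scores. The grid size, number of components, and toy Gaussian-style data are our own illustrative choices, not the paper's construction.

```python
# A minimal sketch of a discriminant-analysis functional classifier:
# FPCA-style projection of discretized curves, then QDA on the scores.
# All sizes and the toy data are assumptions made for illustration.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

def fpca_scores(X, n_components=5):
    """Project curves observed on an n x J grid onto the leading
    eigenfunctions of the pooled sample covariance."""
    Xc = X - X.mean(axis=0)
    # Eigenvectors of the sample covariance via SVD of the centered data.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:n_components].T        # J x K discretized eigenfunctions
    return Xc @ basis, X.mean(axis=0), basis

# Toy data: n curves observed at J grid points, two classes.
rng = np.random.default_rng(0)
n, J = 200, 100
t = np.linspace(0.0, 1.0, J)
y = rng.integers(0, 2, size=n)                        # class labels
X = np.sin(2 * np.pi * t) * y[:, None] + rng.standard_normal((n, J))

scores, mean, basis = fpca_scores(X)
clf = QuadraticDiscriminantAnalysis().fit(scores, y)  # QDA on FPCA scores
X_new = rng.standard_normal((1, J))                   # a future data function
print(clf.predict((X_new - mean) @ basis))            # predicted class label
```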
In this work, we propose a deep neural network-based method for nonparametric regression with functional data. The proposed estimators are based on sparsely connected deep neural networks with the rectified linear unit (ReLU) activation function. We derive the convergence rate of the proposed deep neural network estimator in terms of the empirical norm. Through Monte Carlo simulation studies, we examine the finite-sample performance of the proposed method. Finally, the proposed method is applied to analyse positron emission tomography images of patients with Alzheimer's disease obtained from the Alzheimer's Disease Neuroimaging Initiative database.
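A minimal sketch of this kind of estimator, under assumptions of our own: the paper's theory concerns sparsely connected ReLU networks, whereas this toy example trains a small dense ReLU network on discretized curves and reports the empirical-norm training error. All network sizes, hyperparameters, and the synthetic regression function are illustrative.

```python
# Hedged sketch: scalar-on-function regression with a deep ReLU network,
# minimizing the empirical L2 risk. Dense layers stand in for the sparsely
# connected architecture analyzed in the paper.
import torch
import torch.nn as nn

n, J = 500, 50                                   # curves and grid points
t = torch.linspace(0.0, 1.0, J)
X = torch.randn(n, J).cumsum(dim=1) / J**0.5     # rough random-walk "curves"
y = (X * torch.sin(2 * torch.pi * t)).mean(dim=1, keepdim=True)  # toy target

net = nn.Sequential(                             # deep ReLU network f: R^J -> R
    nn.Linear(J, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):                            # full-batch gradient descent
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)     # empirical squared-error risk
    loss.backward()
    opt.step()
print(float(loss))                               # empirical-norm training error
```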
We propose a new approach, called functional deep neural network (FDNN), for classifying multidimensional functional data. Specifically, a deep neural network is trained on the principal components of the training data and is then used to predict the class label of a future data function. Unlike popular functional discriminant analysis approaches, which only work for one-dimensional functional data, the proposed FDNN approach applies to general non-Gaussian multidimensional functional data. Moreover, when the log density ratio possesses a locally connected functional modular structure, we show that FDNN achieves minimax optimality. The superiority of our approach is demonstrated on both simulated and real-world datasets.
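The two-step structure of FDNN can be sketched directly: extract principal component scores from the observed functions, then train a deep classifier on those scores. The sketch below uses two-dimensional functional data on a small grid; the grid size, score dimension, and network width are illustrative assumptions, not the paper's choices.

```python
# Hedged sketch of the FDNN recipe: PCA scores of (multidimensional)
# functional data feed a deep network that predicts the class label.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n, H, W = 300, 16, 16                        # two-dimensional functional data
y = rng.integers(0, 2, size=n)
X = rng.standard_normal((n, H, W)) + y[:, None, None]  # class-shifted surfaces
X_flat = X.reshape(n, H * W)                 # vectorize each observed surface

pca = PCA(n_components=10).fit(X_flat)       # principal components of training data
scores = pca.transform(X_flat)               # per-function score vectors
fdnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                     random_state=0).fit(scores, y)    # deep net on the scores
X_new = rng.standard_normal((1, H, W)) + 1.0           # a future data function
print(fdnn.predict(pca.transform(X_new.reshape(1, -1))))
```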
Using a three-dimensional magnetohydrodynamic model, we simulate magnetic reconnection in a single current sheet. We assume a finite guide field, a random perturbation on the velocity field, and uniform resistivity. Our model enhances the reconnection rate relative to the classical Sweet-Parker model in the same configuration. The efficiency of magnetic energy conversion is increased by interactions among the multiple tearing layers coexisting in the global current sheet. This interaction, which forms a positive-feedback system, arises from coupling of the inflow and outflow regions in different layers across the current sheet. The coupling accelerates the elementary reconnection events, thereby enhancing the global reconnection rate. The reconnection establishes flux tubes along each tearing layer. Slow-mode shocks gradually form along the outer boundaries of these tubes, further accelerating the magnetic energy conversion. Such a positive-feedback system is absent in two-dimensional simulations, in three-dimensional reconnection without a guide field, and in reconnection under a single perturbation mode. We refer to our model as the "shock-evoking positive-feedback" model.
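For reference, the uniform-resistivity induction equation underlying resistive MHD simulations, and the classical Sweet-Parker rate scaling that the enhanced reconnection rate is benchmarked against, are the standard textbook relations below (not reproduced from the paper):

```latex
% Resistive MHD induction equation with uniform resistivity \eta,
% and the Sweet--Parker reconnection-rate scaling in the Lundquist number S.
\begin{align}
  \frac{\partial \mathbf{B}}{\partial t}
    &= \nabla \times (\mathbf{v} \times \mathbf{B}) + \eta \nabla^{2} \mathbf{B}, \\
  \mathcal{R}_{\mathrm{SP}} &\sim S^{-1/2},
  \qquad S = \frac{L\, v_{A}}{\eta},
\end{align}
```

where $L$ is the current-sheet length and $v_{A}$ the Alfvén speed.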