2024
DOI: 10.5705/ss.202022.0057

Optimal Classification for Functional Data

Abstract: A central topic in functional data analysis is how to design an optimal decision rule, based on training samples, to classify a data function. We study the optimal classification problem when the data functions are Gaussian processes. Sharp convergence rates for the minimax excess misclassification risk are derived in both settings where data functions are fully observed and discretely observed. We explore two easily implementable classifiers, based on discriminant analysis and a deep neural network, respectively, which…
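The truncated abstract only names the two classifiers. As a rough illustration of the discriminant-analysis route, here is a minimal sketch that projects curves onto estimated principal components and applies quadratic discriminant analysis to the scores; every modeling choice below (synthetic data, basis, number of components) is an assumption for illustration, not a detail from the paper.

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
n, m, J = 200, 100, 5           # curves, grid points, retained components
t = np.linspace(0, 1, m)

# Synthetic two-class curves observed on a common grid.
y = rng.integers(0, 2, size=n)
X = np.sin(2 * np.pi * np.outer(np.ones(n), t)) * (1 + y[:, None]) \
    + rng.normal(scale=0.5, size=(n, m))

# Project the centered curves onto the leading principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:J].T          # n x J matrix of projection scores

# Quadratic discriminant analysis on the scores, in the spirit of a
# discriminant-analysis-based functional classifier.
clf = QuadraticDiscriminantAnalysis().fit(scores, y)
print("training accuracy:", clf.score(scores, y))
```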

Cited by 5 publications (16 citation statements: 0 supporting, 16 mentioning, 0 contrasting) · References 21 publications

Citation statements, ordered by relevance:
“…On the other hand, under functional Gaussian data, researchers have proposed various functional classifiers, including functional quadratic discriminant analysis (FQDA) (Berrendero et al, 2018; Cai & Zhang, 2019a, 2019b; Dai et al, 2017; Delaigle et al, 2012; Delaigle & Hall, 2012, 2013; Galeano et al, 2015; Park et al, 2020; Shin, 2008). Gaussianity leads to a linear or quadratic polynomial $Q^{\ast}$, which can be effectively estimated by FQDA, based on which Wang, Shang, et al (2021) showed that FQDA is minimax optimal. It is still unclear how to design optimal functional classifiers when data are non‐Gaussian, a gap that the present article attempts to close.…”
Section: Minimax Optimality of FDNN (mentioning)
confidence: 99%
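For context, the linear-or-quadratic form of $Q^{\ast}$ under Gaussianity follows from a standard log-likelihood-ratio computation. A sketch in finite dimensions, with class priors $\pi_k$, means $\mu_k$, and covariances $\Sigma_k$ assumed for illustration (none are defined in this excerpt):

```latex
Q^{\ast}(x)
  = \log\frac{\pi_1 f_1(x)}{\pi_0 f_0(x)}
  = \log\frac{\pi_1}{\pi_0}
    + \frac{1}{2}\log\frac{\lvert\Sigma_0\rvert}{\lvert\Sigma_1\rvert}
    - \frac{1}{2}(x-\mu_1)^{\top}\Sigma_1^{-1}(x-\mu_1)
    + \frac{1}{2}(x-\mu_0)^{\top}\Sigma_0^{-1}(x-\mu_0)
```

This is quadratic in $x$ in general and collapses to a linear rule when $\Sigma_0 = \Sigma_1$, which is why discriminant analysis estimates it effectively.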
“…For instance, in either high‐ or infinite‐dimensional Gaussian data, it is well known that the density ratio between two Gaussian population densities has an explicit expression in terms of the mean difference and variance ratio, which in turn drives the sharp rate of the minimax excess misclassification risk (MEMR). More precisely, in high‐dimensional Gaussian data classification, the rate depends on the number of nonzero components of the mean difference vector (Cai & Zhang, 2019a, 2019b); in Gaussian functional data classification, the rate depends on the decay orders of both the mean difference series and the variance ratio series (Wang, Shang, et al, 2021). Nonetheless, in the general non‐Gaussian case, the likelihood ratio does not have an explicit expression; therefore, one cannot simply use mean or variance discrepancy to characterize the sharp rate of MEMR.…”
Section: Minimax Optimality of FDNN (mentioning)
confidence: 99%
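To make the "mean difference series and variance ratio series" concrete: expanding a curve in a common basis $\{\phi_j\}$ with class-$k$ projection scores $\xi_j \sim N(\mu_{kj}, \lambda_{kj})$, independent across $j$ (an assumed simultaneously diagonalized setting, not spelled out in this excerpt), the log density ratio takes the explicit form

```latex
\log\frac{f_1(x)}{f_0(x)}
  = \sum_{j \ge 1} \left[
      \frac{1}{2}\log\frac{\lambda_{0j}}{\lambda_{1j}}
      + \frac{(\xi_j - \mu_{0j})^2}{2\lambda_{0j}}
      - \frac{(\xi_j - \mu_{1j})^2}{2\lambda_{1j}}
    \right]
```

so its behavior, and hence the sharp rate, is governed by how quickly the mean differences $\mu_{0j} - \mu_{1j}$ and the variance ratios $\lambda_{0j}/\lambda_{1j} - 1$ decay in $j$.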
“…Hence, $t_u$ can be viewed as an intrinsic dimension of $g_u$. Structure (6) has been adopted by [21, 20, 25, 14, 17] in multivariate regression using deep learning to address the "curse of dimensionality." Examples of (6) include generalized additive models [9, 16] and the tensor product space ANOVA model [15], among others.…”
Section: Minimax Optimality in High Dimensions (mentioning)
confidence: 99%
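As a concrete instance of the compositional structure (6) (the display itself is not reproduced in this excerpt), the generalized additive model arises from a two-layer composition. A sketch under assumed notation:

```latex
h(\boldsymbol{x}) = g_1\bigl(g_0(\boldsymbol{x})\bigr), \qquad
g_0(\boldsymbol{x}) = \bigl(f_1(x_1), \dots, f_d(x_d)\bigr), \qquad
g_1(z_1, \dots, z_d) = z_1 + \dots + z_d,
```

so that $h(\boldsymbol{x}) = \sum_{j=1}^{d} f_j(x_j)$: each coordinate map of $g_0$ depends on a single input, giving intrinsic dimension $t_0 = 1$ even when $d$ is large.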
“…Hence, $t_u$ can be viewed as an intrinsic dimension of $g_u$. Structure (6) has been adopted by Schmidt‐Hieber (2020), Polson and Ročková (2018), Wang et al (2021), Li et al (2021) and Liu et al (2021) in multivariate regression using deep learning to address the "curse of dimensionality." Examples of (6) include generalized additive models (Hastie & Tibshirani, 1990; Liu et al, 2022) and the tensor product space ANOVA model (Lin, 2000), among others. Specifically, the former corresponds to $g_0(\boldsymbol{x}) = (f_j(x_j))_{j \in \mathcal{J}}$ with $f_1, \dots, f_d$ being univariate smooth functions, $\mathcal{J} \subset \{1, \dots, d\}$ being a set of $d_1$ indexes, and $g_1(z_1, \dots, z_{d_1}) = z_1 + \dots + z_{d_1}$, so that $d_0 = d$, $d_2 = 1$, $t_0 = 1$, $t_1 = d_1$; the latter corresponds to $g_0(\boldsymbol{x}) = (f_{j_1}(x_{j_1})f$…
Section: Deep Neural Network Classifier and Its Minimax Optimality (mentioning)
confidence: 99%
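The additive instance spelled out above is straightforward to mirror in code. A minimal sketch of the composition $h = g_1 \circ g_0$, where the univariate functions $f_j$ and the active index set $\mathcal{J}$ are arbitrary illustrative choices, not taken from the paper:

```python
import numpy as np

d = 5
J = [0, 2, 4]                                    # active coordinates, d1 = 3
f = [np.sin, np.cos, np.tanh, np.exp, np.sqrt]   # univariate smooth f_j

def g0(x):
    """First layer: the componentwise maps (f_j(x_j)) for j in J.

    Each output coordinate depends on a single input coordinate,
    so the intrinsic dimension t0 of g0 is 1.
    """
    return np.array([f[j](x[j]) for j in J])

def g1(z):
    """Second layer: plain summation, so t1 = len(J) and d2 = 1."""
    return z.sum()

def h(x):
    """The composed function h = g1(g0(x)), an additive model in x."""
    return g1(g0(x))

x = np.random.rand(d)   # inputs in [0, 1), so np.sqrt is well defined
print(h(x))
```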