Copulas are full measures of dependence among the components of random vectors. Unlike the marginal and joint distributions, which are directly observable, a copula is a hidden dependence structure that couples the marginals to the joint distribution. This makes the task of proposing a parametric copula model non-trivial, and it is here that a nonparametric estimator can play a significant role. In this paper, we investigate a kernel estimator that is mean square consistent everywhere in the support of the copula function. The kernel estimator is then used to formulate a goodness-of-fit test for parametric copula models.
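As a rough illustration of nonparametric copula estimation (not the specific estimator analyzed in the paper), a copula can be estimated from pseudo-observations: the sketch below computes the empirical copula and a naive Gaussian-kernel-smoothed variant. The bandwidth `h` and the lack of boundary correction are simplifying assumptions.

```python
import numpy as np
from math import erf, sqrt

def _ranks01(x):
    """Pseudo-observations: ranks of x rescaled to (0, 1]."""
    return (np.argsort(np.argsort(x)) + 1) / len(x)

def empirical_copula(u, v, x, y):
    """Empirical copula C_n(u, v): fraction of pseudo-observations
    with both coordinates at or below (u, v)."""
    return np.mean((_ranks01(x) <= u) & (_ranks01(y) <= v))

def kernel_copula(u, v, x, y, h=0.05):
    """Naive kernel-smoothed copula estimate: the indicator functions
    above are replaced by Gaussian CDFs with bandwidth h. The estimator
    studied in the paper may differ (e.g., boundary corrections);
    this is only a sketch."""
    Phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
    return np.mean(Phi((u - _ranks01(x)) / h) * Phi((v - _ranks01(y)) / h))
```

For comonotone data (Y a strictly increasing transform of X), the empirical copula reproduces the Fréchet upper bound min(u, v), which gives a quick sanity check.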
In this paper, the maximal nonlinear conditional correlation of two random vectors X and Y given another random vector Z, denoted by ρ₁(X, Y | Z), is defined as a measure of conditional association that satisfies certain desirable properties. When Z is continuous, a test of the conditional independence of X and Y given Z is constructed from an estimator of a weighted average of the form ∑_{k=1}^{n} f_Z(z_k) ρ₁²(X, Y | Z = z_k), where f_Z is the probability density function of Z and the z_k's are points in the range of Z. Under some conditions, it is shown that the test statistic is asymptotically normal under conditional independence, and that the test is consistent.
The goal of this paper is to provide theorems on convergence rates of
posterior distributions that can be applied to obtain good convergence rates in
the context of density estimation as well as regression. We show how to choose
priors so that the posterior distributions converge at the optimal rate without
prior knowledge of the degree of smoothness of the density function or the
regression function to be estimated.
Comment: Published by the Institute of Mathematical Statistics
(http://www.imstat.org) in the Annals of Statistics
(http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/00905360400000049
Consider a nonlinear partial spline model Y = f(β₀, X) + g₀(T) + ε. This article studies the estimation of β₀ when g₀ is approximated by some graduating function. Some asymptotic results for the estimator of β₀ are derived. In particular, it is shown that β₀ can be estimated at the usual parametric convergence rate without undersmoothing g₀.
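The graduating-function idea can be sketched in a simplified setting: take f(β₀, X) = Xβ₀ linear (the paper treats nonlinear f), approximate g₀ by a finite polynomial basis in T as a stand-in for a spline, and profile the nuisance function out by projecting onto the orthogonal complement of the basis.

```python
import numpy as np

def partial_spline_beta(y, X, t, degree=5):
    """Profile least-squares estimate of beta in Y = X beta + g(T) + eps.
    g is approximated by a degree-`degree` polynomial basis in T (a
    simple stand-in for a spline basis); both Y and the columns of X
    are residualized against that basis, then beta is fit by OLS."""
    # Basis matrix for the nuisance function g
    B = np.vander(t, degree + 1)
    # Orthonormal basis for span(B), used to project it out
    Q, _ = np.linalg.qr(B)
    def resid(M):
        return M - Q @ (Q.T @ M)
    Xr, yr = resid(X), resid(y)
    beta, *_ = np.linalg.lstsq(Xr, yr, rcond=None)
    return beta
```

When X is not strongly confounded with T, the approximation error in g leaks only weakly into the estimate of β₀, which is the intuition behind the paper's result that no undersmoothing of g₀ is needed for the parametric rate.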