2020
DOI: 10.48550/arxiv.2012.14804
Preprint
Kernel Partial Correlation Coefficient -- a Measure of Conditional Dependence

Abstract: In this paper we propose and study a class of simple, nonparametric, yet interpretable measures of conditional dependence between two random variables Y and Z given a third variable X, all taking values in general topological spaces. The population version of any of these nonparametric measures, defined using the theory of reproducing kernel Hilbert spaces (RKHSs), captures the strength of conditional dependence and it is 0 if and only if Y and Z are conditionally independent given X, and 1 if and only if Y is…
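The abstract positions the proposed kernel measure as a nonparametric generalization of classical partial correlation. As a point of reference only (this is the classical linear analogue, not the paper's kernel measure), the sketch below simulates a case where Y and Z are marginally dependent but conditionally independent given X, and checks that the linear partial correlation, computed by correlating the residuals of regressing each variable on X, is near 0 while the marginal correlation is not. All variable names and the simulation setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=n)
Y = X + 0.5 * rng.normal(size=n)   # Y depends on X
Z = X + 0.5 * rng.normal(size=n)   # Z depends on X; Y and Z are independent given X

def partial_corr(y, z, x):
    """Classical linear partial correlation of y and z given x:
    regress each on x, then correlate the residuals."""
    D = np.column_stack([np.ones_like(x), x])      # design matrix with intercept
    ry = y - D @ np.linalg.lstsq(D, y, rcond=None)[0]
    rz = z - D @ np.linalg.lstsq(D, z, rcond=None)[0]
    return np.corrcoef(ry, rz)[0, 1]

print(np.corrcoef(Y, Z)[0, 1])   # large: Y and Z are marginally dependent
print(partial_corr(Y, Z, X))     # near 0: conditionally independent given X
```

Unlike this linear version, the paper's RKHS-based coefficient is 0 if and only if Y and Z are conditionally independent given X, for variables in general topological spaces, not just under linear-Gaussian assumptions.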

Cited by 8 publications (13 citation statements)
References 100 publications (211 reference statements)
“…Conditional independence testing with CE is an information-theoretic method. It is mathematically sound and hence advantageous over other similar tools for testing CI, such as partial correlation, conditional distance correlation [11], kernel-based conditional independence tests [12], the conditional dependence coefficient [13], the generalised covariance measure [14], and kernel partial correlation [15]. Mooij et al. [10] applied a causal structure learning algorithm to find the causal relationship.…”
Section: Discussionmentioning
confidence: 99%
“…The Conditional Independence (CI) test is the basic building block of causal discovery methods. There are many non-parametric methods for such testing, such as conditional distance correlation [11], kernel-based conditional independence tests [12], the conditional dependence coefficient [13], the generalised covariance measure [14], and the basic and kernel partial correlation [15].…”
Section: Introductionmentioning
confidence: 99%
“…Strobl et al (2019) use random Fourier features to approximate kernel computations, and propose a more computationally efficient version of the test of Zhang et al (2012). Other CI measures proposed by Doran et al (2014) and Huang et al (2020) are motivated by the kernel maximum mean discrepancy for two-sample testing (Gretton et al, 2012). In particular, the CI measure introduced by Huang et al (2020) compares whether Y |X, Z and Y |X have the same distribution, and their measure can be viewed as a kernelized version of the CI measure of Azadkia and Chatterjee (2019).…”
Section: Related Workmentioning
confidence: 99%
“…Other CI measures proposed by Doran et al (2014) and Huang et al (2020) are motivated by the kernel maximum mean discrepancy for two-sample testing (Gretton et al, 2012). In particular, the CI measure introduced by Huang et al (2020) compares whether Y |X, Z and Y |X have the same distribution, and their measure can be viewed as a kernelized version of the CI measure of Azadkia and Chatterjee (2019). Recently, Sheng and Sriperumbudur (2019) and Park and Muandet (2020) propose kernel CI measures that are closely connected to the Hilbert-Schmidt independence criterion (Gretton et al, 2005).…”
Section: Related Workmentioning
confidence: 99%
“…In particular, it yields an exactly distribution-free test of independence which is consistent against all fixed alternatives, computable in (near) linear time, and has a simple null distribution theory. Consequently, it has attracted much attention over the past year both in terms of applications and theory (see [7,35,12,2,24,18,11]). Despite its growing popularity, not much is known theoretically about the power of Chatterjee's test of independence beyond consistency against fixed alternatives.…”
Section: Introductionmentioning
confidence: 99%