2016
DOI: 10.1007/s10994-016-5545-0
Swamping and masking in Markov boundary discovery

Abstract: This paper considers the problems of swamping and masking in Markov boundary discovery for a target variable. There are two potential reasons for swamping and masking: one is incorrectness of some conditional independence (CI) tests, and the other is violation of local composition. First, we explain why the incorrectness of CI tests may lead to swamping and masking, analyze how to reduce the incorrectness of CI tests, and build an algorithm called LRH under local composition. For convenience, we integrate the …
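The abstract turns on CI-test-driven Markov boundary discovery, so a minimal grow-shrink sketch may help fix ideas. This is a generic illustration, not the paper's LRH algorithm; the Fisher-z test, the toy linear-Gaussian network, and all variable names are invented for the example:

```python
import numpy as np

def ci_test(data, i, j, cond, z_crit=3.3):
    """Fisher-z partial-correlation CI test.

    Returns True when columns i and j look conditionally independent
    given the columns in cond. An overly strict or loose z_crit is one
    way incorrect CI decisions (and hence swamping/masking) can arise.
    """
    idx = [i, j] + list(cond)
    prec = np.linalg.inv(np.corrcoef(data[:, idx], rowvar=False))
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    stat = abs(z) * np.sqrt(data.shape[0] - len(cond) - 3)
    return stat < z_crit

def grow_shrink_mb(data, target):
    """Grow-shrink Markov boundary search around column `target`."""
    others = [v for v in range(data.shape[1]) if v != target]
    mb = []
    changed = True
    while changed:                       # grow: admit dependent variables
        changed = False
        for v in others:
            if v not in mb and not ci_test(data, target, v, mb):
                mb.append(v)
                changed = True
    for v in list(mb):                   # shrink: drop redundant ones
        if ci_test(data, target, v, [u for u in mb if u != v]):
            mb.remove(v)
    return sorted(mb)

# Toy network: X1 -> T <- X2, T -> X3, X4 pure noise, so MB(T) = {X1, X2, X3}.
rng = np.random.default_rng(0)
n = 5000
x1, x2, x4 = rng.normal(size=(3, n))
t = x1 + x2 + 0.5 * rng.normal(size=n)
x3 = t + 0.5 * rng.normal(size=n)
data = np.column_stack([t, x1, x2, x3, x4])
print(grow_shrink_mb(data, target=0))    # expected Markov boundary: [1, 2, 3]
```

Swamping would show up here as a truly relevant column being dropped in the shrink phase, and masking as a dependent variable never passing the grow test — both driven by the quality of each individual CI decision.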

Cited by 14 publications (10 citation statements)
References 20 publications (39 reference statements)
“…When the faithfulness assumption is violated, the MB of the target learnt from the data may not be unique (Pena et al, 2007; Statnikov et al, 2013). To deal with the violation of the faithfulness assumption, some research work has been done for identifying multiple MBs without the assumption, such as KIAMB (Pena et al, 2007), TIE* (Statnikov et al, 2013), SGAI (Yu et al, 2017), LCMB (Liu and Liu, 2016), and WLCMB (Liu and Liu, 2016). KIAMB was the first attempt to learn multiple MBs, but it needs to run multiple times and cannot guarantee finding all possible MBs of the class variable.…”
Section: Overview of Constraint-based Methods
Classification: mentioning (confidence: 99%)
“…In this subsection, we will discuss six representative MB learning algorithms for tackling the situation where the faithfulness or causal sufficiency assumption is violated, i.e., KIAMB (Pena et al, 2007), TIE* (Statnikov et al, 2013), SGAI (Yu et al, 2017), LCMB (Liu and Liu, 2016), WLCMB (Liu and Liu, 2016), and M3B (Yu et al, 2018c).…”
Section: Methods of MB Learning with Relaxed Assumptions
Classification: mentioning (confidence: 99%)
“…Tsamardinos et al (2003b) improved on this work by adding those variables to the candidate MB having the strongest dependencies with the target in advance. While employing the same statistical tests and heuristics, another algorithm that learns the direct neighbors and spouses separately has proven superior, and hence has been widely adopted in later constraint-based methods (see Aliferis et al (2003); Tsamardinos et al (2003a); Peña et al (2007); Fu and Desmarais (2008); Aliferis et al (2010a); de Morais and Aussem (2010); Liu and Liu (2016)). In particular, this improved strategy looks at the dependencies with the target at a distance of one and two neighbors separately.…”
Section: Related Work
Classification: mentioning (confidence: 99%)
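The "direct neighbors first, spouses second" strategy this statement describes can be sketched in a few lines. The toy collider network, the simplified spouse test, and the Fisher-z CI test below are illustrative assumptions, not the exact procedure of any cited algorithm:

```python
from itertools import combinations
import numpy as np

def ci_test(data, i, j, cond, z_crit=3.3):
    """Fisher-z partial-correlation test; True means 'looks independent'."""
    prec = np.linalg.inv(np.corrcoef(data[:, [i, j] + list(cond)], rowvar=False))
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]), -0.999999, 0.999999)
    stat = abs(0.5 * np.log((1 + r) / (1 - r))) * np.sqrt(data.shape[0] - len(cond) - 3)
    return stat < z_crit

def two_phase_mb(data, target):
    """Phase 1: direct neighbors (distance one); phase 2: spouses (distance two)."""
    cand = [v for v in range(data.shape[1]) if v != target]
    # Phase 1: v is a direct neighbor iff no subset of the other
    # candidates renders it independent of the target.
    pc = [v for v in cand
          if not any(ci_test(data, target, v, list(s))
                     for k in range(len(cand))
                     for s in combinations([u for u in cand if u != v], k))]
    # Phase 2 (simplified): a non-neighbor that becomes dependent once the
    # neighbors -- including any shared child -- are conditioned on is a spouse.
    spouses = [v for v in cand
               if v not in pc and not ci_test(data, target, v, pc)]
    return sorted(pc + spouses)

# Toy network: X1 -> T -> C <- S, so MB(T) = {X1, C, S} (S only shows up
# as dependent once the common child C is in the conditioning set).
rng = np.random.default_rng(1)
n = 5000
x1, s = rng.normal(size=(2, n))
t = x1 + 0.5 * rng.normal(size=n)
c = t + s + 0.5 * rng.normal(size=n)
data = np.column_stack([t, x1, c, s])
print(two_phase_mb(data, target=0))   # expected: [1, 2, 3]
```

The point the quoted passage makes is visible here: the spouse S is marginally independent of T (distance two), so a purely pairwise grow heuristic can miss it, whereas testing at distance two through the learned neighbor set recovers it.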
“…The experimental assembly and high-P experimental technique were generally identical to those reported in ref. 21. Our sample was synthesized at 1 GPa and 800 °C with a heating time of 24 hours.…”
Section: Synthesis
Classification: mentioning (confidence: 99%)