2015
DOI: 10.1007/978-3-319-23528-8_22
Markov Blanket Discovery in Positive-Unlabelled and Semi-supervised Data

Abstract: The importance of Markov blanket discovery algorithms is twofold: as the main building block in constraint-based structure learning of Bayesian network algorithms and as a technique to derive the optimal set of features in filter feature selection approaches. Equally, learning from partially labelled data is a crucial and demanding area of machine learning, and extending techniques from fully to partially supervised scenarios is a challenging problem. While there are many different algorithms to deri…

Cited by 6 publications (4 citation statements). References 12 publications.
“…Because the correction factor is a constant that depends on the amount of labeled data, one can calculate how much more data is required to get the desired power [90]. The conditional test of independence, which was used for learning the PTAN trees, has similar properties [9,88]. For feature selection, one is interested in ranking the features in order of mutual information between the features and the label.…”
Section: Hypothesis Testing
confidence: 99%
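The ranking step described in the statement above can be sketched as follows. This is a minimal illustration, not the paper's method: it estimates empirical mutual information for discrete variables with NumPy and sorts features by their score against the label; the toy data and all names are hypothetical.

```python
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information I(X; Y) for discrete variables, in nats."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))  # joint probability
            if p_xy == 0.0:
                continue
            p_x = np.mean(x == xv)  # marginal of X
            p_y = np.mean(y == yv)  # marginal of Y
            mi += p_xy * np.log(p_xy / (p_x * p_y))
    return mi

# Toy data: feature 0 is a copy of the label, feature 1 is pure noise.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
X = np.column_stack([y, rng.integers(0, 2, size=1000)])

# Rank features by mutual information with the label, highest first.
scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]
```

On this toy data the copied feature scores close to ln 2 nats while the noise feature scores near zero, so it ranks first.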
“…This application of our work was first presented in Sechidis and Brown (2015). Firstly, we will show how we can use surrogate variables to derive the MB of positive-unlabelled nodes, a scenario where BASSUM cannot be applied.…”
Section: Application 1: Semi-supervised Markov Blanket Discovery
confidence: 99%
“…As we have illustrated in example (1), the output of an FS procedure is not always guaranteed to be of constant cardinality. Examples of such FS procedures are in feature selection by hypothesis testing [15]. For this reason, several attempts at extending this measure to feature sets of varying cardinality have been made in the literature, somehow losing some of the important properties.…”
Section: Quantifying Stability
confidence: 99%