2019 IEEE Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB)
DOI: 10.1109/cibcb.2019.8791489
FRI - Feature Relevance Intervals for Interpretable and Interactive Data Exploration

Abstract: Most existing feature selection methods are insufficient for analytic purposes when applied to high-dimensional data or redundant sensor signals, since features can be selected due to spurious effects or correlations rather than causal effects. To support the identification of causal features in biomedical experiments, we hereby present FRI, an open source Python library that can be used to identify all-relevant variables in linear classification and (ordinal) regression problems. Using the recently proposed…
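As a quick illustration of how the library is meant to be used, here is a minimal sketch based on the usage pattern in the project's README (https://github.com/lpfann/fri). The entry points shown (fri.FRI, fri.ProblemName.CLASSIFICATION, genClassificationData, the interval_ attribute) are assumptions from that documentation and may differ between library versions:

```python
# Hedged sketch of typical FRI usage; exact names may vary by version.
import fri
from sklearn.preprocessing import StandardScaler

# Synthetic data with strongly relevant and redundant features.
X, y = fri.genClassificationData(n_samples=300, n_features=6,
                                 n_strel=2, n_redundant=2)
X = StandardScaler().fit_transform(X)  # FRI expects scaled features

model = fri.FRI(fri.ProblemName.CLASSIFICATION)
model.fit(X, y)

# One [lower, upper] relevance interval per feature: a lower bound > 0
# marks a strongly relevant feature, while an interval touching 0 with
# a positive upper bound indicates weak (redundant) relevance.
print(model.interval_)
```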

Cited by 3 publications (4 citation statements)
References: 33 publications
“…One can see that the complexity can limit the application of the method to small to medium-sized problems. This is in line with other all-relevant feature selection methods [37], which exhibit much higher runtimes than simple sparse methods. While slightly bigger sets with, e.g.…”
Section: Equivalence of minrel(i) and minrel*(i) (supporting)
confidence: 88%
“…Because the dz + c LPs are a significant factor, we proposed to solve them in parallel [37] which we evaluate in Appendix B.1.…”
Section: Time Complexity (mentioning)
confidence: 99%
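To make the parallelization idea concrete: the per-feature relevance-bound LPs do not depend on one another, so they can be dispatched to a worker pool. The following is a hypothetical sketch using scipy.optimize.linprog and a process pool; it is not FRI's actual implementation, and the toy constraint matrices merely stand in for the real model constraints:

```python
# Hypothetical sketch (not FRI's code): the relevance-bound LPs are
# mutually independent, so they can be solved concurrently.
from concurrent.futures import ProcessPoolExecutor

import numpy as np
from scipy.optimize import linprog

def solve_one_lp(c, A_ub, b_ub):
    """Solve one LP: minimize c @ x subject to A_ub @ x <= b_ub, 0 <= x <= 1."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    return res.fun

def solve_lps_in_parallel(problems, max_workers=4):
    """Solve a list of (c, A_ub, b_ub) LP triples on a process pool."""
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(solve_one_lp, *zip(*problems)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = 6                                # number of features
    A_ub = rng.normal(size=(10, d))      # toy stand-in for model constraints
    b_ub = np.ones(10)
    problems = []
    for j in range(d):                   # two LPs per feature:
        for sign in (+1.0, -1.0):        # minimize / maximize weight j
            c = np.zeros(d)
            c[j] = sign
            problems.append((c, A_ub, b_ub))
    print(solve_lps_in_parallel(problems))
```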
“…For example, Auffarth et al [28] write that “redundancy measures how similar features are”. Chakraborty et al [29] and Pfannschmidt et al [30, 31] argue that features or variables include redundancy if not all relevant features are required for a target application, that is, there exists no unique minimum feature set to solve a given task. This kind of redundancy, based on similarity of information, is in this work hereafter referred to as Redundancy Type II.…”
Section: Redundancy in Related Work (mentioning)
confidence: 99%
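A tiny constructed example may help pin down this notion of redundancy: if two features carry identical information, then {x0} and {x1} are both minimal feature sets that solve the task, so no unique minimum set exists. The sketch below is an illustrative toy (scikit-learn-based), not taken from the cited works:

```python
# Toy illustration of Redundancy Type II: x0 and x1 are duplicates, so
# {x0} and {x1} are equally good minimal feature sets for the task.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
x1 = x0.copy()                    # redundant copy of x0
noise = rng.normal(size=200)      # irrelevant feature
y = (x0 > 0).astype(int)
data = np.column_stack([x0, x1, noise])

for cols in ([0], [1], [2]):
    X = data[:, cols]
    acc = LogisticRegression().fit(X, y).score(X, y)
    print(f"features {cols}: accuracy {acc:.2f}")  # [0] and [1] match; [2] does not
```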
“…(I) pre-model interpretability, which involves data exploration techniques such as PCA, k-means clustering (Alpaydin, 2020) and t-distributed Stochastic Neighbor Embedding (t-SNE) (Maaten and Hinton, 2008); (II) post-model interpretability, in which model-agnostic techniques are applied on black-box models to analyze them locally, such as Local Interpretable Model-agnostic Explanations (LIME) (Arrieta et al, 2020), DeepView (Schulz et al, 2020), Feature Relevance Information (FRI) (Pfannschmidt et al, 2019) and SHapley Additive exPlanations (SHAP) (Lundberg and Lee, 2017), just to name a few;…”
Section: Introduction (mentioning)
confidence: 99%
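For completeness, a minimal sketch of the “pre-model interpretability” step named above, using the scikit-learn implementations of PCA and t-SNE; the dataset choice and plotting details are illustrative assumptions, not taken from the cited works:

```python
# Illustrative pre-model data exploration: 2-D projections via PCA
# and t-SNE, inspected visually before any predictive model is fit.
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_iris(return_X_y=True)
X_pca = PCA(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

fig, axes = plt.subplots(1, 2, figsize=(9, 4))
axes[0].scatter(X_pca[:, 0], X_pca[:, 1], c=y)
axes[0].set_title("PCA")
axes[1].scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)
axes[1].set_title("t-SNE")
plt.tight_layout()
plt.show()
```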