2020
DOI: 10.1016/j.ifacol.2020.06.106
A new index for information gain in the Bayesian framework

Cited by 5 publications (5 citation statements)
References 4 publications
“…From Fig 3a, we can observe that the change in experimental condition can cure sloppiness, which indicates that sloppiness is a function of information contained in a data set. In our previous work [37], we have demonstrated using simulations that parameters contributing to…”
Section: S = L_min(X) / L_max(X) (Eq. 5) (mentioning)
confidence: 96%
“…4b, we can observe that the change in experimental condition can cure sloppiness, which indicates that sloppiness is a function of the information contained in a data set. In our previous work [27], we have demonstrated using simulations that parameters contributing to sloppy directions have very low information gain in the Bayesian framework. For an initial condition of x_2 > 50, the model becomes non-sloppy.…”
Section: Perspectives (mentioning)
confidence: 99%
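The statements above tie sloppiness to how much information the data carry about each parameter direction. As a hedged illustration only (not taken from either citing paper), the sketch below computes a simple sloppiness ratio from the eigenvalue spread of a Fisher Information Matrix; the example matrix, the function name, and the interpretation thresholds are assumptions made for demonstration.

```python
import numpy as np

# Illustrative sketch: sloppiness is often diagnosed from the spread of
# Fisher Information Matrix (FIM) eigenvalues. Directions with tiny
# eigenvalues ("sloppy" directions) are poorly constrained by the data,
# i.e. the data carry little information about them.

def sloppiness_ratio(fim: np.ndarray) -> float:
    """Ratio of smallest to largest FIM eigenvalue.

    Values near 0 suggest a sloppy model for the given data; values
    near 1 suggest a well-conditioned (non-sloppy) parameterisation.
    """
    eigvals = np.linalg.eigvalsh(fim)          # FIM is symmetric PSD
    eigvals = np.clip(eigvals, 1e-300, None)   # guard against round-off
    return float(eigvals.min() / eigvals.max())

# Hypothetical 3-parameter model whose third direction is sloppy.
fim = np.diag([1.0, 0.5, 1e-6])
print(sloppiness_ratio(fim))  # ~1e-6 -> strongly sloppy
```

Changing the experimental condition (e.g. a different initial condition) changes the FIM and hence this ratio, which is one way to read the claim that sloppiness can be "cured" by a more informative experiment.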
“…Information Gain. Information gain [28] is a supervised feature selection method used to rank features according to a word's contribution based on its presence or absence in a particular set of text inputs [29]. IG is calculated as…”
Section: Related Work (mentioning)
confidence: 99%
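The excerpt above describes the entropy-based notion of information gain used for text feature selection. Since the quoted formula is truncated, the sketch below is only an illustrative reconstruction of that standard definition; the toy corpus, labels, and function names are hypothetical.

```python
import math
from collections import Counter

# Illustrative sketch: information gain for text feature selection ranks a
# word by how much knowing its presence/absence in a document reduces
# uncertainty about the class label.

def entropy(labels):
    """Shannon entropy of a label sequence."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(docs, labels, word):
    """IG(word) = H(labels) - sum over {present, absent} of P(v) * H(labels | v)."""
    present = [lab for doc, lab in zip(docs, labels) if word in doc]
    absent = [lab for doc, lab in zip(docs, labels) if word not in doc]
    ig = entropy(labels)
    for subset in (present, absent):
        if subset:
            ig -= (len(subset) / len(labels)) * entropy(subset)
    return ig

# Hypothetical toy corpus: rank the word "bayesian" by its information gain.
docs = [{"bayesian", "prior"}, {"svm", "kernel"}, {"bayesian", "gain"}, {"tree"}]
labels = ["stats", "ml", "stats", "ml"]
print(information_gain(docs, labels, "bayesian"))  # 1.0 -> perfectly informative
```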