2019
DOI: 10.3390/make1010026

Model Selection Criteria on Beta Regression for Machine Learning

Abstract: Beta regression models are a class of supervised learning tools for regression problems with univariate and limited response. Current fitting procedures for beta regression require variable selection based on (potentially problematic) information criteria. We propose model selection criteria that take into account the leverage, residuals, and influence of the observations, for both the systematic linear and nonlinear components. To that end, we propose a Predictive Residual Sum of Squares (PRESS)-like machine learn…
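The abstract is cut off, but the general shape of a PRESS-like criterion can be illustrated with a leave-one-out sketch. The helper names `fit_beta_regression` and `predict_mean` below are hypothetical placeholders, not the authors' implementation; the snippet only shows the generic form of such a criterion, not the specific statistic proposed in the paper.

```python
import numpy as np

def press_like_criterion(y, X, fit_beta_regression, predict_mean):
    """Leave-one-out PRESS-style score: sum of squared prediction errors,
    each observation being predicted from a model fitted without it.

    `fit_beta_regression(y, X)` and `predict_mean(model, X_new)` are
    hypothetical callables standing in for whatever beta regression
    fitting routine is actually used.
    """
    n = len(y)
    errors = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i                  # drop the i-th observation
        model = fit_beta_regression(y[keep], X[keep])
        mu_hat = np.asarray(predict_mean(model, X[i:i + 1])).ravel()[0]
        errors[i] = y[i] - mu_hat                 # held-out prediction error
    return float(np.sum(errors ** 2))             # smaller is better
```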

Cited by 19 publications (19 citation statements)
References 23 publications
“…, where ψ(•) denotes the digamma function and g′(•) is the first derivative of g(•). The iterative reweighted least-squares (IWLS) algorithm, or Fisher's scoring algorithm, was used for estimating β [34,35]. The form of this algorithm can be written as:…”
Section: Methodology, Beta Regression Model
mentioning, confidence: 99%
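The quoted algorithm is cut off. A minimal sketch of one Fisher scoring update for the mean parameters, assuming the usual Ferrari and Cribari-Neto parameterisation with a logit link and a fixed precision φ (which may differ from the cited paper's exact setup), is:

```python
import numpy as np
from scipy.special import digamma, polygamma, expit

def fisher_scoring_step(beta, phi, y, X):
    """One Fisher scoring update beta <- beta + K_bb^{-1} U_b for a
    logit-link beta regression with fixed precision phi (sketch only)."""
    eta = X @ beta
    mu = expit(eta)                               # inverse logit link
    # working quantities on the digamma scale
    y_star = np.log(y / (1.0 - y))
    mu_star = digamma(mu * phi) - digamma((1.0 - mu) * phi)
    dmu_deta = mu * (1.0 - mu)                    # 1 / g'(mu) for the logit link
    # score vector U_beta and Fisher information K_bb
    U = phi * X.T @ (dmu_deta * (y_star - mu_star))
    w = phi * (polygamma(1, mu * phi) + polygamma(1, (1.0 - mu) * phi)) * dmu_deta ** 2
    K = phi * X.T @ (w[:, None] * X)
    return beta + np.linalg.solve(K, U)           # one scoring step
```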
“…ββ is the information matrix for β; see Espinheira et al. [35] for more details. The initial value of β can be obtained by least-squares estimation, while the initial value for each precision parameter is:…”
Section: Methodology, Beta Regression Model
mentioning, confidence: 99%
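The initial precision value in the quote is also truncated. A sketch of starting values commonly used for beta regression (least squares on the link-transformed response for β, and a method-of-moments value for φ), which may or may not match the cited paper's exact choice, is:

```python
import numpy as np

def starting_values(y, X):
    """Commonly used starting values for a logit-link beta regression:
    OLS on the logit-transformed response for beta, and a delta-method /
    method-of-moments value for the precision parameter phi (sketch)."""
    z = np.log(y / (1.0 - y))                      # logit-transformed response
    beta0, *_ = np.linalg.lstsq(X, z, rcond=None)  # least-squares estimate
    eta = X @ beta0
    mu = 1.0 / (1.0 + np.exp(-eta))
    # residual variance on the link scale, mapped back to the response scale
    e = z - eta
    sigma2 = (e @ e) / (len(y) - X.shape[1]) * (mu * (1.0 - mu)) ** 2
    phi0 = np.mean(mu * (1.0 - mu) / sigma2 - 1.0)
    return beta0, max(phi0, 1e-2)                  # keep phi strictly positive
```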
“…To obtain the estimated vector of γ, one can use the iterative reweighted least-squares (IWLS) method or the Fisher scoring method [41,42]. After a few iterations, convergence occurs once the gap between consecutive estimates falls below a small, pre-specified constant.…”
Section: Beta Regression Modeling
mentioning, confidence: 99%
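A minimal sketch of the iterate-until-convergence step described in the quote, with `update_gamma` as a hypothetical callable performing one IWLS or Fisher scoring update for the precision parameters:

```python
import numpy as np

def estimate_gamma(gamma0, update_gamma, tol=1e-8, max_iter=100):
    """Iterate a scoring update until consecutive estimates differ by
    less than a small constant `tol` (convergence criterion)."""
    gamma = np.asarray(gamma0, dtype=float)
    for _ in range(max_iter):
        gamma_new = update_gamma(gamma)
        if np.max(np.abs(gamma_new - gamma)) < tol:   # gap between iterates
            return gamma_new
        gamma = gamma_new
    return gamma  # last iterate if the tolerance was not reached
```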
“…We have tried two different link functions: the logit and the complementary log-log. According to [17], when the mean of the response variable is near 1, the complementary log-log link function for the mean softens the impact of influential points in maximum likelihood estimation. In this sense, we consider here the logit and the complementary log-log link functions and compare the performance of both models.…”
Section: An Application: Fluid Catalytic Cracking (FCC)
mentioning, confidence: 99%
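A small illustration of the two mean links being compared; the numbers are only meant to show how differently the logit and the complementary log-log map means close to 1, which is one intuition for the softening effect mentioned in the quote:

```python
import numpy as np

def logit(mu):
    return np.log(mu / (1.0 - mu))

def cloglog(mu):
    return np.log(-np.log(1.0 - mu))

# Near mu = 1 the logit grows much faster than the complementary log-log,
# so the same extreme mean corresponds to a far larger linear predictor
# under the logit link.
for mu in (0.90, 0.99, 0.999):
    print(f"mu = {mu}: logit = {logit(mu):6.2f}   cloglog = {cloglog(mu):5.2f}")
```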
“…Many expressions in (17) differ only in the parameter of interest. More specifically, they differ in the derivatives of η1 and η2 with respect to these parameters.…”
mentioning, confidence: 99%