2022
DOI: 10.1016/j.patcog.2021.108263
Deep and interpretable regression models for ordinal outcomes

Cited by 24 publications
(17 citation statements)
References 9 publications
“…In fact, the experimental findings demonstrated how a standard CCE together with CLM is sufficient to model ordinal structure of the label, without requiring the minimization of an ordinal loss (e.g., QWK). This is also in line with recent findings in the ordinal classification literature [44]. Moreover, the ordinal constraints allow the network to learn the characteristics that properly describe the quality of shotgun (i.e., wood grains), rather than other confounds/bias characteristics (e.g., geometry).…”
Section: Workpace (supporting)
confidence: 86%
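The finding quoted above, that a plain categorical cross-entropy over a cumulative link model (CLM) head already captures the ordinal label structure, can be sketched as follows. This is a minimal NumPy illustration; the function names and the softplus re-parameterization of the cutpoints are assumptions for the sketch, not the cited implementation.

```python
import numpy as np

def clm_probs(eta, raw_cuts):
    """Cumulative link model head: map a scalar score eta and unconstrained
    parameters raw_cuts to K class probabilities (K = len(raw_cuts) + 1)."""
    # Enforce ordered cutpoints theta_1 < ... < theta_{K-1} via cumulative softplus
    gaps = np.log1p(np.exp(raw_cuts[1:]))
    theta = np.concatenate([raw_cuts[:1], raw_cuts[0] + np.cumsum(gaps)])
    # Logistic link: P(Y <= k) = sigmoid(theta_k - eta)
    cdf = 1.0 / (1.0 + np.exp(-(theta - eta)))
    cdf = np.concatenate([[0.0], cdf, [1.0]])
    return np.diff(cdf)  # per-class probabilities, non-negative, sum to 1

def categorical_cross_entropy(probs, y):
    # Standard CCE on the CLM probabilities; per the quoted finding,
    # no separate ordinal loss (e.g. QWK) is required.
    return -np.log(probs[y])
```

Because the class probabilities are differences of a monotone CDF evaluated at ordered cutpoints, the ordinal constraint is built into the head rather than into the loss.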
“…The transformation function (upper right panel) transforms the complex, bimodal distribution of (lower panel) to , the standard minimum extreme value distribution (upper left panel). An analogous figure for ordinal outcomes is published in Kook et al ( 2022 , Fig. 1).…”
Section: Introduction (mentioning)
confidence: 66%
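The transformation described in the quote can be illustrated with a generic probability-integral sketch: h = F_Z^{-1} ∘ F_Y maps any continuous outcome Y to the standard minimum extreme value distribution F_Z(z) = 1 − exp(−exp(z)). This is the textbook construction, not the paper's fitted transformation function.

```python
import numpy as np

# Standard minimum extreme value CDF and its inverse
F_Z = lambda z: 1.0 - np.exp(-np.exp(z))
F_Z_inv = lambda p: np.log(-np.log(1.0 - p))

def h(y, F_Y):
    """Transformation h = F_Z_inv o F_Y: if Y ~ F_Y, then h(Y) ~ F_Z,
    no matter how complex or bimodal F_Y is."""
    return F_Z_inv(F_Y(y))
```

By construction, F_Z(h(y)) equals F_Y(y) for every y, which is exactly the property the figure in the quote visualizes.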
“…The implementation of the interval censored log-likelihood for ordinal TMs was taken from Kook et al. ( 2022 ) and we used SGD with the Adam optimizer (Kingma and Ba 2015 ) with learning rate , batch size 250 and 200 epochs. Parameters were initialized with the maximum likelihood estimate for obtained via tram::Polr() (Hothorn 2020 ).…”
Section: A Notation (mentioning)
confidence: 99%
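The interval-censored log-likelihood for ordinal transformation models mentioned in the quote has the form log[F_Z(θ_y − η) − F_Z(θ_{y−1} − η)], with θ_0 = −∞ and θ_K = +∞. A minimal NumPy sketch, assuming a standard logistic F_Z (the clipping and the choice of link are illustrative; the quoted learning rate and initialization via tram::Polr() are not reproduced here):

```python
import numpy as np

def ordinal_tm_nll(theta, eta, y):
    """Negative interval-censored log-likelihood of an ordinal transformation
    model: class y contributes -log[F_Z(theta_y - eta) - F_Z(theta_{y-1} - eta)],
    with boundary cutpoints theta_0 = -inf and theta_K = +inf."""
    cuts = np.concatenate([[-np.inf], np.asarray(theta, float), [np.inf]])
    # Numerically stable standard logistic CDF
    F_Z = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))
    return -np.log(F_Z(cuts[y + 1] - eta) - F_Z(cuts[y] - eta))
```

In the quoted setup this loss is minimized with SGD using the Adam optimizer; exponentiating the negative losses across all K classes recovers a proper probability vector.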
“…For the time being, we leave a more runtime efficient version of our framework to future implementation and research. Finally, we consider the extension of our framework to allow for other Normalizing Flow types, suitable for count or ordinal data as proposed by Kook et al (2021), as an interesting enhancement. The table shows median CRPS scores across all folds, with interquartile-range values in parentheses, i.e., q0.…”
Section: Conclusion, Limitations and Future (mentioning)
confidence: 99%