2021
DOI: 10.1016/j.patcog.2020.107592
Solving large-scale support vector ordinal regression with asynchronous parallel coordinate descent algorithms

Cited by 11 publications (6 citation statements) | References 26 publications
“…A wealth of research work has been performed on ordinal regression [4,7,12,14,16,17,38]. Among these, Wenzhi et al. [22] use a neural network architecture with a non-traditional loss function that is particularly suited to the ordinal regression task.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
“…Therefore, both the algorithms and the evaluation metrics have to be adapted to face this problem and evaluate their solutions adequately. For this purpose, traditional methods such as support vector machines have been adapted to this problem by imposing constraints in order to respect the ordering of the classes (Chu & Keerthi, 2007;Gu et al, 2020), or by reducing to binary problems in an appropriate way (Lin & Li, 2012). Other proposals try to adapt conventional loss or gain functions in order to handle ordinal data in the best possible way.…”
Section: Ordinal Regression and Similarity Approaches
Citation type: mentioning (confidence: 99%)
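To make the quoted SVM adaptations concrete, here is a minimal, hypothetical sketch of the reduction-to-binary route mentioned above (in the spirit of Lin & Li, 2012): a problem with K ordered ranks is decomposed into K-1 binary questions "is the rank greater than k?", and a rank is predicted by counting positive answers. The class name and the choice of scikit-learn's LinearSVC as base learner are illustrative assumptions, not the cited papers' implementations.

```python
# Hypothetical sketch: ordinal regression via reduction to K-1
# binary subproblems (in the spirit of Lin & Li, 2012).
import numpy as np
from sklearn.svm import LinearSVC


class OrdinalByBinaryReduction:
    """Trains one binary classifier per threshold k: "is rank > k?"."""

    def __init__(self, n_ranks):
        self.n_ranks = n_ranks
        self.models = [LinearSVC() for _ in range(n_ranks - 1)]

    def fit(self, X, y):
        # y holds integer ranks in {1, ..., n_ranks}.
        for k, model in enumerate(self.models, start=1):
            model.fit(X, (y > k).astype(int))
        return self

    def predict(self, X):
        # Predicted rank = 1 + number of thresholds the sample clears.
        votes = np.stack([m.predict(X) for m in self.models], axis=1)
        return 1 + votes.sum(axis=1)
```

Counting votes keeps every prediction a valid rank in {1, ..., K} even when the K-1 binary classifiers are not mutually consistent.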
“…Ordinal regression has been successfully applied to a great number of real-world problems. Typical ordinal regression trains models in a batch manner, i.e., feeding all the data into the training model at once [6][7][8]. The batch training manner demands huge computation and memory for large-scale problems, and it is also not adaptable to streaming data.…”
Section: Learning To Rank In Online Ordinal Regression
Citation type: mentioning (confidence: 99%)
“…The lower interval endpoint can be defined similarly. Let L_IMC^I(f(x), θ, y_l, y_r) denote the implicit constraints for the ordering of the thresholds θ_i, where I stands for the interval [8].…”
Section: Learning To Rank In Online Ordinal Regression
Citation type: mentioning (confidence: 99%)
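For orientation, the notation in this quote can be reconstructed along the lines of the implicit-constraint support vector ordinal regression loss of Chu & Keerthi (2007). The interval form below, with label bounds y_l and y_r, is an assumed reading of L_IMC^I, not a formula taken verbatim from the citing paper:

```latex
% Assumed reconstruction of L_IMC^I: hinge penalties push f(x)
% above the thresholds below y_l and below the thresholds at or
% above y_r (K ranks, thresholds \theta_1, ..., \theta_{K-1}).
\[
L^{I}_{\mathrm{IMC}}\bigl(f(x), \theta, y_l, y_r\bigr)
  = \sum_{k=1}^{y_l - 1} \max\bigl(0,\, 1 - (f(x) - \theta_k)\bigr)
  + \sum_{k=y_r}^{K-1} \max\bigl(0,\, 1 + (f(x) - \theta_k)\bigr)
\]
```

With implicit constraints, the threshold ordering θ_1 ≤ … ≤ θ_{K-1} holds automatically at the optimum rather than being imposed as explicit inequality constraints; this is the ordering property the quote refers to.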