2021
DOI: 10.1016/j.ins.2021.05.069
Twin support vector machines with privileged information

Cited by 11 publications (2 citation statements)
References 41 publications
“…• Supervised: AdaBoost with PI (Chen et al [21], Liu et al [22]), SVM+ for risk modelling (Ribeiro et al [23], [24]), SVM+ and multi-task learning (Liang and Cherkassky [25], Liang and Cherkassky [26], Liang et al [27], Cai and Cherkassky [28], Tang et al [29]), Regression Forests for facial feature detection with privileged head pose or gender (Yang and Patras [30]), image classification using privileged attributes, bounding box annotations, and textual descriptive tags (Sharmanska et al [31], Sharmanska et al [32], Li et al [33], Wang and Ji [34], Yan et al [35], Rodríguez et al [36]), structured SVM (SSVM) prediction algorithm for image object localization using PI (Feyereisl et al [37]), unifying distillation and PI (Lopez-Paz et al [38]), multi-instance learning for action and event recognition with privileged web data (Niu et al [39]), knowledge transfer for neural networks (Vapnik and Izmailov [8]), image object detection using PI (Hoffman et al [40]), domain adaptation (Sarafianos et al [41]), multiview privileged SVMs (Tang et al [42]), deep learning under PI (Lambert et al [43]), PI for structured output prediction (Zhang et al [44]), label enhancement with multi-label learning (Zhu et al [45]), PI for the diagnosis of Alzheimer's disease (Li et al [46], Ganaie and Tanveer [47]), breast (Shaikh et al [48]) and liver (Zhang et al [49]) cancers, PI for image super-resolution using CNNs (Lee et al [50]), robust SVM+ (Li et al [9], Wu et al [51]), twin SVM with PI (Che et al [52]), robust twin SVM+ (Li et al [53]), Support Vector ...…”
Section: Related Work, A. Privileged Information (mentioning; confidence: 99%)
“…The training data can be grouped according to a feature attribute, and a formal optimization problem is then formulated. Che et al. (2021) proposed a twin support vector machine model for learning with privileged information based on the LUPI paradigm. Other works have improved SVMs with privileged information to make the models more robust (Li et al., 2021).…”
Section: Introduction (mentioning; confidence: 99%)
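
For context on the LUPI setting that both citation statements refer to, the following is a minimal sketch of the classical SVM+ formulation of Vapnik and Vashist, in which a correcting function learned on the privileged features x_i^* (available only at training time) replaces the usual slack variables. This is the generic LUPI template, not necessarily the exact twin-SVM formulation of Che et al. [52]:

\[
\begin{aligned}
\min_{w,\,b,\,w^{*},\,b^{*}} \quad & \frac{1}{2}\lVert w\rVert^{2} + \frac{\gamma}{2}\lVert w^{*}\rVert^{2} + C\sum_{i=1}^{\ell}\bigl(\langle w^{*}, x_i^{*}\rangle + b^{*}\bigr)\\
\text{s.t.}\quad & y_i\bigl(\langle w, x_i\rangle + b\bigr) \ge 1 - \bigl(\langle w^{*}, x_i^{*}\rangle + b^{*}\bigr),\\
& \langle w^{*}, x_i^{*}\rangle + b^{*} \ge 0,\qquad i = 1,\dots,\ell,
\end{aligned}
\]

where x_i are the regular features, x_i^* the privileged features, and the correcting function \(\langle w^{*}, x^{*}\rangle + b^{*}\) plays the role of the slack \(\xi_i\) in the standard SVM. A twin-SVM variant, as the title of the cited paper suggests, would presumably solve two such nonparallel-hyperplane problems, one per class, each with its own correcting function over the privileged space; the exact objective and constraints are given in the paper itself.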