2022
DOI: 10.1007/s10115-022-01755-9

Multiresolution hierarchical support vector machine for classification of large datasets

Cited by 3 publications (4 citation statements)
References 26 publications
“…This study utilizes the hierarchical SVM construction method, categorizing the music performance movements recorded by wearable sensors into two initial motion subcategories [6]. It further refines these motion subcategories until a single motion type classification is reached. Initially, based on the intensity of the music performance data captured by the wearable sensors, the movements of music performers are categorized as either gentle or intense motions.…”
Section: Classification of Performance Skills of Music Motions
confidence: 99%
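A minimal sketch of the two-level hierarchical SVM construction described in the statement above, assuming scikit-learn; the synthetic features, the gentle/intense labels, and the second-level subcategories are illustrative placeholders, not data or labels from the cited study.

```python
# Sketch of a two-level hierarchical SVM: level 1 separates gentle vs. intense
# motions, level 2 refines each branch into a finer motion type.
# Assumes scikit-learn; all data and labels here are synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 8))              # synthetic sensor features
coarse_y = (X[:, 0] > 0).astype(int)       # 0 = gentle, 1 = intense (placeholder rule)
fine_y = (X[:, 1] > 0).astype(int)         # subcategory within each branch (placeholder rule)

# Level-1 classifier: gentle vs. intense.
root = SVC(kernel="rbf").fit(X, coarse_y)

# Level-2 classifiers: one per branch, trained only on that branch's samples.
leaves = {}
for branch in (0, 1):
    mask = coarse_y == branch
    leaves[branch] = SVC(kernel="rbf").fit(X[mask], fine_y[mask])

def classify(x):
    """Route a sample down the hierarchy until a single motion type is reached."""
    branch = int(root.predict(x.reshape(1, -1))[0])
    subtype = int(leaves[branch].predict(x.reshape(1, -1))[0])
    return branch, subtype

print(classify(X[0]))
```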
“…Since the eigenvector is formed by a linear combination of the elements of the projection space, there exist coefficients γ_ij satisfying Equation (6).…”
Section: Dimensionality Reduction
confidence: 99%
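The citing paper's Equation (6) is not reproduced on this page; the general form the statement refers to, an eigenvector expanded over the projection-space elements with coefficients γ_ij, would read roughly as follows (a sketch of the standard linear-combination form, not the paper's actual Equation (6)).

```latex
% Illustrative general form only: the i-th eigenvector v_i expanded over the
% projection-space elements e_j with coefficients \gamma_{ij}.
v_i = \sum_{j} \gamma_{ij} \, e_j
```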
“…Diversity in the data can also be obtained by extracting a subset of attributes from the original attributes as a training set; generally accepted methods are random feature bagging [7] and empowerment feature generation bagging [8]. In this paper, the member classifiers are designed based on the support vector machine [9]: on the basis of a self-service (bootstrap) sampling strategy, features are randomly selected with each feature's importance used as its probability of being selected. Selecting both samples and features through this dual random process not only differentiates the data but also preserves as much sample information as possible.…”
Section: Introduction
confidence: 99%
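A minimal sketch of the ensemble design described in that passage, assuming scikit-learn; the importance scores, number of member classifiers, and feature-subset size are illustrative placeholders rather than values from the cited paper.

```python
# Sketch of an SVM ensemble built from bootstrap samples of rows and
# importance-weighted random selection of feature columns.
# Assumes scikit-learn; data and importance scores are synthetic and illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# Illustrative feature-importance scores, normalized into selection probabilities.
importance = np.abs(rng.normal(size=X.shape[1])) + 1e-6
probs = importance / importance.sum()

members = []
for _ in range(5):                                   # 5 member classifiers (placeholder)
    rows = rng.integers(0, len(X), size=len(X))      # bootstrap sample of rows
    cols = rng.choice(X.shape[1], size=5,            # features drawn with probability
                      replace=False, p=probs)        # proportional to importance
    clf = SVC(kernel="rbf").fit(X[rows][:, cols], y[rows])
    members.append((cols, clf))

def predict(x):
    """Majority vote over the member SVMs, each seeing only its own feature subset."""
    votes = [clf.predict(x[cols].reshape(1, -1))[0] for cols, clf in members]
    return int(np.round(np.mean(votes)))

print(predict(X[0]))
```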