2019 IEEE 35th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde.2019.00033
DBSVEC: Density-Based Clustering Using Support Vector Expansion

Cited by 13 publications (6 citation statements)
References 27 publications
“…Many pretext tasks have been found to be conducive to learning image features, for example image colorization [40,41], super-resolution [21], image processing [39,3], jigsaw puzzles [26], rotation angle prediction [12], and unsupervised deep clustering [2,34]. These methods can learn desirable, transferable representations that achieve promising results in downstream tasks.…”
Section: Self-Supervised Learning With Pretext Tasks
confidence: 99%
“…Instead, we use embeddings as the network outputs, which naturally allow for modeling emerging new classes and do not require direct changes to the network structure. Embedding networks map data into a low-dimensional output space, where similar data are clustered together and dissimilar data are far apart (Chopra et al. 2005; Wang et al. 2019b, 2021a). In the learned space, general metrics, such as the L2 distance, can be applied to determine the similarities between the original data.…”
Section: Deep Retrieval for Embeddings
confidence: 99%
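The excerpt above describes embedding-based retrieval: similarity between original items is measured as the distance between their learned embedding vectors. A minimal sketch of that idea is shown below; the embedding dimension, the toy vectors, and the helper name `l2_distance` are illustrative assumptions, not details taken from the cited work.

```python
import numpy as np

def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean (L2) distance between two embedding vectors."""
    return float(np.linalg.norm(a - b))

# Hypothetical embeddings produced by a trained embedding network;
# similar inputs should map to nearby points in this space.
emb_query = np.array([0.10, 0.90, -0.30])
emb_same_class = np.array([0.15, 0.85, -0.25])
emb_other_class = np.array([-0.80, 0.10, 0.70])

# Smaller distance => more similar under the learned metric.
print(l2_distance(emb_query, emb_same_class))   # small
print(l2_distance(emb_query, emb_other_class))  # large
```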
“…Emotion: Support vector machines (Atkinson and Campos 2016; Wang et al. 2019) and random forests (Liu et al. 2016) were first explored for two-category classification. In addition, Tuncer, Dogan, and Subasi (2021) proposed a fractal-pattern feature extraction approach for emotion recognition.…”
Section: Motor Imagery (MI)
confidence: 99%
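As a rough illustration of the two-category (binary) classification setup mentioned in this excerpt, the sketch below trains an SVM and a random forest on synthetic feature vectors. The data, feature dimension, and model parameters are placeholder assumptions, not the pipelines of the cited papers.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for two-class feature vectors (e.g., emotion-related features);
# real experiments would use recorded data, not random samples.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 16)),
               rng.normal(1.5, 1.0, (100, 16))])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Fit each classifier on the training split and report held-out accuracy.
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("Random forest", RandomForestClassifier(random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```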