2023
DOI: 10.1109/tpami.2022.3225461
Micro-Supervised Disturbance Learning: A Perspective of Representation Probability Distribution

Cited by 10 publications (3 citation statements)
References 48 publications
“…The continuous growth of such data presents unparalleled challenges to the field of clustering research. Unlike static data, time series data are characterized by inherent multi-scale temporal dependencies that reflect the data’s dynamic changes, including both long-term and short-term pattern variations [5, 6]. This unique temporal characteristic renders most traditional clustering algorithms unsuitable for time series data, as they often fail to capture its dynamic essence.…”
Section: Introduction
confidence: 99%
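To make the multi-scale point in the statement above concrete, the following is a minimal sketch (not from the cited paper, assuming only NumPy): short- and long-window statistics expose temporal structure at two scales that a single static feature vector would hide. The window sizes and the helper name multiscale_features are illustrative assumptions, not anything defined in the source.

# Minimal sketch: per-step features at two temporal scales.
import numpy as np

def multiscale_features(series, short_w=5, long_w=50):
    # sliding_window_view(x, w) -> shape (len(x)-w+1, w); trailing windows.
    s = np.lib.stride_tricks.sliding_window_view(series, short_w)
    l = np.lib.stride_tricks.sliding_window_view(series, long_w)
    n = min(len(s), len(l))  # align both scales on the most recent windows
    return np.column_stack([
        s[-n:].mean(axis=1), l[-n:].mean(axis=1),  # short-/long-term level
        s[-n:].std(axis=1),  l[-n:].std(axis=1),   # short-/long-term variability
    ])

# A series whose short-term behavior changes halfway while its long-term
# mean stays near zero; only the multi-scale descriptor separates the regimes.
t = np.arange(500)
rng = np.random.default_rng(0)
x = np.sin(0.5 * t) * (t > 250) + 0.1 * rng.normal(size=500)
feats = multiscale_features(x)
print(feats.shape)  # (451, 4): one 4-D descriptor per aligned time step

Clustering the rows of feats (rather than raw values) is one simple way such dependencies can be surfaced; the cited works pursue far richer temporal representations.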
“…In recent years, deep learning has experienced rapid advancements, leading to the widespread utilization of deep neural networks [1, 2] across various domains. Deep neural networks typically exhibit a hierarchical structure with numerous parameters, necessitating a substantial amount of data for training and parameter adjustment.…”
Section: Introduction
confidence: 99%
“…However, in many practical application scenarios, it is often difficult to obtain enough data to support the training of deep learning models. Few-shot learning, by contrast, has shown promising results with only a small number of samples [1]. Hence, it is crucial to investigate few-shot learning methods that offer high accuracy and strong generalization performance, particularly in scenarios where abundant training data is unavailable.…”
Section: Introduction
confidence: 99%