2022 IEEE 11th Data Driven Control and Learning Systems Conference (DDCLS)
DOI: 10.1109/ddcls55054.2022.9858445
A Self-training Multi-task Attention Method for NILM

Cited by 3 publications (2 citation statements)
References 17 publications
“…However, both HMM-based methods and machine learning methods, such as SVM and KNN-based methods, require manual feature extraction. In contrast, deep neural network methods can automatically extract features without human involvement [17].…”
Section: Introduction
Citation type: mentioning
confidence: 99%
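The contrast drawn in this statement — hand-crafted features for HMM-, SVM-, and KNN-based methods versus features learned automatically by deep networks — can be illustrated with a minimal sketch. The seq2point-style 1D CNN below (the class name, layer sizes, and the 599-sample window are assumptions chosen for illustration, not details taken from the cited paper) maps a raw mains window to a single appliance power value and learns its own features through the convolutional stack:

```python
import torch
import torch.nn as nn

class Seq2PointCNN(nn.Module):
    """Minimal seq2point-style network: a raw mains window goes in,
    a single appliance power estimate comes out. Features are learned
    by the convolutional stack rather than hand-crafted."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                   # automatic feature extraction
            nn.Conv1d(1, 30, kernel_size=10, padding=5), nn.ReLU(),
            nn.Conv1d(30, 40, kernel_size=8, padding=4), nn.ReLU(),
            nn.Conv1d(40, 50, kernel_size=6, padding=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(1024), nn.ReLU(),              # hidden size is illustrative
            nn.Linear(1024, 1),                          # appliance power at the window midpoint
        )

    def forward(self, x):                                # x: (batch, 1, window_len)
        return self.head(self.features(x))

# one batch of 16 mains windows, 599 samples each
model = Seq2PointCNN()
y = model(torch.randn(16, 1, 599))                       # -> (16, 1)
```

Training such a network only requires aligned mains/appliance readings; no event detection or manual feature design is involved, which is the point the citing authors attribute to deep neural network methods [17].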
“…Other than multi-task learning, some researchers have explored the effectiveness of different learning strategies for NILM. For example, Li et al. proposed a self-training multi-task attention method [28], which employs a dual-branch network structure similar to SGN but uses single-task training that accounts for the discrepancy between the network's disaggregation results and the labels. In 2023, Chen et al. incorporated self-supervised learning into NILM [29], where the network is pre-trained on an unlabeled dataset using a self-supervised pretext task and then fine-tuned on a labeled dataset in the downstream task.…”
Citation type: mentioning
confidence: 99%
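The dual-branch, SGN-like structure mentioned for [28] — one branch regressing appliance power, one classifying the on/off state, with the classification output gating the regression output — can be sketched roughly as follows. This is only an illustrative reconstruction of the general SGN-style layout; the shared branch design, layer sizes, and sigmoid gating below are assumptions and not the exact architecture of the cited method:

```python
import torch
import torch.nn as nn

class DualBranchNILM(nn.Module):
    """Illustrative SGN-style dual-branch disaggregator: a regression
    branch predicts appliance power, a classification branch predicts
    the on/off probability, and the final estimate is their product
    (all layer sizes are assumptions)."""

    def __init__(self):
        super().__init__()
        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv1d(1, 30, kernel_size=9, padding=4), nn.ReLU(),
                nn.Conv1d(30, 40, kernel_size=7, padding=3), nn.ReLU(),
                nn.Flatten(),
                nn.LazyLinear(512), nn.ReLU(),
                nn.Linear(512, 1),
            )
        self.regression = branch()        # appliance power estimate
        self.classification = branch()    # on/off logit

    def forward(self, x):                 # x: (batch, 1, window_len)
        power = self.regression(x)
        on_prob = torch.sigmoid(self.classification(x))
        return power * on_prob, on_prob   # gated power, state probability

model = DualBranchNILM()
power_hat, state_prob = model(torch.randn(8, 1, 599))     # each -> (8, 1)
```

Under the same assumptions, the self-supervised strategy described for [29] would reuse such a backbone: pre-train the convolutional layers on unlabeled mains data with a pretext task, then fine-tune the full network on the labeled disaggregation task downstream.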