2021
DOI: 10.3390/en14227609

Power Profile and Thresholding Assisted Multi-Label NILM Classification

Abstract: Next-generation power systems aim at optimizing the energy consumption of household appliances by utilising computationally intelligent techniques, referred to as load monitoring. Non-intrusive load monitoring (NILM) is considered to be one of the most cost-effective methods for load classification. The objective is to segregate the energy consumption of individual appliances from their aggregated energy consumption. The extracted energy consumption of individual devices can then be used to achieve demand-side…

Cited by 5 publications (2 citation statements)
References 50 publications
“…Classic ML algorithms utilized in the reviewed work are Random k-labELset (RAkEL) [11], [29], [34], factorial Hidden Markov Model (fHMM) [26], Random Forest (RF) [11], [35], Sparse Representation based Classification (SRC) [30], Classification And Regression Tree (CART) [35], Extra Tree (ET) [35], k-Nearest Neighbors (kNN) [11], [35], Linear Discriminant Analysis (LDA) [35] and Naïve Bayes (NB) [35].…”
Section: B. Methods for Solving NILM Problems (mentioning)
Confidence: 99%
“…Their findings indicate that Random Forest (RF) outperforms other learning algorithms. Similarly, Rehmani et al. [35] demonstrated that computationally intensive deep learning (DL) algorithms, such as CNN and RNN, were not required for their particular datasets, as classical machine learning algorithms such as kNN and RF already yielded an accuracy of 99%. However, the openly available and well-documented REFIT and UK-DALE datasets, recently used in many reference works as well as in this study, do not perform well with classical machine learning and are instead used with DL models.…”
Section: B. Methods for Solving NILM Problems (mentioning)
Confidence: 99%
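The citation statement above refers to multi-label NILM with classic ML: each appliance gets a binary on/off label, and a classifier predicts the full label vector from features of the aggregate power signal. The sketch below illustrates that framing with kNN and RF on purely synthetic data; the appliance names, power levels, and noise model are illustrative assumptions, not taken from the paper or from REFIT/UK-DALE.

```python
# Hypothetical sketch: multi-label NILM classification with classic ML
# (kNN and Random Forest), as in the reviewed works. All data here is
# synthetic; appliance names and wattages are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Multi-label targets: on/off state of three appliances (kettle, fridge, TV).
n = 2000
states = rng.integers(0, 2, size=(n, 3))
nominal_watts = np.array([2000.0, 150.0, 100.0])

# Aggregate reading = sum of active appliances' draw + measurement noise.
aggregate = states @ nominal_watts + rng.normal(0.0, 10.0, size=n)
X = aggregate.reshape(-1, 1)  # single feature: the aggregate power sample
y = states                    # one binary column per appliance

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Both scikit-learn classifiers accept a 2-D binary label matrix natively,
# so no explicit multi-label wrapper is needed here.
accs = {}
for model in (KNeighborsClassifier(n_neighbors=5),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    name = type(model).__name__
    accs[name] = model.fit(X_tr, y_tr).score(X_te, y_te)  # subset accuracy
    print(name, round(accs[name], 3))
```

On this easily separable toy signal both classifiers score near-perfectly, which mirrors the cited observation that classic ML can suffice when appliance signatures are well separated; on harder datasets such as REFIT or UK-DALE this separability assumption breaks down, motivating DL models.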