2024
DOI: 10.3390/electronics13081420
Comparative Analysis of Machine Learning Techniques for Non-Intrusive Load Monitoring

Noman Shabbir,
Kristina Vassiljeva,
Hossein Nourollahi Hokmabad
et al.

Abstract: Non-intrusive load monitoring (NILM) has emerged as a pivotal technology in energy management applications by enabling precise monitoring of individual appliance energy consumption without the requirement for intrusive sensors or smart meters. In this technique, load disaggregation for individual devices is achieved through recognition of their current signals using machine learning (ML) methods. This research paper conducts a comprehensive comparative analysis of various ML techniques applied to N…
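As a rough illustration of the disaggregation idea sketched in the abstract (recognizing which appliance is active from features of its current signal), the following is a minimal Python sketch using synthetic data and a Random Forest classifier; the features, appliance labels, and model choice are illustrative assumptions and do not reproduce the paper's pipeline.

# Minimal NILM-style sketch: classify the active appliance from
# current-signal features with an off-the-shelf ML classifier.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-window features (e.g., RMS current, peak current, a
# harmonic-distortion proxy); labels: 0 = fridge, 1 = kettle, 2 = TV.
X = rng.normal(size=(600, 3)) + np.repeat(np.arange(3), 200)[:, None]
y = np.repeat(np.arange(3), 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"Appliance classification accuracy: {clf.score(X_test, y_test):.2f}")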

Cited by 4 publications (1 citation statement). References 53 publications (63 reference statements).
“…However, these models often require extensive computational resources, especially when handling large datasets or performing multi-step forecasting. For instance, SVMs can become computationally intensive with large training sets due to their quadratic optimization problem, while ensemble methods such as Random Forests involve training multiple decision trees, which increases computational time linearly with the number of trees and depth [75]. Considering the computational effort estimation in an LSTM network as an example, the time and energy for model training depend on both hardware and software, and the total number of model parameters serves as a useful surrogate.…”
Section: Computational Burden of Load Forecasting Process
confidence: 99%
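To make the parameter-count surrogate mentioned in the quoted passage concrete, here is a minimal Python sketch that counts the trainable parameters of a standard single-layer LSTM followed by a dense output layer; the layer sizes are illustrative assumptions, not values from the cited work.

# Minimal sketch: LSTM parameter count as a rough surrogate for training
# compute. Each of the four gates (input, forget, cell, output) has an
# input-to-hidden weight matrix, a hidden-to-hidden weight matrix, and a
# bias vector, hence the factor of 4.
def lstm_param_count(input_size: int, hidden_size: int, output_size: int) -> int:
    gates = 4
    lstm = gates * (input_size * hidden_size + hidden_size * hidden_size + hidden_size)
    dense = hidden_size * output_size + output_size  # final forecasting layer
    return lstm + dense

if __name__ == "__main__":
    # Example sizes (assumed): 10 input features, 64 hidden units, 1 output.
    print(lstm_param_count(input_size=10, hidden_size=64, output_size=1))  # 19265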