2017 IEEE MIT Undergraduate Research Technology Conference (URTC)
DOI: 10.1109/urtc.2017.8284200
Current peak based device classification in NILM on a low-cost embedded platform using extra-trees

Cited by 11 publications (4 citation statements)
References 9 publications
“…The results obtained by processing the data available on the public BLUED dataset appeared very encouraging. The value obtained for the F1-score was 99.8%, which is higher than that obtained with other systems using the same dataset such as those proposed in [39] (91.5%) and [40] (93.2%).…”
Section: Conclusion and Final Remarks (contrasting)
confidence: 65%
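The citation statements above compare systems by F1-score (99.8% vs. 91.5% and 93.2% on the BLUED dataset). As a minimal illustrative sketch of how such a score is computed, the function below derives F1 from confusion-matrix counts; the counts used in the example are hypothetical placeholders, not values from the cited papers.

```python
# Minimal sketch: computing an F1-score from confusion-matrix counts.
# The example counts are illustrative only, not taken from any cited paper.
def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: 998 true positives, 1 false positive, 1 false negative
print(round(f1_score(998, 1, 1), 3))  # → 0.999
```

A score near 0.998 therefore requires both very few false positives and very few missed detections, since the harmonic mean penalizes imbalance between precision and recall.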
“…The greatest difficulty encountered in the classification phase with the BLUED dataset is attributable to the significantly greater number of devices that the network is required to recognize, compared to those used for the acquired measurements. Furthermore, the value obtained for the F1-score is higher than that obtained with other systems using the same dataset, as those proposed in [36] (0.915) and [37] (0.932).…”
Section: Conclusion and Final Remarks (contrasting)
confidence: 49%
“…The proposed system outperforms all three systems, as when all training data were provided (20 training samples for each scenario), the F1-Score achieved was 99.0%. It is important to note that the systems proposed in previous studies [62,63] were outperformed, even when the system was trained with the minimum number of samples when the system performance was 94.0%. The system is designed for local operation and is thus oriented toward edge implementation.…”
Section: Conclusion and Final Remarks (mentioning)
confidence: 90%
“…The proposed system demonstrated excellent performance, even when trained with a minimum number of samples. In order to provide a comparison against other pre-published literature in the field, works that used similar metrics [62][63][64] were considered. The performances achieved by the cited works, by evaluating the F1-Score, were 91.5%, 93.2%, and 98.0%, respectively.…”
Section: Conclusion and Final Remarks (mentioning)
confidence: 99%