2018 IEEE 2nd International Conference on Dielectrics (ICD) 2018
DOI: 10.1109/icd.2018.8514789
Diagnosis of Power Transformer Oil Using KNN and Naïve Bayes Classifiers

Cited by 15 publications (15 citation statements)
References 8 publications
“…In this regard, various research scholars have undertaken several studies exploring novel approaches to automated fault finding on the electrical grid. For instance, in [24], Benmahamed et al. evaluated the kNN and Naïve Bayes algorithms for diagnosing transformer oil insulation through the analysis of DGA data. Five input vectors, namely the Duval triangle reports, the Doernenburg ratios, the Rogers ratios, and the DGA data (in percentages and in ppm), were used in the study to map to five output classes.…”
Section: Related Work
confidence: 99%
“…where m and n are two observations in Euclidean space. The k-NN algorithm is easy to implement but performs poorly as the number of predictor variables increases [24].…”
confidence: 99%
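The Euclidean-distance kNN rule described in the statement above can be sketched in a few lines of plain Python. This is a minimal illustration, not the cited authors' implementation; the function names and the toy data are hypothetical.

```python
import math
from collections import Counter

def euclidean(m, n):
    """Euclidean distance between two observations m and n."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(m, n)))

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    order = sorted(range(len(X_train)), key=lambda i: euclidean(X_train[i], x))
    votes = Counter(y_train[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Hypothetical toy data: two well-separated classes in 2-D.
X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
y = ["A", "A", "A", "B", "B", "B"]
print(knn_predict(X, y, (0.5, 0.5), k=3))  # → A
```

Note that every prediction requires a distance computation against the whole training set, and in high dimensions all pairwise distances tend to concentrate, which is one way to see why performance degrades as the number of predictor variables grows.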
“…For example, k=3 yields class 1 because the neighborhood contains two squares and only one triangle. By contrast, k=5 yields class 2 because the five nearest samples consist of three triangles and two squares [40]. The distance metrics used for KNN are Euclidean, city block, Chebyshev, Minkowski (cubic), Mahalanobis, cosine, Spearman correlation, Hamming, and Jaccard.…”
Section: K-Nearest Neighbor (KNN) Classifier
confidence: 99%
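The squares-versus-triangles illustration above (the vote flipping between k=3 and k=5) can be reproduced with a small sketch. The point coordinates below are hypothetical, chosen only so that the three nearest neighbors favor class 1 (squares) while the five nearest favor class 2 (triangles).

```python
import math
from collections import Counter

# Hypothetical 2-D training points: label 1 = "square", label 2 = "triangle".
points = [((1.0, 0.0), 1), ((0.0, 2.0), 1),                    # two squares
          ((0.0, 1.5), 2), ((3.0, 0.0), 2), ((0.0, 3.5), 2)]   # three triangles
query = (0.0, 0.0)

def vote(k):
    """Majority class among the k training points nearest the query."""
    ranked = sorted(points, key=lambda p: math.dist(p[0], query))
    return Counter(label for _, label in ranked[:k]).most_common(1)[0][0]

print(vote(3))  # → 1 (two squares, one triangle among the 3 nearest)
print(vote(5))  # → 2 (three triangles, two squares among the 5 nearest)
```

Swapping `math.dist` for another metric (Chebyshev, city block, cosine, and so on) changes only the sort key, which is why KNN implementations typically expose the metric as a parameter.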
“…To improve the performance of KNN, several techniques have been proposed, for example weighting the distance (Gou et al., 2012), weighting the K nearest neighbors, using synthetic data sets, or using the C-means mean classifier (Sarma et al., 2011). Applications involving DGA can be found in Benmahamed et al. (2018) and Samirmi et al. (2013).…”
Section: K Nearest Neighbors (KNN)
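Of the improvements listed in the statement above, distance weighting is the simplest to sketch: each of the k nearest neighbors votes with a weight inversely proportional to its distance, so nearer points count more. This is a generic inverse-distance scheme under assumed toy data, not the specific weighting of Gou et al. (2012).

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Distance-weighted kNN: each of the k nearest neighbors votes with
    weight 1/(d + eps), so closer neighbors dominate the decision."""
    eps = 1e-9  # avoids division by zero when a neighbor coincides with the query
    ranked = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    scores = defaultdict(float)
    for x, label in ranked:
        scores[label] += 1.0 / (math.dist(x, query) + eps)
    return max(scores, key=scores.get)

# Hypothetical data: class "B" wins an unweighted 5-vote (3 vs 2), but the
# two "A" points lie much closer to the query, so weighting flips the result.
train = [((0.0, 0.0), "A"), ((0.1, 0.0), "A"),
         ((2.0, 0.0), "B"), ((2.1, 0.0), "B"), ((2.2, 0.0), "B")]
print(weighted_knn(train, (0.5, 0.0), k=5))  # → A
```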