2018
DOI: 10.3390/en11010185

Feature Reduction for Power System Transient Stability Assessment Based on Neighborhood Rough Set and Discernibility Matrix

Abstract: In machine learning-based transient stability assessment (TSA) problems, the characteristics of the selected features have a significant impact on the performance of classifiers. Due to the high dimensionality of TSA problems, redundancies usually exist in the original feature space, which will deteriorate the performance of classification. To effectively eliminate redundancies and obtain the optimal feature set, a new feature reduction method based on neighborhood rough set and discernibility matrix is proposed…
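
As context for the abstract, the sketch below illustrates the general discernibility-matrix idea from rough set theory on a toy decision table; the data, attribute indices, and core computation are illustrative assumptions, not the paper's actual TSA case study. Entry (i, j) of the matrix holds the condition attributes whose values distinguish samples i and j whenever their decision labels differ, and attributes that appear alone in some entry are indispensable (the core).

```python
import numpy as np

# Toy decision table: rows are samples, columns are discretized condition
# attributes; y holds the decision (stability) labels. All values are
# illustrative placeholders, not data from the paper.
X = np.array([
    [0, 1, 1],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
])
y = np.array([1, 0, 1, 0])

n_samples, n_attrs = X.shape

# Discernibility matrix: for each pair of samples with different decisions,
# record the set of condition attributes on which the two samples differ.
disc = {}
for i in range(n_samples):
    for j in range(i + 1, n_samples):
        if y[i] != y[j]:
            differing = {a for a in range(n_attrs) if X[i, a] != X[j, a]}
            if differing:
                disc[(i, j)] = differing

# Core attributes: those appearing as a single-element entry, i.e. the only
# attribute able to discern at least one pair of samples.
core = {next(iter(s)) for s in disc.values() if len(s) == 1}
print("discernibility entries:", disc)
print("core attributes:", core)
```

A reduct in this framework is a minimal attribute subset that intersects every non-empty entry of the matrix.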

Cited by 14 publications (8 citation statements)
References 34 publications
“…When the "combined feature quantity" method is used to construct the initial transient feature set, three principles are usually considered: the systematic principle, the mainstream principle, and the real-time principle [38,49,54]. That is, the selected feature quantities must satisfy: 1) their dimension does not change as the system changes, so each should be a combined index of the state variables of the components in the system; 2) they are highly correlated with the transient stability state; 3) they can be computed in a timely manner and must represent the state of the system before and after the fault, so that the fault's effect on the system is fully captured. Following these three principles, on the basis of a large number of simulation experiments and a survey of the existing literature [43-45,49,50,54-56], the 32-dimensional transient feature quantities were determined.…”
Section: Transient Feature Set (mentioning)
confidence: 99%
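
A minimal sketch of what such "combined feature quantities" can look like, computed from per-generator state variables so that the feature dimension does not grow with system size. The specific indices, symbols, and numbers below are common examples from the TSA literature and are assumptions for illustration, not the 32-dimensional set determined in the cited work.

```python
import numpy as np

def combined_features(delta, omega, M):
    """Illustrative combined feature quantities built from per-generator
    state variables (rotor angles delta in rad, speed deviations omega in
    p.u., inertia constants M); hypothetical examples, not the paper's
    exact feature set."""
    # Center-of-inertia (COI) reference angle
    delta_coi = np.sum(M * delta) / np.sum(M)
    d = delta - delta_coi
    return {
        "max_angle_dev_coi": np.max(np.abs(d)),     # worst rotor swing
        "mean_angle_dev_coi": np.mean(np.abs(d)),   # average rotor swing
        "total_kinetic_energy": 0.5 * np.sum(M * omega ** 2),
        "max_speed_dev": np.max(np.abs(omega)),
    }

# Hypothetical snapshot at fault clearing for a 4-machine system
feats = combined_features(
    delta=np.array([0.40, 0.55, 0.12, 0.20]),
    omega=np.array([0.010, 0.015, 0.002, 0.004]),
    M=np.array([6.5, 6.5, 6.2, 6.2]),
)
print(feats)
```

Each index stays scalar no matter how many machines the system has, which is what the systematic principle above requires.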
“…However, the model contains nonlinear differential-algebraic equations that are complicated and time-consuming to solve, which makes it difficult to meet the requirements of online calculation (Liu et al., 2020, 2019; Alsafasfeh et al., 2019c,b). Artificial intelligence algorithms can learn the mapping relationship between input data and output, and their computation is fast, so they are used for transient stability assessment to avoid solving complex time-domain equations (Alsafasfeh et al., 2019a; Shakerighadi et al., 2020; Kang et al., 2017; Bhui and Senroy, 2017; Shiwei et al., 2019; Yousefian et al., 2017; Li et al., 2018a; Shetye et al., 2016). Hu et al. (2019) use data preprocessing algorithms such as feature variable selection, cluster analysis, and maximum-entropy discretization to reduce the data dimension, and then apply an association classification method for transient stability evaluation.…”
Section: Introduction (mentioning)
confidence: 99%
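
A minimal sketch of the "mapping relationship" idea from the excerpt above: a classifier is trained offline on simulated feature vectors with stability labels, and online assessment then reduces to a single prediction instead of integrating the differential-algebraic model. The random stand-in data and the choice of scikit-learn's SVC are assumptions for illustration; the cited works use a variety of models.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder offline database: rows stand in for post-fault feature vectors
# from time-domain simulations; labels are 1 = stable, 0 = unstable.
X = rng.normal(size=(500, 32))                    # e.g. 32 transient features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # synthetic labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Offline training: learn the feature -> stability mapping.
clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)

# Online use: one fast prediction per contingency, no time-domain solution.
print("test accuracy:", clf.score(X_test, y_test))
print("prediction for a new case:", clf.predict(X_test[:1]))
```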
“…Rough set theory has become an efficient mathematical tool for attribute reduction, discovering data dependencies and removing redundant attributes from data sets [13,14]. Gu et al. [15] proposed a kernelized fuzzy rough set, but its result depends critically on the choice of control parameters and the design of the objective function.…”
Section: Introduction (mentioning)
confidence: 99%
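
A minimal sketch of the attribute-reduction idea behind classical rough sets, on a toy discrete decision table (the data and attribute indices are illustrative assumptions): samples are grouped into equivalence classes under a subset of condition attributes, classes that fall entirely inside one decision class form the positive region, and the dependency degree γ is the fraction of samples in that region. An attribute whose removal leaves γ unchanged is redundant.

```python
from collections import defaultdict

# Toy decision table with discretized values; purely illustrative.
X = [
    (0, 1, 1),
    (0, 1, 0),
    (1, 0, 1),
    (1, 0, 0),
    (0, 0, 1),
]
y = [1, 1, 0, 0, 1]

def dependency(attrs):
    """Dependency degree gamma: fraction of samples lying in the positive
    region of the decision under the given attribute subset."""
    blocks = defaultdict(list)
    for i, row in enumerate(X):
        blocks[tuple(row[a] for a in attrs)].append(i)   # equivalence classes
    # A class contributes to the positive region if its labels are uniform.
    pos = sum(len(b) for b in blocks.values() if len({y[i] for i in b}) == 1)
    return pos / len(X)

full = dependency([0, 1, 2])
for a in range(3):
    reduced = dependency([b for b in range(3) if b != a])
    verdict = "redundant" if reduced == full else "needed"
    print(f"drop attribute {a}: gamma {reduced:.2f} vs {full:.2f} -> {verdict}")
```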
“…Mu et al. [24] investigated a gene selection method for numerical data sets using the Fisher transformation based on neighborhood rough sets. Li et al. [14] developed a feature reduction method based on neighborhood rough sets and a discernibility matrix, but it assumes that all feature data are available. Moreover, because the global neighborhood in this setting is applied to the whole decision system, i.e., each sample uses the same neighborhood value in every combination of conditional attributes, the method has high time complexity and does not yield the optimal δ value [24].…”
Section: Introduction (mentioning)
confidence: 99%
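
A minimal sketch of the neighborhood rough set notion discussed in the excerpt, on toy numerical data (the data, attribute subsets, and the value of δ are assumptions): under a given attribute subset, a sample's δ-neighborhood contains all samples within Euclidean distance δ in that subspace, and the sample belongs to the positive region when its whole neighborhood shares its decision label. Because a single global δ is reused for every attribute combination, the pairwise distances must be recomputed per subset, which is one source of the time-complexity concern raised above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy continuous features and stability labels; stand-ins, not the paper's data.
X = rng.normal(size=(60, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
delta = 0.8                       # one global neighborhood radius

def positive_region_size(attrs, delta):
    """Count samples whose delta-neighborhood (Euclidean distance restricted
    to the chosen attribute subspace) is pure in the decision label."""
    Z = X[:, attrs]
    dists = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    count = 0
    for i in range(len(X)):
        neigh = np.where(dists[i] <= delta)[0]   # includes the sample itself
        if np.all(y[neigh] == y[i]):
            count += 1
    return count

for attrs in ([0, 1, 2, 3], [0, 1], [2, 3]):
    print(f"attributes {attrs}: positive region "
          f"{positive_region_size(attrs, delta)}/{len(X)}")
```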