2019
DOI: 10.1109/tpwrs.2019.2909150
Data-Driven Learning-Based Optimization for Distribution System State Estimation

Abstract: Distribution system state estimation (DSSE) is a core task for monitoring and control of distribution networks. Widely used algorithms such as Gauss-Newton perform poorly with the limited number of measurements typically available for DSSE, often require many iterations to obtain reasonable results, and sometimes fail to converge. DSSE is a non-convex problem, and working with a limited number of measurements further aggravates the situation, as indeterminacy induces multiple global (in addition to local) minima.
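To make the abstract's premise concrete, below is a minimal, generic Gauss-Newton loop for a nonlinear least-squares state estimate. The measurement map `h` and Jacobian `J` here are a toy two-variable model (not a power-flow model); note that the toy measurements already admit multiple exact solutions, illustrating the indeterminacy the abstract mentions, so convergence depends on the initial point `x0`.

```python
import numpy as np

def gauss_newton(h, J, z, x0, iters=20, tol=1e-8):
    """Generic Gauss-Newton loop minimizing ||z - h(x)||^2.

    h: measurement function R^n -> R^m (toy stand-in here)
    J: its Jacobian, R^n -> R^{m x n}
    z: measurement vector; x0: initial state guess.
    """
    x = x0.astype(float).copy()
    for _ in range(iters):
        r = z - h(x)                       # measurement residual
        dx, *_ = np.linalg.lstsq(J(x), r, rcond=None)  # GN step
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy model: measurements [x0^2 + x1, x1^2], true state [1, 2]
# (x1 = -2 branches also fit z, i.e., multiple minima exist).
h = lambda x: np.array([x[0] ** 2 + x[1], x[1] ** 2])
J = lambda x: np.array([[2 * x[0], 1.0], [0.0, 2 * x[1]]])
z = np.array([3.0, 4.0])
x_hat = gauss_newton(h, J, z, np.array([0.5, 1.5]))
```

From the nearby start `[0.5, 1.5]` the iteration recovers `[1, 2]`; a poor start can land on a different solution branch, which is exactly why initialization matters in the excerpts below.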


Cited by 148 publications (79 citation statements)
References 47 publications
“…The number of neurons representing each bus at the hidden layers is 48, 24, 12, and 6, respectively. Table II shows the average performance of the proposed physics-aware learning approach, the hybrid data-driven and optimization method [11] (SNN + G-N), and the Gauss-Newton (G-N) algorithm over 1000 cases of Scenario A. Simple feed-forward neural-network approaches require large amounts of training data and computational resources.…”
Section: Results
confidence: 99%
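As a rough illustration of the architecture in the excerpt above, here is a forward pass through a plain feed-forward estimator with hidden widths shrinking 48 → 24 → 12 → 6. The input width, output width, and random placeholder weights are all assumptions for the sketch, not the paper's actual trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, sizes):
    """Forward pass of a feed-forward net with the layer widths in
    `sizes`. Weights are random placeholders, purely illustrative."""
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_out, n_in))
        b = np.zeros(n_out)
        x = np.tanh(W @ x + b)   # tanh keeps activations bounded
    return x

# Hypothetical per-bus estimator: 10 measurement features in,
# hidden widths 48/24/12/6, two outputs (e.g., voltage magnitude
# and angle -- an assumed output pairing).
sizes = [10, 48, 24, 12, 6, 2]
out = mlp_forward(rng.normal(size=10), sizes)
```

The shrinking widths act as a funnel from many raw measurements down to a low-dimensional per-bus estimate, which is one plausible reading of the cited design.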
“…It is often challenging to avoid exploding or vanishing gradients while training these feed-forward NNs, so the resulting estimates are less accurate than those of optimization-based approaches. A joint optimization/learning approach was proposed in [11]. Since Gauss-Newton works very well when given a proper initialization, the key is to learn to initialize the Gauss-Newton solver.…”
confidence: 99%
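The "learn to initialize" idea in this excerpt can be sketched in two stages: a small network maps the measurements to a starting state, and a few Gauss-Newton iterations refine it. Everything below is a toy stand-in: the one-layer "initializer" with hand-picked `W`, `b` plays the role of a network trained offline, and `h`, `J` are not a real power-flow model.

```python
import numpy as np

def nn_init(z, W, b):
    """Hypothetical one-layer initializer: maps measurements z to a
    starting state. W, b would be learned offline; placeholders here."""
    return np.tanh(W @ z) + b

def gn_refine(h, J, z, x0, iters=10):
    """A few Gauss-Newton iterations from the learned initial point."""
    x = x0.copy()
    for _ in range(iters):
        dx, *_ = np.linalg.lstsq(J(x), z - h(x), rcond=None)
        x += dx
    return x

# Toy measurement map standing in for power flow; true state [2, 3].
h = lambda x: np.array([x[0] * x[1], x[0] ** 2])
J = lambda x: np.array([[x[1], x[0]], [2 * x[0], 0.0]])
z = np.array([6.0, 4.0])
W = np.zeros((2, 2))
b = np.array([1.5, 2.5])          # "learned" init lands near [2, 3]
x_hat = gn_refine(h, J, z, nn_init(z, W, b))
```

Because the initializer places the start inside Gauss-Newton's basin of attraction, a handful of iterations suffice, which is the speed/robustness benefit the excerpt attributes to [11].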
“…Supervised and transfer learning were applied in [41] to estimate the Pareto front, which is made up of a series of initial values. This task is indeed more difficult than those in [38]–[40]. The numerical tests indicated that such estimation could cause large errors under specific conditions, so further validation and fine-tuning were essential.…”
Section: B Category 2 Optimization Option Selection
confidence: 99%
“…Many studies use machine-learning approaches to estimate a good initial value, which benefits a warm-start algorithm. Reference [38] proposed a "learn to initialize" strategy to improve the Gauss-Newton algorithm. The authors achieved this with a neural network and designed a special loss function (penalizing only the largest errors) to improve the overall performance.…”
Section: B Category 2 Optimization Option Selection
confidence: 99%