2018
DOI: 10.1109/jsyst.2016.2573799

Integrated Strategies of Backtracking Search Optimizer for Solving Reactive Power Dispatch Problem

Abstract: This paper proposes the backtracking search optimizer (BSO) for solving the reactive power dispatch (RPD) problem. The RPD problem is a highly nonlinear, nonconvex optimization problem consisting of both continuous and discrete control variables. It aims to find the optimal settings of the generator voltages, the tap positions of tap-changing transformers, and the amount of reactive compensation that minimize the transmission power losses. BSO has a simple structure and a single control parameter. It has two …
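To illustrate the mechanics the abstract alludes to, the following is a minimal Python sketch of a backtracking-search-style optimizer. It assumes the standard Backtracking Search Algorithm structure that BSO builds on (a historical population, a single mutation-scale control parameter, a random crossover map, and greedy selection); the function names, parameter values, and the toy objective standing in for the RPD power-loss function are illustrative assumptions, not the paper's actual formulation or test systems.

```python
import numpy as np

def bso_minimize(objective, lower, upper, pop_size=30, max_iters=200, seed=0):
    """Minimal backtracking-search-style optimizer (continuous variables only).

    Sketch only: discrete RPD variables (transformer taps, shunt steps) would
    additionally need rounding to their admissible settings, omitted here.
    """
    rng = np.random.default_rng(seed)
    dim = len(lower)
    pop = rng.uniform(lower, upper, (pop_size, dim))      # current population
    old_pop = rng.uniform(lower, upper, (pop_size, dim))  # historical population
    fitness = np.array([objective(x) for x in pop])

    for _ in range(max_iters):
        # Selection-I: occasionally refresh the historical population, then shuffle it
        if rng.random() < rng.random():
            old_pop = pop.copy()
        rng.shuffle(old_pop)

        # Mutation: move relative to the historical population; F is the single control parameter
        F = 3.0 * rng.standard_normal()
        mutant = pop + F * (old_pop - pop)

        # Crossover: each trial keeps a random subset of dimensions from its parent
        cross_map = rng.random((pop_size, dim)) < rng.random()
        trial = np.where(cross_map, pop, mutant)

        # Boundary control: re-sample components that left the feasible box
        out = (trial < lower) | (trial > upper)
        trial[out] = rng.uniform(np.broadcast_to(lower, trial.shape),
                                 np.broadcast_to(upper, trial.shape))[out]

        # Selection-II: greedy replacement of parents by better trials
        trial_fit = np.array([objective(x) for x in trial])
        improved = trial_fit < fitness
        pop[improved] = trial[improved]
        fitness[improved] = trial_fit[improved]

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]

if __name__ == "__main__":
    # Toy usage: a sphere function standing in for the RPD transmission-loss objective.
    lb, ub = np.full(5, -1.0), np.full(5, 1.0)
    x_best, f_best = bso_minimize(lambda x: float(np.sum(x ** 2)), lb, ub)
    print(x_best, f_best)
```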

Cited by 58 publications (38 citation statements)
References 38 publications
“…Due to the powerful ability of MOEAs to find widely distributed POSs by only one simulation run, MOEAs are widely used to solve the MORPD model. While MOEAs include enormous algorithms such as water cycle algorithm (NGBWC) [14], backtracking search optimizer (BSO) [15], whale optimization algorithm (WOA) [16], and grey wolf optimizer (GWO) [17], etc. Furthermore, for better performance, some improvements have been made based on the original algorithms.…”
Section: Introduction (mentioning, confidence: 99%)
“…ORPD problem is a non-convex, complex, and non-linear optimization problem. Thus, many efforts have been introduced for solving the ORPD by applying numerous optimization techniques including the Backtracking Search Optimizer (BSO) [2], Particle Swarm Optimization (PSO) [3], Ant Lion Optimizer (ALO) [4], Improved Ant Lion Optimization algorithm (IALO) [5] , Whale Optimization Algorithm (WOA) [6], Improved Social Spider Optimization Algorithm (ISSO) [7], Differential Evolution (DE) [8], Moth Swarm Algorithm (MSA) [9], Evolutionary Algorithm (EA) [10], Modified Differential Evolution (MDE) [11], Jaya Algorithm (JA) [12], Modified Sine Cosine Algorithm (MSCA) [13], Lightning Attachment Procedure Optimization (LAPO) [14], Gravitational Search Algorithm (GSA) [15], Biogeography-Based Optimization (BBO) [16], Teaching Learning Based Optimization (TLBO) [17], Harmony Search Algorithm (HAS) [17], Grey Wolf Optimizer (GWO) [18], Comprehensive Learning Particle Swarm Optimization (CLPSO) [19], Chemical Reaction Optimization (CRO) [20], Improved Gravitational Search Algorithm (IGSA) [21], Improved Pseudo-Gradient Search Particle Swarm Optimization (IPG-PSO) [22], Firefly Algorithm (FA) [23], Fractional Particle Swarm Optimization Gravitational Search Algorithm [24], hybrid GWO-PSO optimization [25], Oppositional Salp Swarm Algorithm (OSSA) [26], diversity-enhanced particle swarm optimization (DEPSO) [27].…”
Section: Introduction (mentioning, confidence: 99%)
“…The former includes annealing and fuzzy clustering [2], the evolution approach [3], genetic algorithms (GAs) [4], krill herd algorithms [5], particle swarm optimization (PSO) [6], seeker algorithms [7], and whale optimization [8]. The search algorithms contain backtracking [9], cuckoo [10], differential [11], direct [12], and harmony search methods [13]. The optimization algorithms (as conventional methods) involve decomposition methods [1], dynamic programming [14], gradient-based optimization [15], interior-point methods [16], linear approximation [17], mixed integer programming [18,19], quadratic programming [20], teaching-based learning [21], and decision-making algorithms [22].…”
Section: Introduction (mentioning, confidence: 99%)