Abstract: In addition to the need to simultaneously optimize several competing objectives, many real-world problems are also dynamic in nature. Such problems are called dynamic multi-objective optimization problems (DMOPs). Applying evolutionary algorithms to dynamic optimization problems has attracted great attention among researchers. However, most existing work is restricted to the single-objective case. In this work, we propose an adaptive hybrid population management strategy using memory, local search and random strategies…
“…The helpful information offered by the archive can assist in handling neighboring sub-problems through cooperation. Azzouz et al. proposed an adaptive strategy for managing hybrid populations with memory, local search and random strategies to effectively tackle DMOPs, which guarantees rapid convergence and good diversity [45]. Koo et al. proposed a selective memory technique, which selects a partial retrieval based on diversity in the decision space to maintain effective memories [46].…”
Dynamic interval multi-objective optimization problems (DI-MOPs) are very common in real-world applications. However, few evolutionary algorithms to date are suitable for tackling DI-MOPs. This paper presents a framework of dynamic interval multi-objective cooperative co-evolutionary optimization based on interval similarity to handle DI-MOPs. In the framework, a strategy for decomposing decision variables is first proposed, through which all decision variables are divided into two groups according to the interval similarity between each decision variable and the interval parameters. Following that, two sub-populations are utilized to cooperatively optimize the decision variables in the two groups. Furthermore, two response strategies, i.e., a strategy based on the change intensity and a random mutation strategy, are employed to rapidly track the changing Pareto front of the optimization problem. The proposed algorithm is applied to eight benchmark optimization instances as well as a multi-period portfolio selection problem and compared with five state-of-the-art evolutionary algorithms. The experimental results reveal that the proposed algorithm is very competitive on most optimization instances.
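The decomposition step described above can be sketched as follows. The overlap-ratio similarity and mean-threshold split below are illustrative assumptions, since the abstract does not give the paper's exact interval-similarity formula; all function names are ours.

```python
# Sketch of the decision-variable decomposition step: each variable's
# interval is scored against an interval parameter, and variables are
# split into two groups by comparing the score to the mean similarity.
# The overlap/union similarity is a stand-in for the paper's measure.

def interval_similarity(a, b):
    """Overlap / union ratio of two closed intervals a=(lo,hi), b=(lo,hi)."""
    overlap = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = max(a[1], b[1]) - min(a[0], b[0])
    return overlap / union if union > 0 else 1.0

def decompose_variables(var_intervals, param_interval):
    """Split variable indices into two groups by interval similarity."""
    scores = [interval_similarity(v, param_interval) for v in var_intervals]
    mean = sum(scores) / len(scores)
    group_high = [i for i, s in enumerate(scores) if s >= mean]
    group_low = [i for i, s in enumerate(scores) if s < mean]
    return group_high, group_low

# Example: four decision variables, one interval parameter (0.3, 0.7).
variables = [(0.0, 1.0), (0.4, 0.6), (2.0, 3.0), (0.2, 0.8)]
groups = decompose_variables(variables, (0.3, 0.7))
```

Each of the two index groups would then be evolved by its own sub-population, with the two sub-populations cooperating to form complete solutions.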
“…slow convergence and poor diversity when the environment changes. As a result, the authors in [1] proposed an adaptive hybrid population management strategy using memory, local search and random strategies to effectively handle environment dynamicity in DMOPs. The special feature of this algorithm is that it can adjust the number of memory and random solutions to be used according to the change severity.…”
One of the major distinguishing features of Dynamic Multiobjective Optimization Problems (DMOPs) is that the optimization objectives change over time, so tracking the varying Pareto-Optimal Front (POF) becomes a challenge. One promising solution is reusing "experiences" to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the non-independent and identically distributed nature of the data used to construct the prediction model. In this paper, we propose an algorithmic framework, called Tr-DMOEA, which integrates transfer learning and population-based evolutionary algorithms (EAs) to solve DMOPs. This approach exploits transfer learning as a tool to generate an effective initial population pool by reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithm can benefit from this integration without extensive modifications. To verify this idea, we incorporate the proposed approach into the development of three well-known evolutionary algorithms: nondominated sorting genetic algorithm II (NSGA-II), multiobjective particle swarm optimization (MOPSO), and the regularity model-based multiobjective estimation of distribution algorithm (RM-MEDA). We employ twelve benchmark functions to test these algorithms and compare them with chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs. (arXiv:1612.06093v2 [cs.NE], 18 Nov 2017)
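The underlying idea of reusing past experience to seed the next environment's initial population can be illustrated with a deliberately simplified sketch. The Gaussian perturbation and 50/50 reuse ratio here are assumptions standing in for Tr-DMOEA's learned transfer mapping, which is omitted; all names are illustrative.

```python
import random

# Simplified illustration of experience reuse at an environment change:
# part of the new initial population is seeded from (perturbed) members
# of the previous Pareto-optimal set, and the rest is random. Tr-DMOEA
# itself learns a transfer mapping instead of the naive perturbation here.

def seed_population(prev_pos, pop_size, bounds, reuse_ratio=0.5, noise=0.05, rng=None):
    rng = rng or random.Random(0)
    lo, hi = bounds
    pop = []
    # Perturb reused past solutions so the search does not restart cold.
    for x in prev_pos[: int(pop_size * reuse_ratio)]:
        pop.append([min(hi, max(lo, xi + rng.gauss(0, noise))) for xi in x])
    # Fill the remainder with random solutions to keep diversity.
    while len(pop) < pop_size:
        pop.append([rng.uniform(lo, hi) for _ in range(len(prev_pos[0]))])
    return pop

prev = [[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]]   # previous POS (toy data)
init = seed_population(prev, pop_size=6, bounds=(0.0, 1.0))
```

Any population-based MOEA (NSGA-II, MOPSO, RM-MEDA, ...) could then be started from `init` instead of a purely random population, which is the integration point the abstract describes.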
“…It has been proved that when the optimal solution repeatedly returns to a previous position or the environment changes periodically, this algorithm helps save computing time and bias the search process, thus becoming very efficient. In [15], Azzouz et al. proposed an adaptive hybrid population management strategy based on a technique that measures the severity of environmental changes and, accordingly, adjusts the number of memory, local search (LS) and random solutions.…”
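The severity-driven adjustment described in the snippets above can be sketched with a minimal allocation rule. The linear split below is an assumption for illustration; the cited work's actual adjustment formula may differ.

```python
# Sketch of severity-driven population management: when a change is
# mild, favor memory solutions and local search around old solutions;
# when it is severe, favor random immigrants. The linear split is an
# illustrative allocation rule, not the exact formula from the paper.

def allocate(pop_size, severity):
    """severity in [0, 1]: 0 = mild change, 1 = severe change."""
    n_random = round(pop_size * severity)
    n_memory = round(pop_size * (1.0 - severity) * 0.5)
    n_local = pop_size - n_random - n_memory  # local search fills the rest
    return n_memory, n_local, n_random
```

For example, a mild change (`severity=0.0`) keeps the population split between memory retrieval and local search, while a drastic change (`severity=1.0`) replaces it entirely with random solutions.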
Section: Related Work
“…Input: the dynamic multi-objective optimization function F(X)
Output: POSs, the Pareto optimal sets of F(X)
Randomly initialize a population Pop_0
POS_0 = DMOEA(Pop_0)
POSs = POS_0
Train an SVM classifier SC_S using P_g ∈ POS_0 and N_g ∉ POS_0
Randomly generate solutions {xy_1, …, xy_p} of the function F(X)_1
if xy_i passes the recognition of the SVM SC_S then
    put xy_i into Pop_1
end
POS_1 = DMOEA(Pop_1)
POS_t = POS_1
for t = 1 to n do
    PSAMPLES_t = POS_t
    train SC_S using P_g ∈ PSAMPLES_t and N_g ∈ NSAMPLES_t
    randomly generate solutions {xy_1, …, xy_p} of the function F(X)_{t+1}
    if xy_i passes the recognition of the SVM SC_S then
        put xy_i into Pop_{t+1}
    end
    POS_{t+1} = DMOEA(Pop_{t+1})
    POSs = POSs ∪ POS_{t+1}
end
return POSs…”
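The quoted procedure can be sketched in runnable form. To keep the sketch dependency-free, a nearest-centroid rule stands in for the SVM classifier SC_S (an assumption, not the paper's classifier), and the DMOEA call is omitted, so only the classifier-filtered construction of the next initial population is shown.

```python
import random

# Executable sketch of the quoted filtering step. A nearest-centroid
# rule stands in for the (incremental) SVM classifier SC_S; any dynamic
# MOEA would then evolve the filtered population, which is omitted here.

def train_classifier(positives, negatives):
    """Return an accept(x) predicate from positive/negative samples."""
    def centroid(points):
        return [sum(c) / len(points) for c in zip(*points)]
    cp, cn = centroid(positives), centroid(negatives)
    def accepts(x):
        dp = sum((a - b) ** 2 for a, b in zip(x, cp))
        dn = sum((a - b) ** 2 for a, b in zip(x, cn))
        return dp <= dn  # closer to the past-POS centroid => accept
    return accepts

def filtered_initial_population(positives, negatives, n_candidates, dim, rng):
    """Generate random candidates and keep those the classifier accepts."""
    accepts = train_classifier(positives, negatives)
    candidates = [[rng.uniform(0, 1) for _ in range(dim)]
                  for _ in range(n_candidates)]
    return [x for x in candidates if accepts(x)]

rng = random.Random(1)
pos = [[0.1, 0.1], [0.2, 0.2]]   # toy solutions from the past POS
neg = [[0.9, 0.9], [0.8, 0.7]]   # toy dominated (non-POS) solutions
pop = filtered_initial_population(pos, neg, 20, 2, rng)
```

In the actual algorithm each loop iteration retrains SC_S on the latest POS samples, so the filter tracks the moving Pareto optimal set from one environment to the next.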
The main feature of Dynamic Multi-objective Optimization Problems (DMOPs) is that the optimization objective functions change over time or with the environment. One promising approach to solving DMOPs is reusing the obtained Pareto optimal set (POS) to train prediction models via machine learning. In this paper, we train an Incremental Support Vector Machine (ISVM) classifier with the past POS, and the candidate solutions of the DMOP at the next moment are then filtered through the trained ISVM classifier. The classifier thus generates a high-quality initial population, from which a variety of population-based dynamic multi-objective optimization algorithms can benefit. To verify this idea, we incorporate the proposed approach into three evolutionary algorithms: multiobjective particle swarm optimization (MOPSO), nondominated sorting genetic algorithm II (NSGA-II), and the regularity model-based multi-objective estimation of distribution algorithm (RM-MEDA). Experimental results show the effectiveness of the proposed approach.