Complex continuous optimization problems widely exist nowadays due to the fast development of the economy and society. Moreover, technologies like the Internet of Things, cloud computing, and big data also bring more challenges to optimization problems, including Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges, which exist in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are a kind of promising global optimization tool that has not only been widely applied to traditional optimization problems, but has also seen booming research on the above-mentioned complex continuous optimization problems in recent years. To show how promising and efficient EC algorithms are in dealing with the 5-M challenges, this paper presents a comprehensive survey with a novel taxonomy based on the function of the approaches: reducing problem difficulty, increasing algorithm diversity, accelerating convergence speed, reducing running time, and extending the application field. Moreover, some future research directions on using EC algorithms to solve complex continuous optimization problems are proposed and discussed. We believe that such a survey can draw attention, raise discussions, and inspire new ideas of EC research into complex continuous optimization problems and real-world applications.
Particle swarm optimization (PSO) relies on its learning strategy to guide its search direction. Traditionally, each particle utilizes its own historical best experience and its neighborhood's best experience through linear summation. Such a learning strategy is easy to use but inefficient when searching in complex problem spaces. Hence, designing learning strategies that can utilize previous search information (experience) more efficiently has become one of the most salient and active PSO research topics. In this paper, we propose an orthogonal learning (OL) strategy for PSO to discover more useful information that lies in these two experiences via orthogonal experimental design. We name this PSO orthogonal learning particle swarm optimization (OLPSO). The OL strategy can guide particles to fly in better directions by constructing a more promising and efficient exemplar. The OL strategy can be applied to PSO with any topological structure. In this paper, it is applied to both the global and local versions of PSO, yielding the OLPSO-G and OLPSO-L algorithms, respectively. The new learning strategy and the new algorithms are tested on a set of 16 benchmark functions and compared with other PSO algorithms and some state-of-the-art evolutionary algorithms. The experimental results illustrate the effectiveness and efficiency of the proposed learning strategy and algorithms. The comparisons show that OLPSO significantly improves the performance of PSO, offering faster global convergence, higher solution quality, and stronger robustness.
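The exemplar-construction idea behind OL can be sketched as follows. This is a simplified stand-in, not the paper's method: the real OL strategy uses an orthogonal experimental design over dimension combinations, whereas the sketch below greedily tests one dimension at a time; `sphere` and all numeric values are illustrative.

```python
import numpy as np

def ol_exemplar(pbest, nbest, f):
    # Decide dimension by dimension whether the neighborhood's value
    # improves on the particle's own best; keep whichever is better.
    base = f(pbest)
    exemplar = pbest.copy()
    for j in range(len(pbest)):
        trial = pbest.copy()
        trial[j] = nbest[j]
        if f(trial) < base:          # minimization
            exemplar[j] = nbest[j]
    return exemplar

def sphere(x):
    return float(np.sum(x ** 2))

pbest = np.array([1.0, -2.0, 0.5])
nbest = np.array([0.2, 1.5, -0.1])
po = ol_exemplar(pbest, nbest, sphere)
# The single constructed exemplar then replaces the two separate
# guidance terms in the velocity update: v = w*v + c*r*(po - x)
```

The point of the construction is that the exemplar can combine the best dimensions of both experiences, rather than averaging them as the traditional linear summation does.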
There are two common challenges in particle swarm optimization (PSO) research: selecting proper exemplars and designing an efficient learning model for a particle. In this article, we propose a triple-archive PSO (TAPSO), in which particles in three archives are used to deal with these two challenges. First, particles with better fitness (i.e., elites) are recorded in one archive, while particles that offer faster progress, called profiteers in this article, are saved in another archive. Second, when breeding each dimension of a potential exemplar for a particle, we choose an elite and a profiteer from the corresponding archives as two parents to generate the dimension value by ordinary genetic operators. Third, each particle carries out a specific learning model according to the fitness of its potential exemplars. Furthermore, there is no acceleration coefficient in TAPSO, which simplifies the learning models. Finally, if an exemplar performs excellently, it is regarded as an outstanding exemplar and saved in the third archive, from which it can be reused by inferior particles to enhance exploitation and save computing resources. The experimental results and comparisons between TAPSO and eight other PSO variants on 30 benchmark functions and four real applications suggest that TAPSO attains very promising performance on different types of functions, contributing to both higher solution accuracy and faster convergence speed. Furthermore, the effectiveness and efficiency of the newly proposed strategies are discussed based on extensive experiments.
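The per-dimension breeding step can be sketched as below. The choice of arithmetic crossover, the crossover probability, and the sample archives are illustrative assumptions; the abstract only states that ordinary genetic operators combine one elite and one profiteer per dimension.

```python
import random
random.seed(7)

def breed_exemplar(elite_archive, profiteer_archive, dim, pc=0.9):
    # For each dimension, draw one elite and one profiteer as parents
    # and produce the gene by arithmetic crossover (an ordinary genetic
    # operator); otherwise inherit the elite's gene directly.
    exemplar = []
    for j in range(dim):
        elite = random.choice(elite_archive)
        profiteer = random.choice(profiteer_archive)
        if random.random() < pc:
            a = random.random()                       # crossover weight
            exemplar.append(a * elite[j] + (1 - a) * profiteer[j])
        else:
            exemplar.append(elite[j])
    return exemplar

elites = [[0.0, 1.0, 2.0], [0.5, 0.5, 0.5]]           # better fitness
profiteers = [[1.0, 0.0, 1.0], [2.0, 2.0, 2.0]]       # faster progress
exemplar = breed_exemplar(elites, profiteers, dim=3)
```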
Locating more peaks and refining the solution accuracy on the found peaks are two challenging issues in solving multimodal optimization problems (MMOPs). To deal with these two challenges, a distributed-individuals differential evolution (DIDE) algorithm is proposed in this article, based on a distributed individuals for multiple peaks (DIMP) framework and two novel mechanisms. First, the DIMP framework provides sufficient diversity by letting each individual act as a distributed unit to track a peak. Within this framework, each individual uses a virtual population, controlled by an adaptive range adjustment strategy, to explore the search space sufficiently to locate a peak and then gradually approach it. Second, two novel mechanisms, the lifetime mechanism and the elite learning mechanism (ELM), cooperate with the DIMP framework. The lifetime mechanism is inspired by the natural phenomenon that every organism gradually ages and has a limited lifespan. When an individual runs out of its lifetime and has good fitness, it is regarded as an elite solution and added to an archive; the individual then restarts a new lifetime, bringing further diversity to locate more peaks. The ELM refines the accuracy of the elite solutions in the archive, efficiently addressing the solution accuracy issue on the found peaks. The experimental results on 20 multimodal benchmark test functions show that the proposed DIDE algorithm has generally better or competitive performance compared with state-of-the-art multimodal optimization algorithms.
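The base operator on which a per-individual virtual population would build is standard differential evolution. A minimal DE/rand/1/bin sketch, with an illustrative random population (DIDE's virtual-population control and range adjustment are not shown):

```python
import numpy as np
rng = np.random.default_rng(1)

def de_trial(pop, i, f=0.5, cr=0.9):
    # DE/rand/1 mutation over three distinct random individuals,
    # followed by binomial crossover with the target vector pop[i].
    n, d = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(n) if k != i],
                            size=3, replace=False)
    mutant = pop[r1] + f * (pop[r2] - pop[r3])
    cross = rng.random(d) < cr
    cross[rng.integers(d)] = True      # at least one gene from the mutant
    return np.where(cross, mutant, pop[i])

pop = rng.standard_normal((10, 5))     # a small (virtual) population
trial = de_trial(pop, 0)
```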
•An entire population is divided into many parallel evolved sub-swarms in the early stage.
•A dynamic sub-swarm number strategy (DNS) periodically reduces the number of sub-swarms, aiming to balance the exploration and exploitation abilities.
•A sub-swarm regrouping strategy (SRS), which regroups the sub-swarms based on the stagnancy information of the globally best position, is adopted to enhance the exploitation ability.
•A purposeful detecting strategy (PDS), relying on historical information of the search process, helps the population jump out of the current local optimum for better exploration ability.
•The strategies proposed in this paper have general applicability.
This paper proposes a multi-swarm particle swarm optimization (MSPSO) that combines three novel strategies to balance the exploration and exploitation abilities. The proposed MSPSO is based on a multi-swarm framework cooperating with a dynamic sub-swarm number strategy (DNS), a sub-swarm regrouping strategy (SRS), and a purposeful detecting strategy (PDS). Firstly, the DNS divides the entire population into many sub-swarms in the early stage and periodically reduces the number of sub-swarms (i.e., increases the size of each sub-swarm) along with the evolutionary process. This helps balance exploration early and exploitation late. Secondly, within each DNS period with a given number of sub-swarms, the SRS regroups the sub-swarms based on the stagnancy information of the globally best position. This helps diffuse and share search information among different sub-swarms to enhance the exploitation ability. Thirdly, the PDS relies on historical information of the search process to detect whether the population has been trapped in a potential local optimum, so as to help the population jump out of the current local optimum for better exploration ability. The comparisons among MSPSO and 13 other peer algorithms on the CEC2013 test suite and 4 real applications suggest that MSPSO is a very reliable and highly competitive optimization algorithm for solving different types of functions. Furthermore, extensive experimental results illustrate the effectiveness and efficiency of the three proposed strategies used in MSPSO.
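The DNS schedule and the SRS re-shuffling step can be sketched as follows. The stage counts (16 → 8 → 4 → 2 → 1), the equal-length stages, and the unconditional regrouping are illustrative assumptions; MSPSO's actual period and stagnancy trigger may differ.

```python
import random

def subswarm_count(progress, counts=(16, 8, 4, 2, 1)):
    # Map run progress in [0, 1] to a stage; later stages use fewer,
    # larger sub-swarms (exploration early, exploitation late).
    stage = min(int(progress * len(counts)), len(counts) - 1)
    return counts[stage]

def regroup(indices, n_swarms, rng):
    # Shuffle all particle indices and deal them round-robin into
    # n_swarms sub-swarms (the SRS re-shuffling step).
    shuffled = list(indices)
    rng.shuffle(shuffled)
    return [shuffled[k::n_swarms] for k in range(n_swarms)]

rng = random.Random(0)
groups = regroup(range(32), subswarm_count(0.1), rng)
```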
The application of multiobjective evolutionary algorithms to many-objective optimization problems often faces challenges in terms of diversity and convergence. On the one hand, with a limited population size, it is difficult for an algorithm to cover different parts of the whole Pareto front (PF) in a large objective space; the algorithm tends to concentrate on only limited areas. On the other hand, as the number of objectives increases, solutions easily have poor values on some objectives, which can be regarded as poor bottleneck objectives that restrict solutions' convergence to the PF. Thus, we propose a coevolutionary particle swarm optimization with a bottleneck objective learning (BOL) strategy for many-objective optimization. In the proposed algorithm, multiple swarms coevolve in a distributed fashion to maintain diversity for approximating different parts of the whole PF, and a novel BOL strategy is developed to improve convergence on all objectives. In addition, we develop a solution reproduction procedure with both an elitist learning strategy (ELS) and a juncture learning strategy (JLS) to improve the quality of archived solutions. The ELS helps the algorithm jump out of local PFs, and the JLS helps reach areas of the PF that are easily missed by the swarms. The performance of the proposed algorithm is evaluated on two widely used test suites with different numbers of objectives. Experimental results show that the proposed algorithm compares favorably with six other state-of-the-art algorithms on many-objective optimization.
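One plausible way to identify a solution's bottleneck objective, sketched under the assumption that objectives are normalized against known ideal and nadir points; the BOL learning strategy itself, not shown here, would then direct extra search effort at that objective.

```python
def bottleneck_objective(objs, ideal, nadir):
    # Normalize each objective into [0, 1] and return the index of the
    # worst (largest) one -- the objective holding convergence back.
    norm = [(v - lo) / (hi - lo + 1e-12)
            for v, lo, hi in zip(objs, ideal, nadir)]
    return max(range(len(norm)), key=norm.__getitem__)

# A solution close to the ideal on objectives 0 and 2 but poor on 1:
idx = bottleneck_objective([0.1, 0.9, 0.2],
                           ideal=[0.0, 0.0, 0.0],
                           nadir=[1.0, 1.0, 1.0])
```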
The key challenge of expensive optimization problems (EOPs) is that evaluating the true fitness value of a solution is computationally expensive. A common way to deal with this issue is to seek a less expensive surrogate model to replace the original expensive objective function. However, this method also brings in model approximation error. To solve EOPs efficiently, a novel scale-adaptive fitness evaluation (SAFE) method is proposed in this article to directly evaluate the true fitness value of a solution on the original objective function. To reduce the computational cost, the SAFE method uses a set of evaluation methods (EMs) with different accuracy scales to cooperatively complete the fitness evaluation process. The basic idea is to adopt a low-accuracy-scale EM to quickly locate promising regions and a high-accuracy-scale EM to refine the solution accuracy. To this end, two EM switch strategies are proposed in the SAFE method to adaptively control the multiple EMs according to different evolutionary stages and search requirements. Moreover, a neighbor best-based evaluation (NBE) strategy is put forward to evaluate a solution according to its nearest high-quality evaluated solution, which can further reduce computational cost. Extensive experiments are carried out on a case study of the crowdshipping scheduling problem in the smart city to verify the effectiveness and efficiency of the proposed SAFE method, and to investigate the effects of the two EM switch strategies and the NBE strategy. Experimental results show that the proposed SAFE method achieves better solution quality than some baseline and state-of-the-art algorithms, indicating an efficient method for solving EOPs with a better balance between solution accuracy and computational cost.
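A stage-based EM switch, one of the two switch strategies the abstract mentions, can be sketched as below. The Monte Carlo objective (estimating E[(x − z)²] with z ~ N(0, 1)) and the fixed switch point are illustrative assumptions standing in for a real expensive objective whose accuracy scale is its sample budget.

```python
import numpy as np
rng = np.random.default_rng(2)

def make_em(n_samples):
    # Build an evaluation method whose accuracy scale is its sample
    # count: more samples -> more accurate but more expensive.
    def em(x):
        z = rng.standard_normal(n_samples)
        return float(np.mean((x - z) ** 2))
    return em

coarse_em = make_em(10)        # cheap, low-accuracy scale
fine_em = make_em(10_000)      # expensive, high-accuracy scale

def evaluate(x, progress, switch_at=0.5):
    # Stage-based switch: coarse EM early to locate promising regions,
    # fine EM late to refine solution accuracy.
    return coarse_em(x) if progress < switch_at else fine_em(x)
```

The true value of E[(0 − z)²] is 1, so the fine EM's estimate lands close to 1 while the coarse EM trades accuracy for a 1000× smaller sample budget.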
Adaptive Particle Swarm Optimization. Zhi-Hui Zhan; Jun Zhang; Yun Li.
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 12/2009, Volume 39, Issue 6.
Journal Article; Peer reviewed; Open access.
An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with faster convergence speed. The APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify one of four defined evolutionary states, namely exploration, exploitation, convergence, and jumping out, in each generation. This enables the automatic control of inertia weight, acceleration coefficients, and other algorithmic parameters at run time to improve the search efficiency and convergence speed. Then, an elitist learning strategy is performed when the evolutionary state is classified as the convergence state. The strategy acts on the globally best particle to help it jump out of likely local optima. The APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. As APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
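The parameter-adaptation step can be sketched as follows: a distance-based evolutionary factor summarizes the population distribution, and a sigmoid maps it to the inertia weight (large factor, spread-out population → exploration → large w). The exact factor definition and mapping constants here are one common formulation and should be checked against the paper.

```python
import numpy as np

def evolutionary_factor(positions, g):
    # Mean distance from each particle to all others; the factor
    # compares the globally best particle's mean distance d[g] with the
    # population's extremes, yielding a value in [0, 1].
    d = np.array([np.mean(np.linalg.norm(positions - p, axis=1))
                  for p in positions])
    dmin, dmax = d.min(), d.max()
    return float((d[g] - dmin) / (dmax - dmin + 1e-12))

def inertia_weight(f):
    # Sigmoid mapping keeping w roughly within [0.4, 0.9].
    return 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))
```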
Due to the increasing complexity of optimization problems, distributed differential evolution (DDE) has become a promising approach for global optimization. However, similar to centralized algorithms, DDE also faces the difficulty of strategy selection and parameter setting. To deal with such problems effectively, this article proposes an adaptive DDE (ADDE) to relieve the sensitivity to strategies and parameters. In ADDE, three populations, called the exploration population, the exploitation population, and the balance population, are co-evolved concurrently using a master-slave multipopulation distributed framework. Different populations adaptively choose their suitable mutation strategies based on evolutionary state estimation to make full use of the feedback information from both individuals and the whole corresponding population. Besides, the historical successful experience and best solution improvement are collected and used to adaptively update the individual parameters (amplification factor F and crossover rate CR) and the population parameter (population size N), respectively. The performance of ADDE is evaluated on all 30 widely used benchmark functions from the CEC 2014 test suite and all 22 widely used real-world application problems from the CEC 2011 test suite. The experimental results show that ADDE has great superiority compared with other state-of-the-art DDE and adaptive differential evolution variants.
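Adapting F and CR from historical successful experience is commonly done JADE-style; a sketch under that assumption (the abstract does not give ADDE's exact update rule):

```python
import numpy as np

def lehmer_mean(values):
    # Lehmer mean biases the estimate toward larger F values, which
    # helps preserve mutation strength.
    v = np.asarray(values, dtype=float)
    return float(np.sum(v ** 2) / np.sum(v))

def update_params(mu_f, mu_cr, success_f, success_cr, c=0.1):
    # Nudge the location parameters of F and CR toward the values that
    # produced successful offspring in the last generation.
    if len(success_f) > 0:
        mu_f = (1 - c) * mu_f + c * lehmer_mean(success_f)
    if len(success_cr) > 0:
        mu_cr = (1 - c) * mu_cr + c * float(np.mean(success_cr))
    return mu_f, mu_cr

mu_f, mu_cr = update_params(0.5, 0.5, [0.9, 0.8], [0.7, 0.9])
```

New per-individual F and CR values would then be sampled around mu_f and mu_cr each generation.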
Two phenomena are observed in human society and biological systems. One is that people prefer to extract knowledge from multiple exemplars to obtain better learning ability. The other is the forgetting ability that helps the encoding and consolidation of new information by removing unused or unwanted memories. Inspired by these phenomena, this paper transplants the multi-exemplar and forgetting abilities into particle swarm optimization (PSO) and proposes an eXpanded PSO, called XPSO. Firstly, XPSO expands the "social-learning" part of each particle from one exemplar to two, learning from both the locally and the globally best exemplars. Secondly, XPSO assigns different forgetting abilities to different particles, simulating the forgetting phenomenon in human society. Under the multi-exemplar learning model with forgetting ability, XPSO further adopts an adaptive scheme to update the acceleration coefficients and a reselection mechanism to update the population topology. The effectiveness of these additional strategies is verified by extensive experiments. Moreover, comparison results among XPSO, 9 other popular PSO algorithms, and 3 non-PSO algorithms on the CEC'13 test suite suggest that XPSO attains very promising performance for solving different types of functions, contributing to both higher solution accuracy and faster convergence speed.
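The expanded social-learning part can be sketched as a velocity update with two social exemplars plus a forgetting term. The coefficient values and the per-dimension drop mechanism are illustrative assumptions, not XPSO's exact scheme.

```python
import numpy as np
rng = np.random.default_rng(3)

def xpso_velocity(v, x, pbest, lbest, gbest,
                  w=0.7, c1=1.5, c2=1.0, c3=1.0, forget=0.1):
    # Social learning expanded to two exemplars (local and global best);
    # a per-dimension mask randomly "forgets" the social guidance with
    # probability `forget`, modeling each particle's forgetting ability.
    d = len(x)
    keep = (rng.random(d) >= forget).astype(float)
    cognitive = c1 * rng.random(d) * (pbest - x)
    social = (c2 * rng.random(d) * (lbest - x)
              + c3 * rng.random(d) * (gbest - x))
    return w * v + cognitive + keep * social

v = xpso_velocity(np.zeros(4), np.zeros(4),
                  np.ones(4), 2 * np.ones(4), 3 * np.ones(4))
```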