This book presents an interesting sample of the latest advances in optimization techniques applied to electrical power engineering. It covers a variety of topics from various fields, ranging from classical optimization, such as Linear and Nonlinear Programming and Integer and Mixed-Integer Programming, to the most modern methods based on bio-inspired metaheuristics. The featured papers invite readers to delve further into emerging optimization techniques and their application to real case studies, such as conventional and renewable energy generation, distributed generation, transmission and distribution of electrical energy, electrical machines and power electronics, network optimization, intelligent systems, and advances in electric mobility.
One of the major distinguishing features of dynamic multiobjective optimization problems (DMOPs) is that the optimization objectives change over time, so tracking the varying Pareto-optimal front becomes a challenge. One promising solution is to reuse "experiences" to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the non-independent and identically distributed nature of the data used to construct the prediction model. In this paper, we propose an algorithmic framework, called the transfer-learning-based dynamic multiobjective evolutionary algorithm (EA), which integrates transfer learning with population-based EAs to solve DMOPs. The approach exploits transfer learning as a tool to generate an effective initial population pool by reusing past experience, thereby speeding up the evolutionary process; at the same time, any population-based multiobjective algorithm can benefit from this integration without extensive modification. To verify this idea, we incorporate the proposed approach into three well-known EAs: the nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity-model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms and compare them with chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs.
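The core idea of reusing past solutions to seed the initial population after an environment change can be sketched as follows. This helper is a hypothetical simplification: the paper's actual method learns a transfer mapping between old and new environments, rather than copying past solutions verbatim as done here.

```python
import random

def seeded_initial_population(past_solutions, pop_size, n_vars, bounds=(0.0, 1.0)):
    """Build an EA's initial population from solutions of a previous
    environment, topping up with random individuals if too few are reused.
    A stand-in for a learned transfer-learning mapping (assumption)."""
    lo, hi = bounds
    # Reuse as many past solutions as fit into the population.
    pop = [list(s) for s in past_solutions[:pop_size]]
    # Fill the remainder with uniformly random individuals.
    while len(pop) < pop_size:
        pop.append([random.uniform(lo, hi) for _ in range(n_vars)])
    return pop
```

Any population-based EA can then start from this pool instead of a purely random one, which is what lets the framework wrap existing algorithms without modifying their internals.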
A Multi-Facet Survey on Memetic Computation
Chen, Xianshun; Ong, Yew-Soon; Lim, Meng-Hiot ...
IEEE Transactions on Evolutionary Computation, October 2011, Volume 15, Issue 5
Journal Article, Peer-reviewed
Memetic computation is a paradigm that uses the notion of meme(s) as units of information, encoded in computational representations, for the purpose of problem solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks, and operational algorithms, including simple hybrids, adaptive hybrids, and the memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.
The Arithmetic Optimization Algorithm
Abualigah, Laith; Diabat, Ali; Mirjalili, Seyedali ...
Computer Methods in Applied Mechanics and Engineering, April 2021, Volume 376
Journal Article, Peer-reviewed, Open access
This work proposes a new meta-heuristic method called the Arithmetic Optimization Algorithm (AOA), which utilizes the distribution behavior of the main arithmetic operators in mathematics: multiplication (M), division (D), subtraction (S), and addition (A). AOA is mathematically modeled and implemented to perform optimization in a wide range of search spaces. Its performance is checked on twenty-nine benchmark functions and several real-world engineering design problems to showcase its applicability. The performance, convergence behavior, and computational complexity of the proposed AOA are evaluated under different scenarios. Experimental results show that AOA provides very promising results on challenging optimization problems compared with eleven other well-known optimization algorithms. Source codes of AOA are publicly available at and .
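A minimal sketch of the AOA update scheme is given below, assuming the commonly cited defaults for the math-optimizer-accelerated function (MOA), the math-optimizer probability (MOP), the sensitivity parameter alpha, and the control parameter mu. These constants and the exact update form are assumptions based on the standard description of AOA, not the authors' reference implementation.

```python
import random

def aoa(obj, dim, lb, ub, pop=20, iters=200, alpha=5, mu=0.5,
        moa_min=0.2, moa_max=1.0, eps=1e-12, seed=0):
    """Sketch of the Arithmetic Optimization Algorithm (assumed defaults).
    Exploration uses division/multiplication; exploitation uses
    subtraction/addition around the best-so-far solution."""
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    best = min(X, key=obj)[:]
    for t in range(1, iters + 1):
        moa = moa_min + t * (moa_max - moa_min) / iters  # shifts toward exploitation
        mop = 1 - (t / iters) ** (1 / alpha)             # shrinking step factor
        for i in range(pop):
            new = []
            for j in range(dim):
                r1, r2, r3 = rng.random(), rng.random(), rng.random()
                span = (ub - lb) * mu + lb
                if r1 > moa:   # exploration phase: division or multiplication
                    v = best[j] / (mop + eps) * span if r2 > 0.5 else best[j] * mop * span
                else:          # exploitation phase: subtraction or addition
                    v = best[j] - mop * span if r3 > 0.5 else best[j] + mop * span
                new.append(min(max(v, lb), ub))          # clamp to the search bounds
            if obj(new) < obj(X[i]):                     # greedy acceptance
                X[i] = new
        best = min(X + [best], key=obj)[:]
    return best, obj(best)
```

Because the incumbent best is carried over each iteration, the returned objective value never worsens over the run.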
This paper proposes a hybrid optimization technique combining the genetic and exchange market algorithms, two evolutionary algorithms that facilitate finding optimal solutions for different optimization problems. The genetic algorithm's high execution time decreases its efficiency; however, because of its strength in exploring the solution space, it can be combined with a proper exploitation-based algorithm to improve optimization efficiency. The exchange market algorithm is an optimization algorithm that can efficiently find the global optimum of an objective function. Mirroring the inherent dynamics of trading, the simulated stock market operates in balanced and unbalanced modes, and shareholders make specific decisions based on the existing conditions in order to maximize profit. The exchange market algorithm has two searching and two absorbent operators for acquiring the best simulated form of the stock market. Simulations on twelve benchmarks with different dimensions and variables prove the effectiveness of this algorithm compared to eight optimization algorithms.
Sparse representation has attracted much attention from researchers in signal processing, image processing, computer vision, and pattern recognition, and it has a strong reputation in both theoretical research and practical applications. Many different algorithms have been proposed for sparse representation. The main purpose of this paper is to provide a comprehensive study and an updated review of sparse representation and to supply guidance for researchers. The taxonomy of sparse representation methods can be studied from various viewpoints. For example, in terms of the norm minimization used in the sparsity constraint, the methods can be roughly categorized into five groups: 1) sparse representation with l0-norm minimization; 2) sparse representation with lp-norm (0 < p < 1) minimization; 3) sparse representation with l1-norm minimization; 4) sparse representation with l2,1-norm minimization; and 5) sparse representation with l2-norm minimization. The available sparse representation algorithms can also be empirically categorized into four groups: 1) greedy strategy approximation; 2) constrained optimization; 3) proximity-algorithm-based optimization; and 4) homotopy-algorithm-based sparse representation. The rationales of the different algorithms in each category are analyzed, and a wide range of sparse representation applications are summarized, revealing the potential of sparse representation theory. In particular, an experimental comparative study of these sparse representation algorithms is presented.
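As a concrete instance of the proximity-algorithm family, the iterative soft-thresholding algorithm (ISTA) solves the l1-regularized problem min_x 0.5*||Ax - y||^2 + lam*||x||_1. The sketch below is a textbook version of ISTA, not code from the surveyed paper.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for l1-regularized least squares.
    Each step is a gradient step on the smooth term followed by the
    proximal operator of the l1 penalty (soft thresholding)."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)          # gradient of 0.5*||Ax - y||^2
        z = x - g / L                  # gradient step
        # Soft threshold: shrink toward zero, zeroing small entries.
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

The soft-thresholding step is what produces exact zeros in the solution, which is why l1-norm minimization yields sparse representations while plain l2 regularization does not.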
Traditionally, evolutionary algorithms (EAs) have been systematically developed to solve mono-, multi-, and many-objective optimization problems, in that order. Despite some efforts to unify different types of mono-objective evolutionary and non-evolutionary algorithms, comparatively little attention has been paid to unifying all three classes of optimization problems. Such a unified algorithm would allow users to work with a single software platform, enabling one-time implementation of solution representation, operators, objectives, and constraint formulations across several objective dimensions. For the first time, we propose a unified evolutionary optimization algorithm for solving all three classes of problems, based on the recently proposed elitist, guided nondominated sorting procedure developed for solving many-objective problems. Using a new niching-based selection procedure, the proposed unified algorithm automatically degenerates to an efficient equivalent population-based algorithm for each class, with no extra parameters needed. Extensive simulations are performed on unconstrained and constrained test problems having single, two, multiple, and many objectives, and on two engineering optimization design problems. The performance of the unified approach is compared to suitable population-based counterparts at each dimensional level. The results amply demonstrate the merit of the proposed unified approach and motivate similar studies toward a richer understanding of the development of optimization algorithms.
Test case generation is intrinsically a multi-objective problem, since the goal is to cover multiple test targets (e.g., branches). Existing search-based approaches either consider one target at a time or aggregate all targets into a single fitness function (the whole-suite approach). Multi- and many-objective optimisation algorithms (MOAs) have never been applied to this problem, because existing algorithms do not scale to the number of coverage objectives typically found in real-world software. In addition, the final goal of MOAs is to find alternative trade-off solutions in the objective space, while in test generation the interesting solutions are only those test cases covering one or more uncovered targets. In this paper, we present the Dynamic Many-Objective Sorting Algorithm (DynaMOSA), a novel many-objective solver specifically designed to address test case generation in the context of coverage testing. DynaMOSA extends our previous many-objective technique, the Many-Objective Sorting Algorithm (MOSA), with dynamic selection of coverage targets based on the control-dependency hierarchy. This extension makes the approach more effective and efficient when the search budget is limited. We carried out an empirical study on 346 Java classes using three coverage criteria (statement, branch, and strong mutation coverage) to assess the performance of DynaMOSA with respect to the whole-suite approach (WS), its archive-based variant (WSA), and MOSA. The results show that DynaMOSA outperforms WSA in 28 percent of the classes for branch coverage (+8 percent more coverage on average) and in 27 percent of the classes for mutation coverage (+11 percent more killed mutants on average). It outperforms WS in 51 percent of the classes for statement coverage, leading to +11 percent more coverage on average. Moreover, DynaMOSA outperforms its predecessor MOSA on all three coverage criteria in 19 percent of the classes, with +8 percent more code coverage on average.
The grey wolf optimizer (GWO) is a very efficient metaheuristic inspired by the social hierarchy of Canis lupus wolves, and it has been extensively applied to a variety of practical problems. The crow search algorithm (CSA) is a recently proposed metaheuristic that mimics the intelligent behavior of crows. In this paper, a hybrid of GWO and CSA, named GWOCSA, is proposed, which effectively combines the strengths of both algorithms with the aim of generating promising candidate solutions and reaching the global optimum efficiently. To validate the competence of the proposed hybrid, a widely utilized set of 23 benchmark test functions with a wide range of dimensions and varied complexities is used, and the results obtained by the proposed algorithm are compared to those of 10 other algorithms. The statistical results demonstrate that GWOCSA outperforms the other algorithms, including the recent GWO variants known as the enhanced grey wolf optimizer (EGWO) and the augmented grey wolf optimizer (AGWO), in terms of local optima avoidance and convergence speed. Furthermore, to demonstrate its applicability to complex real-world problems, GWOCSA is also employed for feature selection. As a feature selection approach, GWOCSA is tested on 21 widely employed data sets acquired from the University of California at Irvine repository, and the experimental results are compared to state-of-the-art feature selection techniques, including the native GWO, EGWO, and AGWO. The results reveal that GWOCSA has comprehensive superiority in solving the feature selection problem, which proves the capability of the proposed algorithm on real-world complex problems.
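The GWO half of the hybrid can be illustrated with the standard position-update rule, in which each wolf moves toward an average of positions dictated by the three best wolves (alpha, beta, delta). The sketch below is an elitist simplification (the three leaders stay in place each iteration) of plain GWO, not the GWOCSA hybrid itself.

```python
import random

def gwo(obj, dim, lb, ub, pop=20, iters=100, seed=0):
    """Sketch of the grey wolf optimizer with an elitist twist:
    the alpha, beta, and delta wolves are kept fixed each iteration
    while the rest of the pack moves toward them."""
    rng = random.Random(seed)
    wolves = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    for t in range(iters):
        wolves.sort(key=obj)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2 - 2 * t / iters              # coefficient decays from 2 to ~0
        for i in range(3, pop):            # leaders (indices 0-2) stay put
            new = []
            for j in range(dim):
                guided = []
                for leader in (alpha, beta, delta):
                    A = a * (2 * rng.random() - 1)   # in [-a, a]
                    C = 2 * rng.random()             # in [0, 2]
                    d = abs(C * leader[j] - wolves[i][j])
                    guided.append(leader[j] - A * d)
                # Average the three leader-guided positions, clamp to bounds.
                new.append(min(max(sum(guided) / 3, lb), ub))
            wolves[i] = new
    wolves.sort(key=obj)
    return wolves[0], obj(wolves[0])
```

As `a` decays, the random coefficient `A` shrinks, shifting the pack from exploration (large jumps past the leaders) to exploitation (tight encirclement of the best solutions), which is the behavior GWOCSA seeks to combine with CSA's search operators.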