Scalable multi-objective optimization test problems
Deb, K.; Thiele, L.; Laumanns, M.; et al.
Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02 (Cat. No. 02TH8600), 2002, Volume: 1
Conference Proceeding
After adequately demonstrating the ability to solve different two-objective optimization problems, multi-objective evolutionary algorithms (MOEAs) must show their efficacy in handling problems having more than two objectives. In this paper, we suggest three different approaches for systematically designing test problems for this purpose. The simplicity of construction, scalability to any number of decision variables and objectives, knowledge of the exact shape and location of the resulting Pareto-optimal front, and the ability to control difficulties both in converging to the true Pareto-optimal front and in maintaining a widely distributed set of solutions are the main features of the suggested test problems. Because of these features, they should be useful in various research activities on MOEAs, such as testing the performance of a new MOEA, comparing different MOEAs, and gaining a better understanding of the working principles of MOEAs.
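As an illustration, a scalable test problem in the spirit described above (a DTLZ2-style construction, where the Pareto-optimal front is the unit sphere) can be evaluated as follows. This is a minimal sketch, assuming minimization over variables in [0,1]; the function and variable names are ours:

```python
import math

def dtlz2(x, n_obj):
    """DTLZ2-style test function (sketch).

    x: decision vector in [0,1]^n with n >= n_obj.
    Returns n_obj objective values. The Pareto-optimal front is the
    unit sphere sum(f_i^2) = 1, reached when the last n - n_obj + 1
    variables all equal 0.5 (making g = 0).
    """
    k = len(x) - n_obj + 1
    # g controls the distance to the front: zero on the front itself
    g = sum((xi - 0.5) ** 2 for xi in x[-k:])
    f = []
    for i in range(n_obj):
        v = 1.0 + g
        # product of cosines over the position-related variables
        for j in range(n_obj - 1 - i):
            v *= math.cos(x[j] * math.pi / 2)
        # one sine factor for every objective except the first
        if i > 0:
            v *= math.sin(x[n_obj - 1 - i] * math.pi / 2)
        f.append(v)
    return f
```

Because the front's shape is known in closed form, any point with g = 0 can be checked against sum(f_i^2) = 1, which is exactly the kind of verification the abstract advertises.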
In this work we compare several new computational approaches to an inventory routing problem, in which a single product is shipped from a warehouse to retailers via an uncapacitated vehicle. We survey exact algorithms for the Traveling Salesman Problem (TSP) and its relaxations in the literature for the routing component. For the inventory control component we survey classical mixed-integer linear programming and shortest path formulations for inventory models. We present a numerical study comparing combinations of the two empirically in terms of cost and solution time.
This paper addresses a production planning setting for pharmaceutical companies under the risk of failing quality inspections that are undertaken by the regulatory authorities to ensure good manufacturing practices. A staged decision model is proposed where the regulatory authority is considered an adversary with a limited inspection budget, and the chosen inspections themselves have uncertain outcomes. Compact formulations for the expected revenue and the worst-case revenue as risk measures are given, as well as a proof that the simplest version of the production planning problem under uncertainty is NP-complete. Some computational results are given to demonstrate the performance of the different formulations.
This paper presents the first convergence result for random search algorithms to a subset of the Pareto set of given maximum size k with bounds on the approximation quality. The core of the algorithm is a new selection criterion based on a hypothetical multilevel grid on the objective space. It is shown that, when using this criterion for accepting new search points, the sequence of solution archives converges with probability one to a subset of the Pareto set that epsilon-dominates the entire Pareto set. The obtained approximation quality epsilon is equal to the size of the grid cells on the finest level of resolution that allows an approximation with at most k points within the family of grids considered. While the convergence result is of general theoretical interest, the archiving algorithm might be of high practical value for any type of iterative multiobjective optimization method, such as evolutionary algorithms or other metaheuristics, which all rely on a finite on-line memory to store the best solutions found so far as the current approximation of the Pareto set.
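To make the grid-based acceptance idea concrete, here is a box-based epsilon-archive sketch for minimization. This is our own simplification, not the paper's exact multilevel-grid algorithm: points are mapped to grid cells ("boxes") of side epsilon, at most one representative per box is kept, and dominated boxes are discarded. All names are ours:

```python
def box(f, eps):
    # Map an objective vector to its grid cell at resolution eps.
    return tuple(int(fi // eps) for fi in f)

def dominates(a, b):
    # Pareto dominance for minimization (component-wise).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, f, eps):
    """Offer objective vector f to a box-based epsilon-archive.

    archive: dict mapping box -> representative objective vector.
    The surviving archive epsilon-dominates every point offered,
    because accepting/rejecting is decided on the grid level.
    """
    b = box(f, eps)
    for bb, ff in archive.items():
        if dominates(bb, b):
            return archive          # f's box is dominated: reject f
        if bb == b and not dominates(f, ff):
            return archive          # same box, old representative kept
    # f is accepted: drop boxes its box dominates (and its own box)
    archive = {bb: ff for bb, ff in archive.items()
               if not (dominates(b, bb) or bb == b)}
    archive[b] = f
    return archive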
Modeling decision-dependent scenario probabilities in stochastic programs is difficult and typically leads to large and highly non-linear MINLPs that are very difficult to solve. In this paper, we develop a new approach to obtain a compact representation of the recourse function using a set of binary decision diagrams (BDDs) that encode a nested cover of the scenario set. The resulting BDDs can then be used to efficiently characterize the decision-dependent scenario probabilities by a set of linear inequalities, which essentially factorizes the probability distribution and thus allows the entire problem to be reformulated as a small mixed-integer linear program. The approach is applicable to a large class of stochastic programs with multivariate binary scenario sets, such as stochastic network design, network reliability, or stochastic network interdiction problems. Computational results show that the BDD-based scenario representation reduces the problem size, and hence the computation time, significantly compared to previous approaches.
This paper shows how to compute optimal control policies for a certain class of supply networks via a combination of stochastic dynamic programming and parametric programming. We consider supply networks where the dynamics of the material and information flows within the entire network can be expressed by a system of first-order difference equations and where some inputs to the system act as external disturbances. Assuming piecewise linear costs on state and control inputs, optimal control policies are computed for a risk-neutral objective function using the expected cost and for a risk-averse objective function using the worst-case cost. The obtained closed-loop control policies are piecewise-affine and continuous functions of the state variables, representing a generalization of the common order-up-to policies. The optimal value functions are piecewise affine and convex, which is the essential structural property that allows for the solution via a sequence of parametric linear programs. Some numerical results are given on an example network with two suppliers with different costs and lead times.
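The order-up-to policies that the piecewise-affine policies above generalize can be illustrated with a toy single-stage sketch (immediate delivery, integer quantities; this setup and all names are our own illustration, not the paper's model):

```python
def order_up_to(position, S):
    """Classical order-up-to (base-stock) policy: raise the inventory
    position back to the target level S, ordering nothing if already
    at or above it. Note this is itself piecewise affine in position."""
    return max(0, S - position)

def simulate(S, demands, start=0):
    """Toy single-stage simulation: each period, order via the policy,
    receive the order immediately, then serve that period's demand."""
    position, orders = start, []
    for d in demands:
        q = order_up_to(position, S)
        orders.append(q)
        position += q - d
    return orders, position
```

The policy max(0, S - position) is continuous and piecewise affine with two pieces; the paper's closed-loop policies extend this shape to multi-echelon networks with several affine regions.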
In the classical s-t network reliability problem a fixed network G is given, including two designated vertices s and t (called terminals). The edges are subject to independent random failure, and the task is to compute the probability that s and t are connected in the resulting network, which is known to be #P-complete. In this paper we are interested in approximating the s-t reliability in the case of a directed acyclic network G. We introduce and analyze a specialized version of the Monte-Carlo algorithm given by Karp and Luby. For the case of uniform edge failure probabilities, we give a worst-case bound on the number of samples that have to be drawn to obtain an epsilon-delta approximation, which is sharper than the original upper bound. We also derive a variance reduction of the estimator which reduces the expected number of iterations needed to achieve the desired accuracy when applied in conjunction with different stopping rules. Initial computational results on two types of random networks (directed acyclic Delaunay graphs and a slightly modified version of a classical random graph) with up to one million vertices are presented. These results show the advantage of the introduced Monte-Carlo approach compared to direct simulation when small reliabilities have to be estimated and demonstrate its applicability on large-scale instances.
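The direct-simulation baseline that the Karp-Luby-style approach is compared against can be sketched as follows: draw edge failures independently and count the fraction of samples in which s and t stay connected. This is a crude estimator, assuming s = 0, t = n-1, and a uniform failure probability; all names are ours. Its weakness, noted above, is that for small reliabilities very many samples are needed before any "connected" event is observed at all:

```python
import random
from collections import defaultdict

def st_connected(n, edges, alive):
    """Depth-first search from vertex 0 to vertex n-1 over the
    edges that survived (alive[i] is True)."""
    adj = defaultdict(list)
    for (u, v), ok in zip(edges, alive):
        if ok:
            adj[u].append(v)
    seen, stack = {0}, [0]
    while stack:
        u = stack.pop()
        if u == n - 1:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def direct_mc_reliability(n, edges, p_fail, samples, seed=0):
    """Crude Monte-Carlo estimate of the s-t reliability: sample
    independent edge failures and average the connectivity indicator."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        alive = [rng.random() >= p_fail for _ in edges]
        hits += st_connected(n, edges, alive)
    return hits / samples
```

On a two-edge path 0 -> 1 -> 2 with p_fail = 0.5 the true reliability is 0.5 * 0.5 = 0.25, so the estimator should land near that value for a moderate sample count.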