•Introducing a scalable and efficient integer linear programming model for the Precedence-Constrained Minimum-Cost Arborescence problem (PCMCA).•Introducing the Precedence-Constrained Minimum-Cost Arborescence problem with Waiting Times (PCMCA-WT) as a new variation of the Minimum-Cost Arborescence problem.•Proving that both the PCMCA and the PCMCA-WT are NP-hard.
The minimum-cost arborescence problem is a well-studied problem in graph theory, with known polynomial-time algorithms for solving it. Previous literature has introduced variations on the original problem with different objective functions and/or constraints. Recently, the Precedence-Constrained Minimum-Cost Arborescence problem was proposed, in which precedence constraints are enforced on pairs of vertices. These constraints prevent the formation of directed paths that violate precedence relationships along the tree. We show that this problem is NP-hard, and we introduce a new scalable mixed integer linear programming model for it. The newly proposed model performs substantially better than the previous models. This work also introduces a new variation on the minimum-cost arborescence problem with precedence constraints. We show that this new variation is also NP-hard, and we propose several mixed integer linear programming models for formulating it.
Collaborative filtering (CF), the most widely used recommendation algorithm, faces sparsity and scalability problems. Some researchers have proposed selecting a representative set of real users, called an information core (IC), from all the real users, to be used as the candidate neighbor set in CF to alleviate the scalability problem. However, the rating vectors of the real users that compose the IC are usually sparse, which negatively affects recommendation accuracy. In this paper, a virtual information core (VIC) optimization algorithm based on clustering and evolutionary algorithms is proposed for CF recommendation (VICO-CEA). The problem of searching for the VIC is modeled as a combinatorial optimization problem and is solved offline by the proposed evolutionary algorithm. The VIC is a set of virtual core users, each of which is constructed by averaging over multiple real users. These virtual core users are no longer sparse and are found by evolutionary optimization, which improves recommendation accuracy and reduces online recommendation time when the VIC is used as the candidate neighbor set in CF. Meanwhile, to make the offline optimization more efficient, two strategies are proposed. One is a simple similarity measure, based on dimensionality reduction and clustering, that saves time in calculating similarities by reducing the dimensionality of users’ rating vectors. The other uses dimensionality reduction and clustering to construct a smaller training set and validation set by reducing the dimensionality of items’ rating vectors. The experimental results show that VICO-CEA not only significantly reduces online recommendation time but also greatly improves recommendation accuracy compared to traditional CF and other information-core-based methods.
•Constructing virtual information core is modeled as an optimization problem.•A new evolutionary algorithm (EA) is proposed to solve the optimization problem.•A new similarity calculation method is introduced to reduce the time of EA.•Dimensionality reduction and clustering are used to further reduce the time of EA.•The new method improves the recommendation quality with less online time.
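The averaging step behind a virtual core user can be sketched as follows. This is a hypothetical illustration, assuming ratings sit in a user-by-item matrix with 0 marking unrated items; the paper's exact construction and missing-rating handling may differ:

```python
import numpy as np

def build_virtual_core_user(ratings, member_ids):
    """Average the rating vectors of a cluster of real users,
    ignoring missing ratings (encoded as 0), to build one dense
    virtual core user."""
    cluster = ratings[member_ids]          # rows: real users in the cluster
    counts = (cluster > 0).sum(axis=0)     # how many members rated each item
    sums = cluster.sum(axis=0)
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)

# Toy rating matrix: 4 users x 3 items, 0 = unrated
R = np.array([[5, 0, 3],
              [4, 0, 0],
              [0, 2, 3],
              [0, 4, 0]], dtype=float)
vc = build_virtual_core_user(R, [0, 1, 2])
```

Because each item's average is taken only over the members who actually rated it, the resulting vector is far less sparse than any single member's ratings.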
The capacitated electric vehicle routing problem (CEVRP) extends the traditional vehicle routing problem by simultaneously considering the service order of the customers and the recharging schedules of the vehicles. Due to its NP-hard nature, we decompose the original problem into two sub-problems: a capacitated vehicle routing problem (CVRP) and a fixed route vehicle charging problem (FRVCP). A highly effective threshold acceptance based multi-layer search (TAMLS) algorithm is proposed to quickly obtain high-quality solutions. TAMLS consists of three layers. An iterated thresholding search procedure and a thresholding selection procedure are employed to produce diversified CVRP solutions in the first layer and to screen out high-quality ones in the second layer, respectively. In the third layer, a removal heuristic coupled with an enumeration method is adopted to solve the FRVCP, which produces optimized charging schedules. Extensive computational results show that TAMLS outperforms the state-of-the-art algorithms in terms of both solution quality and computation time. In particular, it obtains new best results for 11 out of 17 benchmark instances and matches the best known results on the remaining 6 instances. Additional experimental analyses are performed to better understand the contributions of key algorithmic components.
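As a toy illustration of the fixed-route charging sub-problem, the enumeration below finds the fewest full-recharge stops that keep a battery feasible along a fixed route. It is a hypothetical sketch, not the paper's removal heuristic: it ignores detours to stations and charging cost, which the actual FRVCP accounts for:

```python
from itertools import combinations

def frvcp_enumerate(energy, Q):
    """Given the energy consumed on each leg of a fixed route and
    battery capacity Q, return the fewest full-recharge stops (taken
    at route vertices) that keep the charge nonnegative on every leg,
    or None if even recharging everywhere is infeasible."""
    n = len(energy)                      # recharges allowed before legs 1..n-1
    for k in range(n):                   # try 0, 1, 2, ... recharge stops
        for stops in combinations(range(1, n), k):
            charge, ok = Q, True
            for i, e in enumerate(energy):
                if i in stops:
                    charge = Q           # full recharge before this leg
                charge -= e
                if charge < 0:
                    ok = False
                    break
            if ok:
                return list(stops)
    return None
```

Enumeration is viable here because a fixed route has few candidate stop positions; the paper combines a removal heuristic with such an enumeration to optimize the schedule.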
•A GRASP-based constructive algorithm is proposed to solve combinatorial dynamic optimization problems.•Tests are conducted on the time-varying multi-dimensional knapsack problem with dimensional changes.•Improved evolutionary algorithms, employing a hyper-heuristic-based local search procedure, are developed.•A hyper-heuristic algorithm, using other metaheuristics as low-level heuristics, is developed.•Results of extensive statistical analyses point out the competitiveness of the proposed algorithms.
Optimization in dynamic environments is an active research area that has attracted notable attention in the past decade. It is clear from the dynamic optimization literature that most of the effort has been devoted to continuous dynamic optimization problems, although the majority of real-life problems are combinatorial. Additionally, in comparison to evolutionary or population-based approaches, the constructive search strategy, which has been shown to be successful on stationary combinatorial optimization problems, is commonly ignored by the dynamic optimization community. In the present work, a constructive and multi-start search strategy is proposed to solve the dynamic multi-dimensional knapsack problem, which has numerous real-world applications. Making use of its constructive and multi-start features, the aim is to test the performance of such a strategy and to observe its behavior in dynamically changing environments. In this regard, the strategy is compared to well-known evolutionary and population-based approaches, including a Genetic Algorithm-based memetic algorithm, a Differential Evolution algorithm, a Firefly Algorithm, and a hyper-heuristic that employs these population-based algorithms as low-level heuristics in accordance with their individual contributions. Furthermore, in order to improve their performance in dynamic environments, the mentioned evolutionary algorithms are enhanced with triggered random immigrants and adaptive hill climbing strategies. As the comprehensive experimental analysis shows, while the proposed approach outperforms most of the evolutionary-based approaches, it is outperformed by the firefly and hyper-heuristic algorithms on some of the instances. This demonstrates the competitiveness of the proposed approaches. Finally, according to the statistical results of non-parametric tests, the proposed approach can be considered a promising and competitive algorithm in dynamic environments.
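A minimal sketch of a GRASP-style constructive multi-start pass for the multi-dimensional knapsack may help fix ideas. The function names, the pseudo-utility score, and the restricted-candidate-list size are illustrative assumptions, not the paper's exact procedure; in a dynamic setting the whole routine is simply re-run whenever the problem data change:

```python
import random

def grasp_construct(values, weights, capacities, rng, rcl_size=3):
    """One GRASP construction: rank feasible items by a pseudo-utility
    (value per aggregate resource use) and pick randomly from the
    restricted candidate list (RCL)."""
    n, m = len(values), len(capacities)
    load, chosen = [0] * m, set()
    while True:
        feasible = [j for j in range(n) if j not in chosen
                    and all(load[i] + weights[i][j] <= capacities[i]
                            for i in range(m))]
        if not feasible:
            break
        feasible.sort(key=lambda j: -values[j] /
                      (1 + sum(weights[i][j] for i in range(m))))
        pick = rng.choice(feasible[:rcl_size])
        chosen.add(pick)
        for i in range(m):
            load[i] += weights[i][pick]
    return chosen, sum(values[j] for j in chosen)

def multi_start(values, weights, capacities, starts=20, seed=0):
    """Multi-start wrapper: keep the best of several randomized passes."""
    rng = random.Random(seed)
    return max((grasp_construct(values, weights, capacities, rng)
                for _ in range(starts)), key=lambda s: s[1])

# Toy instance: 4 items, 2 resource dimensions
vals = [10, 7, 5, 3]
wts = [[4, 3, 2, 1], [3, 3, 2, 1]]   # wts[i][j]: use of resource i by item j
caps = [6, 6]
sel, val = multi_start(vals, wts, caps)
```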
We present a new approximation algorithm for the minimum 2-edge-connected spanning subgraph problem. Its approximation ratio is 4/3, which matches the current best ratio. The approximation ratio of the algorithm is 6/5 on subcubic graphs, which is an improvement upon the previous best ratio of 5/4. The algorithm is a novel extension of the primal-dual schema, which consists of two distinct phases. Both the algorithm and the analysis are much simpler than those of the previous approaches.
Dynamic task scheduling in cloud manufacturing (CMfg) is always challenging because of changing manufacturing requirements and services. To make instant decisions for task requirements, deep reinforcement learning-based (DRL-based) methods have been broadly applied to learn the scheduling policies of service providers. However, current DRL-based scheduling methods struggle to fine-tune a pre-trained policy effectively. The resulting training from scratch takes more time and may easily overfit the environment. Additionally, most DRL-based methods suffer from uneven action distributions and inefficient output masks, which largely reduce training efficiency and thus degrade solution quality. To this end, this paper proposes an improved DRL-based approach for dynamic task scheduling in CMfg. First, the paper uncovers the causes behind the inadequate fine-tuning ability and low training efficiency observed in existing DRL-based scheduling methods. Subsequently, a novel approach is proposed to address these issues by updating the scheduling policy while considering the distribution distance between the pre-training dataset and the in-training policy. Uncertainty weights are introduced into the loss function, and the output mask is extended to the updating procedures. Numerical experiments on thirty actual scheduling instances validate that the solution quality and generalization of the proposed approach surpass other DRL-based methods by up to 32.8% and 28.6%, respectively. Additionally, our method can effectively fine-tune a pre-trained scheduling policy, resulting in an average reward increase of up to 23.8%.
•Derivation of arc flow models from an underlying dynamic programming network.•Structure of arc flow models: primal and dual insights.•Review of state space relaxation as a tool to balance the size and strength of models.•Review of general solution methods.•Review of applications in different areas, e.g., cutting, scheduling, and routing.
Network flow formulations are among the most successful tools to solve optimization problems. Such formulations correspond to determining an optimal flow in a network. One particular class of network flow formulations is the arc flow, where variables represent flows on individual arcs of the network. For NP-hard problems, polynomial-sized arc flow models typically provide weak linear relaxations and may have too much symmetry to be efficient in practice. Instead, arc flow models with a pseudo-polynomial size usually provide strong relaxations and are efficient in practice. The interest in pseudo-polynomial arc flow formulations has grown considerably in the last twenty years, in which they have been used to solve many open instances of hard problems. A remarkable advantage of pseudo-polynomial arc flow models is the possibility to solve practical-sized instances directly by a Mixed Integer Linear Programming solver, avoiding the implementation of complex methods based on column generation.
In this survey, we present theoretical foundations of pseudo-polynomial arc flow formulations, by showing a relation between their network and Dynamic Programming (DP). This relation allows a better understanding of the strength of these formulations, through a link with models obtained by Dantzig-Wolfe decomposition. The relation with DP also allows a new perspective to relate state-space relaxation methods for DP with arc flow models. We also present a dual point of view to contrast the linear relaxation of arc flow models with that of models based on paths and cycles. To conclude, we review the main solution methods and applications of arc flow models based on DP in several domains such as cutting, packing, scheduling, and routing.
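To make the DP-to-network link concrete, the sketch below builds the arc-flow network for a bin-packing-style problem, where the DP states are partial bin loads and each arc represents placing an item. It is an illustrative construction under these assumptions, not a model taken from the survey:

```python
def build_arcflow_network(capacity, item_sizes):
    """Build the DP-based arc-flow network for a bin-packing-like
    problem: nodes are partial bin loads 0..capacity, and an arc
    (d, d + w, w) means 'place an item of size w into a bin already
    filled to d'. Only loads reachable from 0 are kept."""
    reachable = {0}
    arcs = set()
    for d in range(capacity + 1):
        if d not in reachable:
            continue
        for w in sorted(set(item_sizes)):
            if d + w <= capacity:
                arcs.add((d, d + w, w))
                reachable.add(d + w)
    # "loss" arcs (labeled 0) let a bin close before it is full
    for d in sorted(reachable - {0, capacity}):
        arcs.add((d, capacity, 0))
    return reachable, arcs

nodes, arcs = build_arcflow_network(5, [2, 3])
```

An integer flow from node 0 to node `capacity` then encodes one packed bin, and minimizing the total flow out of 0 subject to item-demand constraints yields the pseudo-polynomial arc flow model the survey discusses.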
The deployment of charging infrastructure for electric vehicles is crucial in extending their range. Many studies on charging infrastructure deployment adopt the Mixed Integer Linear Programming (MILP) method to optimize various objectives. However, as the number of integer variables and constraints increases, the computational time and memory requirements of MILP models increase exponentially. This makes it impractical to use MILP models to solve large-scale optimization problems. In this paper, we formulate and prove that the Planning of Electric Vehicle Charging Stations (PEVCS) is an NP-complete combinatorial optimization problem. We also prove that the PEVCS objective exhibits a significant diminishing-returns property, that is, submodularity. Additionally, we propose two efficient methods that use submodularity to improve the conventional methodology for PEVCS. Furthermore, we provide a provable guarantee for the performance of our proposed methods. Our experimental results demonstrate the efficiency and effectiveness of these methods on small-scale and large-scale datasets, especially in realistic large-scale situations.
•Proving that PEVCS exhibits a significant diminishing-returns property, i.e., submodularity.•Presenting two efficient methods to solve the PEVCS problem by using submodularity.•Providing a provable guarantee for the performance of the proposed methods.•Verifying the performance of the presented methods in a large-scale realistic environment with experiments.
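The diminishing-returns structure is what makes simple greedy selection effective. The sketch below shows the textbook greedy for a monotone submodular coverage objective, which enjoys the classic (1 - 1/e) guarantee under a cardinality constraint; the station names and coverage sets are a hypothetical stand-in, not the paper's methods:

```python
def greedy_submodular(candidates, coverage, k):
    """Greedily pick up to k candidates, each time choosing the one
    with the largest marginal gain in covered demand points.
    f(S) = |union of coverage sets| is monotone and submodular."""
    chosen, covered = [], set()
    for _ in range(k):
        best, gain = None, 0
        for c in candidates:
            if c in chosen:
                continue
            g = len(coverage[c] - covered)   # marginal gain of c
            if g > gain:
                best, gain = c, g
        if best is None:                     # no candidate adds coverage
            break
        chosen.append(best)
        covered |= coverage[best]
    return chosen, covered

# Hypothetical instance: station -> demand points it would serve
cov = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5, 6, 7}, "s4": {1, 7}}
sel, pts = greedy_submodular(list(cov), cov, k=2)
```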
•The problem of minimizing a supermodular set function on a comatroid is considered.•A counterexample is provided to an existing performance guarantee for a greedy algorithm.•The error in the main proof is identified.
We provide a counterexample to the performance guarantee obtained in the paper “Il’ev, V., Linker, N., 2006. Performance guarantees of a greedy algorithm for minimizing a supermodular set function on comatroid”, which was published in Volume 171 of the European Journal of Operational Research. We point out where this error originates in the proof of the main theorem.
The rooted Budgeted Cycle Cover (BCC) problem is a fundamental optimization problem arising in wireless sensor networks and vehicle routing. Given a metric space (V,w) with vertex set V consisting of two parts D (containing depots) and V∖D (containing nodes), and a budget B≥0, the rooted BCC problem asks for a minimum number of cycles covering all the nodes in V∖D, such that each cycle has length at most B and contains a depot in D. In this paper, we give new approximation algorithms for the rooted BCC problem. For the rooted BCC problem with a single depot, we give an O(log(B/μ))-approximation algorithm, where μ is the minimum difference between any two distinct distances from the vertices in V to the root. For the rooted BCC problem with multiple depots, we give an O(log n)-approximation algorithm, where n is the number of vertices. We also test our algorithms on randomly generated instances. The experimental results show that the algorithms perform well in practice.
•A log-factor approximation algorithm for the single depot budgeted cycle cover problem.•A log-factor approximation algorithm for the multi-depot budgeted cycle cover problem.•Experiments on randomly generated instances show good performance of the algorithms.
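For intuition about the single-depot problem, a naive nearest-neighbor baseline can be sketched as follows. This is a hypothetical illustration of the problem structure, not the paper's log-factor algorithm:

```python
import math

def greedy_bcc(depot, nodes, B):
    """Repeatedly grow a cycle from the depot by nearest-neighbor
    extension, closing the cycle whenever appending the next node
    (plus the return leg to the depot) would exceed budget B."""
    remaining = list(nodes)
    cycles = []
    while remaining:
        tour, length = [depot], 0.0
        while remaining:
            last = tour[-1]
            nxt = min(remaining, key=lambda p: math.dist(last, p))
            # cycle length if we append nxt and then return to the depot
            new_len = length + math.dist(last, nxt) + math.dist(nxt, depot)
            if new_len > B and len(tour) > 1:
                break          # close this cycle, start a new one
            # forced add when the tour is empty, to avoid stalling
            length += math.dist(last, nxt)
            tour.append(nxt)
            remaining.remove(nxt)
        cycles.append(tour + [depot])
    return cycles

depot = (0.0, 0.0)
pts = [(1, 0), (2, 0), (0, 1)]
```

Tightening the budget forces more cycles; such a baseline carries no approximation guarantee, which is what the paper's algorithms provide.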