This paper discusses methods for generating or approximating the Pareto set of multiobjective optimization problems by solving a sequence of constrained single-objective problems. The necessity of determining the constraint values a priori is shown to be a serious drawback of the original epsilon-constraint method. We therefore propose a new, adaptive scheme to generate appropriate constraint values during the run. A simple example problem is presented where the running time (measured by the number of constrained single-objective subproblems to be solved) of the original epsilon-constraint method is exponential in the problem size (number of decision variables), although the size of the Pareto set grows only linearly. We prove that, independent of the problem or the problem size, the time complexity of the new scheme is O(k^(m-1)), where k is the number of Pareto-optimal solutions to be found and m the number of objectives. Simulation results for the example problem as well as for different instances of the multiobjective knapsack problem demonstrate the behavior of the method, and links to reference implementations are provided.
Recent studies have shown that cell cycle and cell volume are confounding factors when studying biological phenomena in single cells. Here we present a combined experimental and computational method, CellCycleTRACER, to account for these factors in mass cytometry data. CellCycleTRACER is applied to mass cytometry data collected on three different cell types during a TNFα stimulation time-course. CellCycleTRACER reveals signaling relationships and cell heterogeneity that were otherwise masked.
Over the past few years, the research on evolutionary algorithms has demonstrated their niche in solving multiobjective optimization problems, where the goal is to find a number of Pareto-optimal solutions in a single simulation run. Many studies have depicted different ways evolutionary algorithms can progress towards the Pareto-optimal set with a widely spread distribution of solutions. However, none of the multiobjective evolutionary algorithms (MOEAs) has a proof of convergence to the true Pareto-optimal solutions with a wide diversity among the solutions. In this paper, we discuss why a number of earlier MOEAs do not have such properties. Based on the concept of ɛ-dominance, new archiving strategies are proposed that overcome this fundamental problem and provably lead to MOEAs that have both the desired convergence and distribution properties. A number of modifications to the baseline algorithm are also suggested. The concept of ɛ-dominance introduced in this paper is practical and should make the proposed algorithms useful to researchers and practitioners alike.
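The ɛ-dominance acceptance rule can be sketched as a box-based archive update. The variant below is one common realization of ɛ-dominance archiving (additive scheme, newest-wins tie-breaking within a box), not necessarily the exact update rule proposed in the paper:

```python
import math

def box(f, eps):
    """Map an objective vector to its grid box of side eps (additive scheme)."""
    return tuple(math.floor(v / eps) for v in f)

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, f, eps):
    """Reject f if some archived point's box dominates f's box, or if an
    archived point in the same box dominates (or equals) f. Otherwise accept
    f and prune points whose box is dominated by, or equal to, f's box."""
    b = box(f, eps)
    for g in archive:
        gb = box(g, eps)
        if dominates(gb, b) or (gb == b and (dominates(g, f) or g == f)):
            return archive  # f is epsilon-dominated: archive unchanged
    return [g for g in archive
            if not (dominates(b, box(g, eps)) or box(g, eps) == b)] + [f]

archive = []
for f in [(4.0, 1.0), (1.0, 4.0), (3.9, 1.05), (2.0, 2.0)]:
    archive = update_archive(archive, f, eps=0.5)
print(archive)  # (4.0, 1.0) is pruned: its box is dominated by that of (3.9, 1.05)
```

Because at most one point per non-dominated box is kept, the archive size stays bounded while guaranteeing an ɛ-approximate cover of the Pareto front, which is the mechanism behind the convergence-plus-diversity result.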
► We address the problem of finding Pareto front approximations of given size. ► Two algorithms are proposed and their convergence properties analyzed. ► The first algorithm uses a new multi-level grid archiving (MGA) strategy. ► The second algorithm uses epsilon-adaptation to find an optimal k-subset. ► Both algorithms converge almost surely to a subset of the Pareto front.
In this paper we investigate to what extent random search methods, equipped with an archive of bounded size to store a limited amount of solutions and other data, are able to obtain good Pareto front approximations. We propose and analyze two archiving schemes that allow for maintaining a sequence of solution sets of given cardinality that converge with probability one to an ϵ-Pareto set of a certain quality, under very mild assumptions on the process used to sample new solutions. The first algorithm uses a hierarchical grid to define a family of approximate dominance relations to compare solutions and solution sets. Acceptance of a new solution is based on a potential function that counts the number of occupied boxes (on various levels) and thus maintains strictly monotonic progress to a limit set that covers the Pareto front with non-overlapping boxes at the finest resolution possible. The second algorithm uses an adaptation scheme to modify the current value of ϵ based on the information gathered during the run. This way it is possible to achieve convergence to the best (smallest) ϵ value, and to a corresponding solution set of k solutions that ϵ-dominate all other solutions, which is probably the best possible result regarding the limit behavior of random search methods or metaheuristics for obtaining Pareto front approximations.
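The occupied-box potential function can be illustrated with a small sketch. The grid base size, number of levels, and the sample points are illustrative assumptions, and the plain box count is a simplified stand-in for the multi-level comparison used by grid archiving:

```python
import math

def boxes(points, eps):
    """Set of grid boxes (of side eps) occupied by the given points."""
    return {tuple(math.floor(v / eps) for v in p) for p in points}

def potential(points, base_eps=0.25, levels=4):
    """Hierarchical potential: total count of occupied boxes over several grid
    resolutions, from the finest (base_eps) upward."""
    return sum(len(boxes(points, base_eps * 2 ** l)) for l in range(levels))

# Replacing a clustered point with a better-spread one raises the potential,
# so accepting only potential-increasing solutions drives the archive toward
# a well-spread cover of the front.
A = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1)]  # first two points share boxes
B = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]  # better spread
print(potential(A), potential(B))
```

Since the potential is integer-valued and bounded for a fixed grid, a strictly monotonic sequence of accepted sets must reach a limit, which is the intuition behind the almost-sure convergence argument.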
Railway networks are operated more and more at capacity margins, schedules are becoming more susceptible to disturbances, and delays propagate and hamper the service level experienced by the customers. As a consequence, railway traffic management is becoming increasingly challenging, thus motivating the development of computer-aided systems. This paper proposes a dispatching assistant in the form of a model predictive control framework for a complex central railway station area. The closed-loop discrete-time system suggests rescheduling trains according to solutions of a binary linear optimization model. The model assigns precomputed blocking-stairways to trains while respecting resource-based clique constraints, connection constraints, platform-related constraints and consistency constraints, with the objective of maximizing customer satisfaction. In collaboration with the Swiss Federal Railways (SBB), the approach was successfully applied for an operational day at the central railway station area of Berne, Switzerland. The model is capable of considering many alternative routing possibilities and departure timings, a strength of our approach that is also evident in the computational results.
We study the coordination of two-echelon supply chains via service-level-dependent bonus and penalty contracts and assume that the manufacturer maximizes its profit while enabling the supplier to achieve its performance target. Profit and return on investment (ROI) are considered as the supplier's performance measures. We compute optimal contract parameters and perform a numerical study. In addition, we perform a simulation-based study to analyze the profit risk of the considered parties. For supplier profit targets, bonus and penalty contracts lead to the same profit for both parties, but bonus contracts lead to lower costs and, hence, a higher ROI. For supplier ROI targets, bonus contracts always lead to higher manufacturer profit, and no optimal penalty contracts exist. For every contract service level, optimal bonus contracts exist that maximize profit and ROI of the manufacturer simultaneously. We further analyze the profit risk for the manufacturer and the supplier and show that it can be reduced significantly by setting an appropriate contract service level. The manufacturer usually should prefer bonus contracts: they make it possible to optimize profit and ROI simultaneously and even offer an additional degree of freedom to reduce the profit risk.
•Typology of multi-item multi-echelon inventory systems.•Bibliography of papers under stochastic demand from 1958 to 2016.•Extensive discussion of modelling assumptions.•Identification of gaps to be filled in relation to real-world systems.•Identification of scientific challenges to close this gap.
We develop a typology for multi-echelon inventory management. The typology is used to classify and review the extensive research on multi-echelon inventory management under uncertain demand. We identify clusters of model assumptions, research goals and applied methodologies. Based on this review, existing research gaps are identified and avenues for further research are proposed.
In the context of on-demand mobility services, we compare the performance of three matching policies of increasing sophistication on scenarios based on real data from the city of Chicago in the United States. The comparative study examines the influence of prebooking and ridesharing on the gap between the different policies. We find that more sophisticated approaches can improve the acceptance ratio and the service-level key performance indicators at the expense of longer computation times. Prebooking appears to consistently give an edge to more sophisticated policies by providing advance information and thus the flexibility to make better plans. The effect of ridesharing is less straightforward to isolate, but again prebooking helps more sophisticated approaches reduce excess ride time, a direct consequence of ridesharing.
The train rescheduling problem is quite a popular topic in the railway research community. Many approaches are available to reschedule traffic in a network partition, but very few works address the coordination of these partitions. In railway systems with very dense traffic, such as the Swiss one, it is not always possible to partition the network such that local rescheduling algorithms can work completely independently of one another. This paper proposes a coordination approach for adjacent local rescheduling algorithms. These algorithms are based on the Resource Conflict Graph model, which enables the representation of the interlocking system at a very fine granularity. Simulations on data from the Swiss Federal Railways show the validity of this approach to improve the consistency of decisions at the common boundaries of adjacent local rescheduling algorithms.
•Decomposition scheme considering topology, traffic density, and operational policies.•Non-hierarchical, iterative, coordination framework.•Approach developed specifically for the Resource Conflict Graph model.•Simulations with data from the Swiss railway network.
An ensemble prediction model for train delays
Nair, Rahul; Hoang, Thanh Lam; Laumanns, Marco; ...
Transportation Research Part C: Emerging Technologies, vol. 104, 07/2019
Journal article, peer reviewed
•Data-driven train delay forecasting using machine learning at scale.•Using operational data from Deutsche Bahn for 25,000 trains daily.•Online method combines machine learning and simulation.•Key pre-processing steps handled automatically.
A large-scale ensemble prediction model to predict train delays is presented. The ensemble model uses a disparate set of models, two statistical and one simulation-based, to generate forecasts of train delays. The first statistical model is a context-aware random forest that accounts for network traffic states, such as likely stretch conflicts and current headways, as well as exogenous weather, event, and work zone information. The second model is a kernel regression that captures train-specific dynamics. A mesoscopic simulation model that accounts for travel and dwell time variations as well as inferred track occupation conflicts, train connections and rolling stock rotations is additionally considered. The models have been used in a proof of concept to forecast delays for the nationwide passenger service network of Deutsche Bahn, which operates roughly 25,000 trains daily in Germany. Results demonstrate a 25% improvement potential in forecast correctness (fraction of predictions within one minute) and a 50% reduction in root mean squared error compared to the published schedule. The paper describes the models along with the big data challenges that were addressed in data storage, feature and model building, and computation.
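How an ensemble might combine the three per-train forecasts can be sketched as a weighted average. The train IDs, the weights, and the averaging rule are illustrative assumptions; the paper's actual combination method is not reproduced here:

```python
# Hypothetical per-train delay forecasts in minutes from the three model types
# named in the abstract (values and train IDs are made up for illustration).
forecasts = {
    "ICE-42": {"random_forest": 3.0, "kernel_regression": 4.0, "simulation": 5.0},
    "RE-7":   {"random_forest": 0.5, "kernel_regression": 0.0, "simulation": 1.0},
}
# Illustrative static weights; in practice these could be learned from
# historical forecast errors of each component model.
weights = {"random_forest": 0.5, "kernel_regression": 0.3, "simulation": 0.2}

def ensemble_delay(per_model):
    """Combine component forecasts into one ensemble delay prediction."""
    return sum(weights[m] * d for m, d in per_model.items())

for train, per_model in forecasts.items():
    print(train, round(ensemble_delay(per_model), 2))
```

A weighted combination lets the context-aware and train-specific models compensate for each other's blind spots, which is the general rationale for using an ensemble rather than any single model.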