The team orienteering problem is a variant of the well-known vehicle routing problem in which a set of vehicle tours is constructed in such a way that: (i) the total reward collected from visiting a subset of customers is maximized; and (ii) the length of each vehicle tour is restricted by a pre-specified limit. While most existing works refer to the deterministic version of the problem and focus on maximizing the total reward, some degree of uncertainty (e.g., in customers' service times or in travel times) should be expected in real-life applications. Accordingly, this paper proposes a simheuristic algorithm for solving the stochastic team orienteering problem, where goals other than maximizing the expected reward need to be considered. A series of numerical experiments illustrates the potential of our approach, which integrates Monte Carlo simulation inside a metaheuristic framework.
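The simheuristic pattern described above can be sketched in a few lines: a metaheuristic proposes candidate tours, and Monte Carlo simulation estimates each tour's expected reward under stochastic travel times. All data below (rewards, mean travel times, the tour limit, the lognormal noise model, and the multi-start loop) are illustrative assumptions, not the paper's instance or algorithm.

```python
import random

random.seed(42)

CUSTOMERS = {"A": 10, "B": 25, "C": 15, "D": 30}        # hypothetical rewards
TRAVEL_TIME = {"A": 2.0, "B": 4.0, "C": 3.0, "D": 5.0}  # mean leg times
TOUR_LIMIT = 9.0

def simulate_reward(tour, n_runs=200):
    """Monte Carlo estimate of expected reward: a customer only yields
    reward if the tour is still within the time limit when reached."""
    total = 0.0
    for _ in range(n_runs):
        elapsed, reward = 0.0, 0.0
        for c in tour:
            # lognormal noise around the mean leg time (assumed model)
            elapsed += random.lognormvariate(0, 0.3) * TRAVEL_TIME[c]
            if elapsed > TOUR_LIMIT:
                break
            reward += CUSTOMERS[c]
        total += reward
    return total / n_runs

def random_tour():
    k = random.randint(1, len(CUSTOMERS))
    return random.sample(list(CUSTOMERS), k)

# A simple multi-start loop stands in for the metaheuristic: generate
# candidate tours, rank them by their simulated expected reward.
best_tour, best_value = None, float("-inf")
for _ in range(100):
    tour = random_tour()
    value = simulate_reward(tour)
    if value > best_value:
        best_tour, best_value = tour, value

print(best_tour, round(best_value, 1))
```

In a full simheuristic, the candidate generator would be a proper metaheuristic and only promising deterministic solutions would be passed to the (more expensive) simulation stage.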
The number of projects relying on volunteer computing, and their complexity, are growing fast. This distributed paradigm enables the gathering of idle resources (processing power and storage) to run large systems by providing scalable, practical, and low-cost platforms. The heterogeneity of the resources and their unreliable behavior call for advanced optimization methods. In particular, efficient resource allocation is key to system performance. This work presents a mathematical formulation and a metaheuristic-based solving approach for the resource allocation problem. The approach is designed to deal with data-intensive applications, which must guarantee the availability of the data at all times. Moreover, a simheuristic is proposed to deal with the stochasticity of resource quality. A set of computational experiments is performed to: (1) compare the performance of the metaheuristic and the simheuristic in a stochastic environment; and (2) quantify the effect of the stochasticity on the solutions.
In recent years, the use of public cloud platforms as infrastructure has been gaining popularity in many scientific areas, and High Performance Computing (HPC) is no exception. These platforms can be used by system administrators as test-bed systems for evaluating and detecting performance inefficiencies in the I/O subsystem, and for making decisions about the configuration parameters that influence the performance of an application, without compromising the performance of the production HPC system. In this paper, we propose a methodology to evaluate parallel applications by using virtual clusters as a test system. Our experimental validation indicates that virtual clusters are a quick and easy solution for system administrators to analyze the impact of the I/O system on the I/O kernels of parallel applications and to make performance decisions in a controlled environment.
Community networks are becoming increasingly popular due to the growing demand for network connectivity in both rural and urban areas. Community networks are owned and managed at the edge by volunteers. Their irregular topology, the heterogeneity of their resources, and their unreliable behavior call for advanced optimization methods to place services in the network. In particular, an efficient service placement method is key to the performance of these systems. This work presents the Multi-Criteria Optimal Placement method, a novel and fast two-stage multi-objective method to place services in decentralized community network edge micro-clouds. A comprehensive set of computational experiments is carried out using real traces of Guifi.net, the largest production community network worldwide. According to the results, the proposed method outperforms both the random placement method currently used in Guifi.net and the Bandwidth-aware Service Placement method, which provides the best known solutions in the literature, by a mean gap in bandwidth gain of about 53% and 10%, respectively, while also reducing the number of resources used.
•A methodology aware of the irregular topology and the heterogeneity of CNs is key.
•Our method outperforms the placement method currently used in Guifi.net.
•Our method improves on the best known solutions in the literature.
The uncapacitated facility location problem (UFLP) is a well-known combinatorial optimization problem with practical applications in several fields, such as logistics and telecommunication networks. While the existing literature primarily focuses on the deterministic version of the problem, real-life scenarios often involve uncertainties such as fluctuating customer demands or service costs. This paper presents a novel algorithm for addressing the UFLP under uncertainty. Our approach combines a tabu search metaheuristic with path-relinking to obtain near-optimal solutions in short computational times for the deterministic version of the problem. The algorithm is further enhanced by integrating it with simulation techniques to solve the UFLP with random service costs. A set of computational experiments is run to illustrate the effectiveness of the solving method.
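The tabu-search component for the deterministic UFLP can be illustrated on a toy instance (three facilities, three customers): a solution is the set of open facilities, moves toggle one facility open or closed, and recently toggled facilities are tabu for a few iterations. All costs below are invented, and the path-relinking and simulation layers of the paper's algorithm are omitted for brevity.

```python
# Tiny tabu-search sketch for the deterministic UFLP (illustrative data only).

OPEN_COST = [40, 30, 50]          # hypothetical facility opening costs
SERVICE = [[10, 20, 30],          # SERVICE[i][j]: cost of serving
           [25, 5, 15],           # customer j from facility i
           [30, 25, 5]]

def total_cost(open_set):
    """Opening costs plus cheapest-service assignment for each customer."""
    if not open_set:
        return float("inf")
    serve = sum(min(SERVICE[i][j] for i in open_set)
                for j in range(len(SERVICE[0])))
    return sum(OPEN_COST[i] for i in open_set) + serve

current = {0, 1, 2}
best, best_cost = set(current), total_cost(current)
tabu = {}                         # facility -> iteration when it stops being tabu

for it in range(50):
    moves = []
    for i in range(len(OPEN_COST)):
        cand = current ^ {i}      # toggle facility i open/closed
        cost = total_cost(cand)
        # aspiration criterion: a tabu move is allowed if it beats the best
        if tabu.get(i, 0) <= it or cost < best_cost:
            moves.append((cost, i, cand))
    if not moves:                 # every move tabu: fall back to the best toggle
        moves = [(total_cost(current ^ {i}), i, current ^ {i})
                 for i in range(len(OPEN_COST))]
    cost, i, cand = min(moves)
    current = cand
    tabu[i] = it + 5              # tabu tenure of five iterations
    if cost < best_cost:
        best, best_cost = set(current), cost

print(best, best_cost)            # on this toy instance: {1} 75
```

The tabu list lets the search accept worsening moves and escape local optima; path-relinking would then combine elite solutions found along the way, and simulation would re-evaluate them under random service costs.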
This paper analyzes the single-source capacitated facility location problem (SSCFLP) with soft capacity constraints. Hence, the maximum capacity at each facility can potentially be exceeded by incurring a penalty cost, which increases with the constraint-violation gap. In some realistic scenarios, this penalty cost can be modelled as a piecewise function. As a result, the traditional cost-minimization objective becomes a non-smooth function that is difficult to optimise using exact methods. A mathematical model of this non-smooth SSCFLP is provided, and a biased-randomized iterated local search metaheuristic is proposed as a solving method. A set of computational experiments is run to illustrate our algorithm and test its efficiency.
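A piecewise soft-capacity penalty of the kind described above can be sketched as follows; the breakpoints and rates are invented for illustration, not taken from the paper. The kink at the segment boundary is what makes the resulting objective non-smooth.

```python
# Illustrative soft-capacity penalty: cost grows piecewise with the
# capacity-violation gap (breakpoints and rates are assumptions).

def penalty(load, capacity):
    """Piecewise-linear penalty on the constraint-violation gap."""
    gap = max(0.0, load - capacity)
    if gap == 0.0:
        return 0.0                          # within capacity: no penalty
    elif gap <= 10.0:
        return 5.0 * gap                    # mild overuse: cheap linear rate
    else:
        return 50.0 + 12.0 * (gap - 10.0)   # heavy overuse: steeper rate

def facility_cost(open_cost, service_cost, load, capacity):
    """Total cost of one facility under the soft-capacity model."""
    return open_cost + service_cost + penalty(load, capacity)

print(penalty(95, 100))    # 0.0   (no violation)
print(penalty(105, 100))   # 25.0  (gap 5, mild segment)
print(penalty(120, 100))   # 170.0 (gap 20, steep segment)
```

Because of the kinks at the breakpoints, gradient-based or exact methods struggle with this objective, which is why a metaheuristic (here, biased-randomized iterated local search) is a natural fit.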
Executing message-passing parallel applications on a large number of resources in an efficient way is not a trivial task. Due to the complex interaction between parallel applications and the HPC system, many applications may suffer performance inefficiencies when they scale. To achieve an efficient use of these large-scale systems with thousands of cores, it is important to know an application's behavior on the system before executing it. In this work, we propose a novel methodology called P3S (Prediction of Parallel Program Scalability), which allows us to analyze and predict the scalability of message-passing applications on a given system. The methodology strives to use a bounded analysis time and a reduced set of resources to predict the application's behavior at large scale. The experimental validation shows that P3S is able to predict application scalability with an average accuracy greater than 95 percent using a reduced set of resources.
The inventory routing problem (IRP) combines inventory management and delivery route-planning decisions. This work presents a simheuristic approach that integrates Monte Carlo simulation within a variable neighborhood search (VNS) framework to solve the multiperiod IRP with stochastic customer demands. In this realistic variant of the problem, our goal is to establish the optimal refill policies for each customer–period combination, that is, those individual refill policies that minimize the total expected cost over the periods. This cost is the aggregation of the expected inventory and routing costs. Our simheuristic algorithm allows us to account for the inventory changes between periods generated by the realization of the random demands in each period, which affect the quantities to be delivered in the next period and, therefore, the associated routing plans. A range of computational experiments is carried out to illustrate the potential of our simulation–optimization approach.
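The simulation half of this simheuristic can be illustrated for a single customer: estimate the expected inventory cost of a "refill up to S" policy over several periods under random demand, then scan candidate levels. All parameters (cost rates, the normal demand model, the candidate range) are assumptions for illustration; the paper's VNS would instead perturb the refill levels of all customer–period combinations jointly and add the routing cost.

```python
import random

random.seed(3)

HOLDING, STOCKOUT = 1.0, 10.0    # per-unit holding / stockout costs (assumed)

def expected_cost(refill_level, periods=4, n_runs=500):
    """Monte Carlo estimate of the expected inventory cost of refilling
    one customer up to `refill_level` at the start of every period."""
    total = 0.0
    for _ in range(n_runs):
        cost = 0.0
        for _ in range(periods):
            stock = refill_level             # refill up to S each period
            stock -= random.gauss(20, 5)     # stochastic demand realization
            if stock >= 0:
                cost += HOLDING * stock      # leftover inventory held over
            else:
                cost += STOCKOUT * -stock    # penalty for unmet demand
        total += cost
    return total / n_runs

# Scan candidate refill levels by simulated expected cost (a VNS would
# explore such changes as neighborhood moves instead of a full scan).
best_S = min(range(10, 41), key=expected_cost)
print(best_S, round(expected_cost(best_S), 1))
```

With demand roughly N(20, 5) and stockouts ten times as costly as holding, the best refill level lands somewhat above the mean demand, as the newsvendor logic suggests.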
Volunteer computing is a type of large-scale heterogeneous distributed system in which the resources necessary to run the system are donated by volunteers. A drawback of volunteer computing is the unreliability of the donated resources, so redundancy is required to guarantee the fulfillment of tasks and the availability of data. In this work, we consider the problem of designing a directory service policy for a Distributed Volunteer Computing Micro-Blogging Service (DVCMBS). In such services, nodes donate storage space (repositories), which is managed by a centralized directory service that decides which nodes will store replicas of blogs, ensuring their online availability when bloggers are offline. Since nodes are under no obligation to remain online, the task of a DVCMBS directory service is to allocate blog replicas to online repositories such that the availability rate of all blogs is maximized. At the same time, since donated storage resources are limited and directory service operations consume processing resources, minimizing the number of blog replicas generated (i.e., the inefficiency of the directory service) is critical. We present a simulation model of a DVCMBS, which uses a probabilistic sort-and-select approach to choose host repositories for replicas and blogs to replicate. Exhaustive computational experiments analyze the trade-off between blog replica availability and efficiency, and identify efficient directory service policies with respect to availability and efficiency maximization.
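A probabilistic sort-and-select step of the general kind mentioned above can be sketched as follows: rank candidate repositories by a quality score, then sample an index with a bias toward the top of the list, so better repositories are chosen more often without always loading the same nodes. The geometric bias, the `beta` parameter, and the repository scores are all illustrative assumptions, not the paper's policy or data.

```python
import math
import random

random.seed(7)

def biased_select(candidates, scores, beta=0.4):
    """Sort candidates by score (descending), then draw an index from a
    geometric-like distribution so top-ranked candidates win more often."""
    ranked = sorted(candidates, key=lambda c: scores[c], reverse=True)
    idx = int(math.log(random.random()) / math.log(1.0 - beta)) % len(ranked)
    return ranked[idx]

# Hypothetical online repositories with an availability score each.
repos = {"r1": 0.9, "r2": 0.6, "r3": 0.8, "r4": 0.3}

# Place three replicas of one blog in distinct repositories.
placements = set()
while len(placements) < 3:
    placements.add(biased_select(list(repos), repos))
print(placements)
```

The bias parameter controls the availability/efficiency trade-off the abstract discusses: a stronger bias concentrates replicas on the most reliable repositories, while a weaker one spreads load across the donated resources.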
•A framework for directory service resource allocation decision policies is provided.
•An event-based and discrete-time simulation model of a blogging service is proposed.
•A scalable probabilistic sort-and-select resource allocation policy is proposed.
•Computational tests compare all possible directory service configurations.
•Insights into the trade-off between system reliability and efficiency are provided.