The conceptual design decisions have the largest influence on a building project’s safety, value, and environmental impact; hence they are commonly assigned to a senior engineer who can draw on his or her experience. However, senior engineers can be biased towards solutions inside their area of expertise, which often prevents them from finding the best solutions among alternatives shaped by complex, inter-related, multi-disciplinary parameters. The engineering community could therefore benefit from a rapid, high-quality decision-making method or tool for these high-impact design choices. There are valuable studies in the literature that exploit Artificial Intelligence (AI) to improve the structural design process; however, most of them focus on the final design stage (e.g., Building Information Modeling), and the rest require an existing project database (e.g., architectural drawings, already-decided material types) to propose a small number of initial design alternatives. In this article, we present the development and validation of a genetic algorithm tool based on the Non-dominated Sorting Genetic Algorithm II (NSGA-II) that can be used to analyse a wide range of safe, economical, and low-CO2 options for the conceptual design of buildings. The design space starts from a design brief containing only the site characteristics and project objectives. Solutions are explored over material, grid size, floor type, lateral-resistance, and foundation-system variables. In a short computational time (< 2 min per run), users are provided with a Pareto graph of a large set of feasible solutions (in terms of cost, embodied CO2 emissions, and free space) that an engineer would typically be unable to evaluate within a traditional conceptual design process.
For future applications, the methodology presented in this paper is flexible enough to include more engineering materials (e.g., timber, masonry, structural glass) and complex architectural forms, and to merge other disciplines into the decision making (e.g., building physics, construction management, fire safety).
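The core mechanism behind the Pareto graph described above, keeping only designs that are not dominated on cost, embodied CO2, and free space, can be sketched as follows. The design names and objective values here are hypothetical illustrations, not results from the paper, and a full NSGA-II adds crowding-distance sorting and genetic operators on top of this dominance test.

```python
# Minimal sketch: extract the Pareto (non-dominated) front from candidate
# conceptual designs scored on cost, embodied CO2, and free space.
# All design names and scores below are hypothetical, not from the paper.

def dominates(a, b):
    """True if design a is at least as good as b on every objective
    (all minimised) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Return the non-dominated subset of {name: (cost, co2, -free_space)}."""
    return {
        name: objs
        for name, objs in designs.items()
        if not any(dominates(other, objs)
                   for oname, other in designs.items() if oname != name)
    }

# (cost in k-units, embodied CO2 in t, free space negated so all are minimised)
candidates = {
    "steel_flat_slab":    (520, 310, -84),
    "concrete_waffle":    (480, 360, -80),
    "steel_composite":    (505, 300, -84),
    "concrete_flat_slab": (470, 355, -78),
}
front = pareto_front(candidates)   # steel_flat_slab is dominated by steel_composite
```

The quadratic all-pairs comparison is fine for a sketch; NSGA-II's fast non-dominated sort reduces the bookkeeping when the population is large.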
Optical logic gates play a crucial role in all-optical signal processing systems. Traditional methods of designing logic gates require manual adjustment of structural parameters. In this paper, we utilize a genetic algorithm for inverse design, and optical AND, OR, and NOT logic gates are achieved on a silicon platform at the working wavelength of 1.55 μm. The total area of the logic gates is fixed at 2.2 μm × 2.2 μm, making them convenient to integrate with other functional devices. The optimized structural parameters are acquired for the different logic gates, and the contrast ratios of the OR, AND, and NOT gates are 8.55, 5.32, and 4.14 dB, respectively. The design is characterized by a compact structure, high contrast, and a high degree of freedom, offering a valuable reference for photonic integrated circuits.
•A compact optical logic gate with a rectangular air-hole array is designed, and high performance is achieved.
•The optimization efficiency is enhanced by the GA, and optical logic gates are achieved with an ultra-small size.
•The influence of air-hole variation on device performance is studied, guiding the actual fabrication.
A runtime analysis of the Simple Genetic Algorithm (SGA) for the OneMax problem has recently been presented, proving that the algorithm with population size μ ≤ n^(1/8−ε) requires exponential time with overwhelming probability. This paper presents an improved analysis which overcomes some limitations of the previous one. Firstly, the new result holds for population sizes up to μ ≤ n^(1/4−ε), an exponent twice as large. Secondly, we present a technique to bound the diversity of the population that does not require a bound on its bandwidth. Apart from allowing a stronger result, we believe this is a major improvement towards the reusability of the techniques in future systematic analyses of GAs. Finally, we consider the more natural SGA using selection with replacement rather than without replacement, although the results hold for both algorithmic versions. Experiments are presented to explore the limits of the new and previous mathematical techniques.
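For readers unfamiliar with the analysed algorithm, a minimal Python sketch of the SGA on OneMax follows: fitness-proportional selection with replacement, one-point crossover, and standard 1/n bit-flip mutation, matching the variant described above. The parameter values (n = 20, μ = 30, crossover probability 0.9) are illustrative choices, not ones taken from the analysis.

```python
import random

def onemax(x):                        # fitness: number of 1-bits
    return sum(x)

def sga_onemax(n=20, mu=30, generations=200, pc=0.9, seed=1):
    """Simple Genetic Algorithm on OneMax: fitness-proportional selection
    WITH replacement, one-point crossover, 1/n bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(mu)]
    best = max(onemax(x) for x in pop)          # best fitness seen so far
    for _ in range(generations):
        fits = [onemax(x) for x in pop]
        best = max(best, max(fits))
        if best == n:                           # optimum reached
            break
        total = sum(fits) or 1

        def select():                           # roulette-wheel selection
            r, acc = rng.uniform(0, total), 0.0
            for x, f in zip(pop, fits):
                acc += f
                if acc >= r:
                    return x
            return pop[-1]

        nxt = []
        for _ in range(mu):
            p1, p2 = select(), select()
            if rng.random() < pc:               # one-point crossover
                cut = rng.randrange(1, n)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            nxt.append([b ^ (rng.random() < 1 / n) for b in child])
        pop = nxt
    return best

best_fitness = sga_onemax()
```

Note the absence of elitism: the weak selection pressure of fitness-proportional selection is exactly what makes the runtime analysis above non-trivial for large n.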
•A logistics distribution region partitioning model is developed.
•The model minimizes the cost of a two-echelon logistics distribution network.
•A hybrid algorithm combining PSO and GA is proposed.
•The empirical results reveal that the EPSO–GA algorithm outperforms other algorithms.
Two-echelon logistics distribution region partitioning is a critical step in optimizing a two- or multi-echelon logistics distribution network; it aims to assign each distribution unit to a certain logistics facility (i.e., a logistics center or distribution center). Given the partitioned regions, the vehicle routing problem can then be formulated and solved. This paper establishes a model to minimize the total cost of the two-echelon logistics distribution network. A hybrid algorithm named the Extended Particle Swarm Optimization and Genetic Algorithm (EPSO–GA) is proposed to solve the model. A two-dimensional particle encoding method is adopted to generate the initial population of particles. EPSO–GA combines the merits of the Particle Swarm Optimization (PSO) algorithm and the Genetic Algorithm (GA), offering both global and local search capability. By updating the inertia weight and exchanging best-fit and worst-fit solutions between PSO and GA, the EPSO–GA algorithm is able to converge to an optimal solution under a reasonable design of termination and iteration rules. The computational results from a case study in Guiyang city, China, reveal that the EPSO–GA algorithm is superior to three other algorithms, Hybrid Particle Swarm Optimization (HPSO), GA, and Ant Colony Optimization (ACO), in terms of the partitioning schemes, the total cost, and the number of iterations. Comparison with an exact method demonstrates the approach’s capability to optimize a small-scale two-echelon logistics distribution network. The proposed approach can be readily implemented in practice to help logistics operators reduce operational costs and improve customer service, and it also holds great potential for application in other research domains.
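The exchange mechanism the abstract describes, PSO and GA evolving in parallel and periodically swapping best-fit for worst-fit solutions, can be illustrated on a toy continuous objective. Everything below (the sphere objective, the coefficient values, the mutation scheme) is an assumption for illustration; the paper's actual formulation operates on a two-dimensional particle encoding of the partitioning problem.

```python
import random

def fitness(x):                       # toy objective (minimised); a stand-in
    return sum(v * v for v in x)      # for the distribution-cost model

def epso_ga_sketch(dim=5, size=20, iters=100, seed=0):
    rng = random.Random(seed)
    rand_sol = lambda: [rng.uniform(-5, 5) for _ in range(dim)]
    swarm = [rand_sol() for _ in range(size)]        # PSO sub-population
    vel = [[0.0] * dim for _ in range(size)]
    pbest = [p[:] for p in swarm]
    ga_pop = [rand_sol() for _ in range(size)]       # GA sub-population
    gbest = min(swarm + ga_pop, key=fitness)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                    # decreasing inertia weight
        for i, p in enumerate(swarm):                # PSO update
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * rng.random() * (pbest[i][d] - p[d])
                             + 2.0 * rng.random() * (gbest[d] - p[d]))
                p[d] += vel[i][d]
            if fitness(p) < fitness(pbest[i]):
                pbest[i] = p[:]
        nxt = []                                     # GA update: tournament,
        for _ in range(size):                        # blend crossover, mutation
            a = min(rng.sample(ga_pop, 2), key=fitness)
            b = min(rng.sample(ga_pop, 2), key=fitness)
            child = [(u + v) / 2 for u, v in zip(a, b)]
            if rng.random() < 0.2:
                child[rng.randrange(dim)] += rng.gauss(0, 0.5)
            nxt.append(child)
        ga_pop = nxt
        # exchange step: each side's best replaces the other side's worst
        bi = min(range(size), key=lambda i: fitness(swarm[i]))
        bg = min(range(size), key=lambda i: fitness(ga_pop[i]))
        swarm[max(range(size), key=lambda i: fitness(swarm[i]))] = ga_pop[bg][:]
        ga_pop[max(range(size), key=lambda i: fitness(ga_pop[i]))] = swarm[bi][:]
        gbest = min([gbest] + swarm + ga_pop, key=fitness)[:]
    return fitness(gbest)

result = epso_ga_sketch()
```

The exchange couples the two searches: PSO's global exploration seeds the GA, while the GA's recombined solutions pull the swarm out of premature convergence.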
Machine learning algorithms have been used widely in various applications and areas. To fit a machine learning model to different problems, its hyper-parameters must be tuned. Selecting the best hyper-parameter configuration for a machine learning model has a direct impact on the model’s performance, and it often requires deep knowledge of machine learning algorithms and of appropriate hyper-parameter optimization techniques. Although several automatic optimization techniques exist, they have different strengths and drawbacks when applied to different types of problems. In this paper, optimizing the hyper-parameters of common machine learning models is studied. We introduce several state-of-the-art optimization techniques and discuss how to apply them to machine learning algorithms. Many available libraries and frameworks developed for hyper-parameter optimization problems are surveyed, and some open challenges of hyper-parameter optimization research are also discussed. Moreover, experiments are conducted on benchmark datasets to compare the performance of different optimization methods and to provide practical examples of hyper-parameter optimization. This survey will help industrial users, data analysts, and researchers to develop better machine learning models by identifying the proper hyper-parameter configurations effectively.
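As a concrete instance of the kind of technique such a survey covers, here is a minimal grid search, the simplest automatic hyper-parameter optimization method, over a toy loss surface. The hyper-parameter names, ranges, and objective are illustrative assumptions, not taken from the paper's experiments.

```python
from itertools import product

def grid_search(objective, space):
    """Exhaustive grid search: evaluate every hyper-parameter combination
    and return the best-scoring configuration (lower is better)."""
    names = list(space)
    best_cfg, best_score = None, float("inf")
    for values in product(*(space[n] for n in names)):
        cfg = dict(zip(names, values))
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

def toy_loss(cfg):   # stand-in validation loss; its optimum is lr=0.1, depth=4
    return (cfg["lr"] - 0.1) ** 2 + (cfg["depth"] - 4) ** 2

space = {"lr": [0.001, 0.01, 0.1, 1.0], "depth": [2, 3, 4, 5, 6]}
cfg, loss = grid_search(toy_loss, space)
```

Grid search's cost grows exponentially with the number of hyper-parameters, which is precisely what motivates the random-search, Bayesian, and evolutionary alternatives such surveys compare.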
Hydrodynamic models with rain-on-the-grid capabilities are usually too computationally expensive for automatic parameter estimation. In this paper, we present a global optimization-based algorithm to calibrate a fully distributed hydrologic-hydrodynamic and water quality model (HydroPol2D) using observed data (i.e., discharge or pollutant concentration) as input. The algorithm finds a near-optimal set of parameters that explains the observed gauged data. Although applied here in a poorly gauged urban catchment, the framework can be adapted to catchments with more detailed observations. The results of the automatic calibration indicate NSE = 0.99 for the V-Tilted catchment, RMSE = 830 mg L⁻¹ for the salt-concentration pollutograph on a wooden plane (i.e., 8.3% of the event mean concentration), and NSE = 0.89 in a real-world urban catchment. This paper also explores the issue of equifinality (i.e., multiple parameter sets giving the same calibration performance) in model calibration, showing how performance varies when calibrating with only an outlet gauge versus multiple gauges within the catchment.
•An automatic calibration algorithm for distributed flood and water quality modeling is developed.
•It uses the HydroPol2D model and calibrates water quantity and quality parameters globally.
•Data from observed gauges, such as discharges, depths, and concentrations, are used for calibration.
•Poorly placed gauges and low-runoff events can increase equifinality during calibration.
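The NSE values reported above are the Nash-Sutcliffe efficiency, the goodness-of-fit measure such calibrations maximize. A minimal implementation, with made-up observation values, is:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 means a perfect fit, 0.0 means the
    model predicts no better than the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / ss_tot

obs = [1.0, 2.0, 3.0, 4.0, 5.0]      # hypothetical gauged discharges
perfect = nse(obs, obs)              # simulated == observed
mean_model = nse(obs, [3.0] * 5)     # always predicting the mean
```

NSE can go arbitrarily negative for a poor model, which is why values of 0.99 and 0.89 indicate a very good calibration.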
Mobile edge computing (MEC) plays a significant role in reducing network delay for Mobile Augmented Reality (MAR) services by caching these services close to the User Equipments (UEs). MAR services collect UEs’ network traffic and orientation information and send the service results back to the UEs. However, UE mobility changes network traffic and orientation, negatively impacting MAR services’ access frequencies and service preferences. Moreover, the changed access frequencies also influence the workload of cached MAR services, resulting in an uneven workload across edge servers. Therefore, this paper formalizes cooperative service caching based on UEs’ location and orientation to optimize network delay and response fairness in MEC environments. To solve the problem, we propose a Service Caching strategy based on Regional Mobility features Awareness (SCRMA) algorithm, which consists of two stages. Firstly, the Regional Mobility features Awareness (RMA) algorithm perceives user mobility features and service preferences, providing a prerequisite for determining the service caching strategy. Then, a Service Caching strategy based on a Genetic Algorithm (SCGA) is proposed to optimize network delay and response fairness. Simulation experiments on a real dataset show that our service caching strategy reduces network delay, the fairness factor, and the total cost by averages of 11.49%, 33.24%, and 17.86%, respectively, compared with existing algorithms.
In smart grids, one of the most important objectives is to improve the grid’s situational awareness and allow fast-acting changes in power generation. In such systems, an energy management system should gather all the needed information, solve an optimization problem, and communicate back to each distributed energy resource (DER) its correct allocation of energy. This paper proposes a memory-based genetic algorithm (MGA) that optimally shares the power generation task among a number of DERs. The MGA is used to minimize the energy production cost in the smart grid framework, optimally sharing power generation in a microgrid comprising wind plants, photovoltaic plants, and a combined heat and power system. To evaluate the performance of the proposed approach, the results obtained by the MGA are compared with those of a standard genetic algorithm and two variants of particle swarm optimization. Simulation results highlight the superiority of the proposed MGA technique.
An autonomous pilot is crucial to integrally promoting the autonomy of an unmanned surface vehicle (USV). However, the mechanism for integrating decision and control within the overall autonomy is still unclear. In this paper, by organically bridging path planning and tracking, an autonomous pilot framework with waypoint generation, path smoothing, and policy guidance of a USV in congested waters is established for the first time. Incorporating elite and diversity operations into the genetic algorithm (GA), an elite-duplication GA (EGA) strategy is devised to optimally generate sparse waypoints in a constrained space. The B-spline technique is then deployed to provide flexible, smooth interpolation, yielding path smoothing supported by the optimal sparse waypoints. Seamlessly bridged by the parametric smooth path, a deep reinforcement learning (DRL) technique is employed to continuously extract in-depth pilotage policies, i.e., mappings from path tracking errors, collision risks, and control constraints to continuous control forces/torques. The entire spline-bridged EGA-DRL (SED) framework thus achieves autonomous global pilotage and local reaction in an organically modular manner. Comprehensive validations and comparisons across various real-world geographies demonstrate the effectiveness and superiority of the proposed SED autonomous pilot framework.
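The B-spline smoothing step, which turns the EGA's sparse waypoints into a smooth parametric path, can be sketched with a uniform cubic B-spline evaluated segment by segment. The waypoint coordinates below are hypothetical, and a production implementation would typically also handle knot endpoints so the curve reaches the first and last waypoints.

```python
def bspline_point(p0, p1, p2, p3, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1] using the
    four basis functions of the uniform cubic B-spline."""
    b0 = (1 - t) ** 3 / 6
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6
    b3 = t ** 3 / 6
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_path(waypoints, samples_per_segment=10):
    """Slide a window of four control waypoints along the sparse list and
    sample each cubic segment, producing a dense, C2-continuous path."""
    path = []
    for i in range(len(waypoints) - 3):
        for k in range(samples_per_segment):
            path.append(bspline_point(*waypoints[i:i + 4],
                                      k / samples_per_segment))
    return path

wps = [(0, 0), (1, 2), (3, 3), (5, 1), (6, 0)]   # hypothetical sparse waypoints
dense = smooth_path(wps)
```

Because the curve only approximates (rather than interpolates) the control waypoints, the EGA's safety margins around obstacles must account for the smoothing offset.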