Automatic tool breakage monitoring (TBM) is a vital technology for unmanned workshops and automatic production lines in CNC machining. Current TBM solutions mostly rely on direct data retrieved from external sensors mounted on a machine tool, e.g., force and vibration sensors, which complicates the entire TBM system and adds cost. In this paper, instead of relying on external sensors, indirect data from the CNC system, e.g., the spindle power, is utilized for TBM. The spindle power is a comprehensive signal: it contains not only a component of the cutting force, the direct factor determining tool breakage, but also the Coulomb friction and viscous damping forces, which are thermally sensitive and nonlinear, as well as other components such as the inertial force. For tool breakage to be recognized effectively and accurately, the spindle power data are preprocessed based on a mapping relationship between the spindle power and block numbers, and the component associated with tool breakage is isolated via empirical mode decomposition (EMD). From the extracted signal, a support vector machine (SVM)-based method is then proposed to identify tool breakage. Actual machining experiments were conducted on a CNC milling center to verify that the proposed method can successfully extract the signal associated with tool breakage from the spindle power and that tool breakage can be accurately detected whenever it happens.
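A minimal sketch may help make the classification stage concrete. The snippet below trains a linear SVM by batch sub-gradient descent on the L2-regularized hinge loss; the two features (a normalized power drop and a short-window variance) and all numbers are hypothetical stand-ins, not the paper's actual EMD-derived features, and a library SVM with a kernel would normally replace this hand-rolled trainer.

```python
def train_linear_svm(X, y, lam=0.001, lr=0.1, epochs=2000):
    """Batch sub-gradient descent on the L2-regularized hinge loss.
    X: feature vectors; y: +1 (tool breakage) / -1 (normal cutting)."""
    dim, n = len(X[0]), len(X)
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        gw, gb = [lam * wj for wj in w], 0.0
        for x, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, x)) + b
            if yi * score < 1:              # inside the margin: hinge is active
                for j in range(dim):
                    gw[j] -= yi * x[j] / n
                gb -= yi / n
        w = [wj - lr * gj for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical features per spindle-power window:
# [normalized power drop, short-window variance]
normal = [[0.05, 0.10], [0.08, 0.12], [0.02, 0.09], [0.06, 0.11]]
broken = [[0.90, 0.70], [0.85, 0.80], [0.95, 0.65], [0.80, 0.75]]
X = normal + broken
y = [-1] * len(normal) + [1] * len(broken)
w, b = train_linear_svm(X, y)
```

In the paper's setting the features would come from the EMD component of the spindle power rather than from synthetic numbers; the structure of the classifier is what the sketch illustrates.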
This paper addresses how much flood water can be conserved for use after the flood season through reservoir operation, taking into account the residual flood control capacity (the difference between the flood conveyance capacity and the expected inflow within a lead time). A two-stage model for dynamic control of the flood-limited water level (the maximum allowed water level during the flood season, DC-FLWL) is established considering forecast uncertainty and acceptable flood risk. It is found that DC-FLWL is applicable when the reservoir inflow ranges from small to medium levels of the historical records, while both forecast uncertainty and the acceptable risk in the downstream affect the feasible space of DC-FLWL. As forecast uncertainty increases (under a given risk level) or as the acceptable risk level decreases (under a given forecast uncertainty level), the minimum required safety margin for flood control increases, and the opportunity for DC-FLWL decreases. The hedging rules derived from the modeling framework illustrate either the dominant role of water conservation or flood control, or the trade-off between the two objectives, under different levels of forecast uncertainty and acceptable risk. These rules may provide useful guidelines for conserving flood water, especially in areas with heavy water stress. The analysis is illustrated via a case study of a real-world reservoir in northeastern China.
Key Points:
Analytical framework for flood water conservation
Forecast uncertainty and acceptable risk affect the scope
Two‐dimensional hedging rules are derived for water conservation and flood control
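The core first-stage logic of the abstract above can be sketched in a few lines: raise the flood-limited water level only by whatever residual flood control capacity remains after reserving room for the forecast inflow at the (1 − acceptable risk) confidence level. The function name, the normal forecast-error assumption, and all numbers below are hypothetical; the paper's two-stage model is far richer.

```python
from statistics import NormalDist

def dynamic_flwl(static_flwl, flood_capacity, inflow_forecast,
                 forecast_std, acceptable_risk):
    """Single-step sketch of dynamic FLWL control (hypothetical).
    All volumes/levels share one unit; the forecast error is assumed
    ~ N(0, forecast_std)."""
    z = NormalDist().inv_cdf(1.0 - acceptable_risk)
    inflow_bound = inflow_forecast + z * forecast_std   # upper confidence bound
    residual = flood_capacity - inflow_bound            # residual flood capacity
    return static_flwl + max(0.0, residual)             # never below static FLWL

base = dynamic_flwl(100.0, 80.0, 50.0, 10.0, 0.05)    # moderate uncertainty
vague = dynamic_flwl(100.0, 80.0, 50.0, 20.0, 0.05)   # doubled forecast std
strict = dynamic_flwl(100.0, 80.0, 50.0, 10.0, 0.01)  # lower acceptable risk
```

Here `base > strict > 100` while `vague == 100` reproduces the abstract's qualitative finding: more forecast uncertainty or a lower acceptable risk shrinks, and can eliminate, the room for conserving flood water.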
Feed rate in computerized numerical control (CNC) milling is an essential parameter that affects both the machining efficiency and the working conditions of the machine tool. In this paper, we propose a multi-objective feed rate optimization method for three-axis rough milling. The in-process data generated during machining are collected and aligned to build a data-based model of the spindle power, and an artificial neural network (ANN)-based modeling approach is proposed for the spindle power. Based on the proposed model, a multi-objective optimization framework is presented to optimize the feed rate with the objectives of increasing the machining efficiency and the loading stability of the spindle. To validate the feasibility and advantages of the proposed methods, a set of machining experiments is conducted, showing that the ANN-based model predicts the spindle power with good accuracy, and that the feed rate optimization framework, solved with the multi-objective evolutionary algorithm based on decomposition (MOEA/D), can effectively improve the machining efficiency and reduce the fluctuation of the spindle power.
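To illustrate the ANN modeling step, the sketch below trains a one-hidden-layer tanh network by plain stochastic gradient descent to map a single normalized input (standing in for feed rate) to spindle power. The network size, learning rate, and the toy power curve are assumptions for illustration; the paper's model takes richer in-process data as input.

```python
import math
import random

def train_ann(xs, ys, hidden=8, lr=0.05, epochs=2000, seed=1):
    """One-hidden-layer tanh network trained with per-sample gradient
    descent on squared error (hypothetical stand-in for the paper's ANN)."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(hidden)]
        return h, sum(w2[j] * h[j] for j in range(hidden)) + b2

    for _ in range(epochs):
        for x, y in zip(xs, ys):
            h, out = forward(x)
            err = out - y                       # dLoss/dOut for 0.5*err^2
            for j in range(hidden):
                grad_h = err * w2[j] * (1 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * err * h[j]
                w1[j] -= lr * grad_h * x
                b1[j] -= lr * grad_h
            b2 -= lr * err
    return lambda x: forward(x)[1]

# Toy data: spindle power (kW) rising with normalized feed rate.
xs = [i / 10 for i in range(11)]
ys = [1.0 + 2.0 * x + 0.5 * x * x for x in xs]
model = train_ann(xs, ys)
max_err = max(abs(model(x) - y) for x, y in zip(xs, ys))
```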
This paper presents a new Two-Stage Bayesian Stochastic Dynamic Programming (TS-BSDP) model for real-time operation of cascaded hydropower systems to handle the varying uncertainty of inflow forecasts derived from Quantitative Precipitation Forecasts. In this model, the inflow forecasts are considered to have increasing uncertainty with extending lead time; thus the forecast horizon is divided into two periods: the inflows in the first period are assumed to be accurate, and the inflows in the second period are assumed to be highly uncertain. Two operation strategies are developed to derive hydropower operation policies for the first period and for the entire forecast horizon using TS-BSDP. The newly developed model is tested on China's Hun River cascade hydropower system and compared with three popular stochastic dynamic programming models. Comparative results show that the TS-BSDP model exhibits significantly improved system performance in terms of power generation and system reliability due to its explicit and effective utilization of varying degrees of inflow forecast uncertainty. The results also show that decision strategies should be determined considering the magnitude of uncertainty in inflow forecasts. Further, this study confirms the previous finding that the benefit in hydropower generation gained from using a longer horizon of inflow forecasts is diminished by higher uncertainty, and further reveals that this benefit reduction can be substantially mitigated through explicit consideration of the varying magnitudes of forecast uncertainty in the decision-making process.
Key Points:
A two-stage BSDP model is developed
Considering varying levels of forecast uncertainty improves performance
Benefit reduction from use of longer inflow forecasts is mitigated
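The two-stage structure described above (an accurate first-period inflow, an uncertain second-period inflow) can be sketched as a tiny recursion over a discretized release grid. The sqrt(·) benefit, the scenario set, and the storage numbers are hypothetical; real TS-BSDP works over many stages, Bayesian-updated inflow distributions, and cascaded reservoirs.

```python
import math

def two_stage_policy(s0, inflow_now, inflow_scens, s_max, r_max, levels=21):
    """Two-stage recursion: the first-period inflow is treated as accurate,
    the second-period inflow as uncertain (equally likely scenarios).
    sqrt(r) is a hypothetical concave power-generation proxy."""
    grid = [s_max * i / (levels - 1) for i in range(levels)]

    def benefit(r):
        return math.sqrt(r)

    def stage2(s):
        # Expected best second-stage benefit over inflow scenarios.
        total = 0.0
        for q in inflow_scens:
            feasible = [benefit(r) for r in grid
                        if r <= r_max and 0.0 <= s + q - r <= s_max]
            total += max(feasible, default=0.0)
        return total / len(inflow_scens)

    best_r, best_val = 0.0, float("-inf")
    for r in grid:
        if r <= r_max and 0.0 <= s0 + inflow_now - r <= s_max:
            val = benefit(r) + stage2(s0 + inflow_now - r)
            if val > best_val:
                best_r, best_val = r, val
    return best_r, best_val

release, value = two_stage_policy(
    s0=20.0, inflow_now=10.0, inflow_scens=[0.0, 10.0, 40.0],
    s_max=100.0, r_max=40.0)
```

With these numbers the optimum hedges: it releases 25 units now rather than the maximum feasible 30, because the concave benefit makes it worthwhile to carry water into the uncertain second stage.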
This paper proposes a feedrate optimization method for end milling using the internal data of the CNC system, i.e., the spindle power, the block number, and the combined speed of the feed axes, based on the controlled elitist non-dominated sorting genetic algorithm (the controlled NSGA-II), to address the multi-objective nonlinear optimization problem of simultaneously increasing the machining efficiency and decreasing the fluctuation of the spindle power. To establish the objective functions and their constraints, a spindle power prediction model considering different milling operations, i.e., up-milling and down-milling, is proposed, from which the spindle power can be accurately predicted. Compared with the traditional approach of optimizing the feedrate via the cutting force, the spindle power used in the proposed method is more convenient to acquire and more cost-effective. To validate the proposed methods, a set of experiments is conducted to demonstrate the feasibility of the spindle power prediction model as well as the advantage of the controlled NSGA-II-based method in improving the machining efficiency and balancing the tool load.
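The core operation shared by NSGA-II variants, fast non-dominated sorting, can be shown compactly. The snippet below ranks candidate feedrate settings by Pareto front under two minimized objectives; the objective values are synthetic, and a full controlled NSGA-II would add crowding distance, controlled elitism, and genetic operators on top of this.

```python
def non_dominated_sort(points):
    """Fast non-dominated sorting (NSGA-II core), minimizing all objectives.
    Returns a list of fronts, each a list of indices into `points`."""
    n = len(points)

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    dominated_by = [[] for _ in range(n)]   # indices each point dominates
    counts = [0] * n                        # how many points dominate this one
    fronts = [[]]
    for p in range(n):
        for q in range(n):
            if dominates(points[p], points[q]):
                dominated_by[p].append(q)
            elif dominates(points[q], points[p]):
                counts[p] += 1
        if counts[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in dominated_by[p]:
                counts[q] -= 1
                if counts[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

# Synthetic objectives: (machining time, spindle-power fluctuation), both minimized.
pts = [(1.0, 5.0), (2.0, 3.0), (3.0, 1.0), (2.5, 3.5), (4.0, 4.0)]
fronts = non_dominated_sort(pts)
```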
Contour error compensation of the computer numerical control (CNC) machine tool is a vital technology that can improve machining accuracy and quality. To achieve this goal, the tracking error of a feeding axis, which is a dominant cause of the contour error, should first be modeled, and then a proper compensation strategy should be determined. However, building a precise tracking error prediction model is challenging because of nonlinear effects such as backlash and friction in the feeding axis; moreover, the optimal compensation parameter is difficult to determine because it is sensitive to the machining tool path. In this paper, a set of novel approaches for contour error prediction and compensation is presented based on deep learning and reinforcement learning. By utilizing the internal data of the CNC system, the tracking error of the feeding axis is modeled as a Nonlinear Auto-Regressive Long Short-Term Memory (NAR-LSTM) network, accounting for all the nonlinear effects of the feeding axis. Given the contour error calculated from the predicted tracking error of each feeding axis, a compensation strategy is presented, with its parameters identified efficiently by a Time-Series Deep Q-Network (TS-DQN) designed in our work. To validate the feasibility and advantages of the proposed approaches, extensive experiments are conducted, demonstrating that the approaches predict the tracking error and contour error with high precision (better than about 99% and 90%, respectively), that the compensated contour error is significantly reduced (by about 60-85%), and that the machining quality is improved drastically (machining error reduced by about 50%).
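The geometric step that converts per-axis tracking errors into a contour error is simple for a locally linear tool path and can be sketched directly; the NAR-LSTM prediction and TS-DQN compensation of the paper sit on top of this. The sign convention below is an assumption for illustration.

```python
import math

def contour_error(tracking_err, path_dir):
    """Contour error for a locally linear tool path in 2-D: the component
    of the axis tracking-error vector perpendicular to the path direction.
    tracking_err: (ex, ey); path_dir: (tx, ty), any nonzero length."""
    ex, ey = tracking_err
    tx, ty = path_dir
    norm = math.hypot(tx, ty)
    tx, ty = tx / norm, ty / norm
    # Signed perpendicular distance via the 2-D cross product.
    return ex * ty - ey * tx
```

For a path along +x, a tracking error of (0.01, 0.02) mm gives a contour error of magnitude 0.02 mm: the x component is pure tracking lag and does not move the tool off the commanded line, while the y component does.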
• Both input and output data of GLDAS are evaluated.
• GLDAS downward solar radiation and wind speed data are overestimated.
• Correction equations are developed to revise solar radiation and wind speed.
• GLDAS land surface temperatures are highly accurate.
• Seasonal and spatial distributions of fluxes are accurate after corrections.
Observed water and energy fluxes are sparse in many regions of the world. The overall aim of this study is to demonstrate the applicability of Global Land Data Assimilation System-Noah (GLDAS/Noah) data for basin-scale water and energy studies in terms of inputs, outputs, and seasonal and spatial distributions. A Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) is employed to evaluate the output of GLDAS/Noah and to simulate the seasonal and spatial distributions of fluxes after calibration with discharges and MODIS land surface temperatures (LSTs) in a semiarid catchment. GLDAS/Noah air temperatures and humidity agree well with observations, but GLDAS/Noah overestimates downward solar radiation and wind speed. LSTs and upward longwave radiation from GLDAS/Noah and WEB-DHM are comparable, but GLDAS/Noah shows larger upward shortwave radiation, net radiation, latent heat, and sensible heat fluxes, and a smaller ground heat flux amplitude. Two correction functions are developed for downward solar radiation and wind speed. The accuracy of discharges and LSTs is improved after the corrections. The simulated seasonal and spatial distributions of water and energy fluxes and states (LSTs, evapotranspiration, surface, root-zone, and deep soil wetness, ground heat flux, latent heat flux, sensible heat flux, upward longwave radiation, and upward shortwave radiation) show high accuracy using corrected GLDAS/Noah data. The findings provide insight into the applicability of GLDAS/Noah.
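A correction function of the kind described, mapping an overestimated forcing variable to observations, is often a simple regression. The sketch below fits y = a·x + b by ordinary least squares; the numbers are synthetic and the paper's actual correction equations come from its own station data.

```python
def fit_linear_correction(gldas, observed):
    """Ordinary least squares fit observed = a * gldas + b (illustrative)."""
    n = len(gldas)
    mx = sum(gldas) / n
    my = sum(observed) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(gldas, observed))
    sxx = sum((x - mx) ** 2 for x in gldas)
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Toy case: GLDAS overestimates by 25% plus a 10-unit offset,
# so the recovered correction is y = 0.8*x - 8.
obs = [100.0, 150.0, 200.0, 250.0, 300.0]
gldas = [1.25 * y + 10.0 for y in obs]
a, b = fit_linear_correction(gldas, obs)
```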
Flood water conservation can benefit water uses, especially in areas with water stress, but can also pose additional flood risk. The potential of flood water conservation is affected by many factors, especially decision makers' preference for water conservation and reservoir inflow forecast uncertainty. This paper discusses the individual and joint effects of these two factors on the trade-off between flood control and water conservation, using a multiobjective, two-stage reservoir optimal operation model. It is shown that hedging between current water conservation and future flood control exists only when forecast uncertainty or decision makers' preference is within a certain range; beyond this range, hedging is trivial and the multiobjective optimization problem reduces to a single-objective problem with either flood control or water conservation. Different types of hedging rules are identified for different levels of flood water conservation preference, forecast uncertainty, acceptable flood risk, and reservoir storage capacity. Critical values of decision preference (represented by a weight) and inflow forecast uncertainty (represented by a standard deviation) are identified. These inform reservoir managers of the feasible range of their preference for water conservation and of the thresholds of forecast uncertainty, specifying the possible water conservation within those thresholds. The analysis also provides inputs for setting up an optimization model by providing the range of objective weights and the choice of hedging rule types. A case study is conducted to illustrate the concepts and analyses.
Key Points
Exploring hedging rules for reservoir operation with multiple objectives
Determining the maximum allowable forecast uncertainty for a given inflow level
Determining the range of preferences that enables hedging and the critical values for switching policies
Risk analysis is vital for reservoir flood control operation with forecasts, because forecast uncertainties may create risks of multiple hazard events, including single-hazard events and the union and intersection of single-hazard events (UHE and IHE). The probabilities of UHE and IHE are a major concern of decision-makers. This research proposes an analytical flood risk probability calculation framework for single-hazard events, UHE, and IHE caused by uncertainties in flood forecasting, taking Dahuofang Reservoir, located in the Hunhe River basin, Northeast China, as a case study. In the framework, the risk is calculated as the integral of the joint probability density function (pdf) of the risk sources over the hazard domain, i.e., the set of values of the risk sources that cause hazards. The methods for determining the hazard domain and the pdf of the risk sources, and the procedure of the risk analysis, are elaborated. Results for single-hazard events show that the proposed methodology achieves higher precision than the risk analysis method based on the law of total probability. Meanwhile, the flood risks of UHE and IHE are calculated for reservoir operation under different flood-limited water levels. The study provides a new method of flood risk analysis for flood control reservoir operation.
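The union/intersection probabilities can be illustrated numerically. Where the paper integrates the joint pdf of the risk sources over the hazard domain analytically, the sketch below estimates P(UHE) and P(IHE) by Monte Carlo for two hazard events driven by correlated standard-normal risk sources; the correlation, thresholds, and hazard definitions are hypothetical.

```python
import random

def union_intersection_risk(threshold=1.5, rho=0.6, n=100_000, seed=42):
    """Monte Carlo estimate of P(UHE) and P(IHE) for two hazard events
    whose risk sources are bivariate standard normal with correlation rho."""
    rng = random.Random(seed)
    hits_union = hits_both = 0
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = rho * z1 + (1.0 - rho * rho) ** 0.5 * rng.gauss(0.0, 1.0)
        a = z1 > threshold    # hazard A, e.g. reservoir level exceedance
        b = z2 > threshold    # hazard B, e.g. downstream discharge exceedance
        hits_union += a or b
        hits_both += a and b
    return hits_union / n, hits_both / n

p_union, p_inter = union_intersection_risk()
```

By construction P(IHE) ≤ P(single hazard) ≤ P(UHE), which is why decision-makers treat the union as the pessimistic bound and the intersection as the compound-event risk.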
• A framework for dynamically quantifying algorithm parameter impacts is developed.
• Interactions among parameters have a significant influence on algorithm performance.
• The reflection parameter of SCE-UA can be more influential than the number of complexes.
It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying this influence is complex and difficult due to high computational demands and the dynamic nature of the search. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it can handle small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show that the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search process, including the individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search process.
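The ANOVA variance decomposition at the heart of the framework can be illustrated on a balanced two-factor design: main-effect sums of squares come from factor-level means, and the interaction sum of squares from the cell means' departure from additivity. The factor levels, replicate counts, and responses below are synthetic, not the SCE-UA results.

```python
def anova_two_way(data):
    """Balanced two-way ANOVA sums of squares.
    data[i][j] = list of replicate responses at level i of factor A
    and level j of factor B."""
    I, J, K = len(data), len(data[0]), len(data[0][0])
    grand = sum(y for row in data for cell in row for y in cell) / (I * J * K)
    a_mean = [sum(y for cell in data[i] for y in cell) / (J * K)
              for i in range(I)]
    b_mean = [sum(y for i in range(I) for y in data[i][j]) / (I * K)
              for j in range(J)]
    cell = [[sum(data[i][j]) / K for j in range(J)] for i in range(I)]
    ss_a = J * K * sum((m - grand) ** 2 for m in a_mean)        # main effect A
    ss_b = I * K * sum((m - grand) ** 2 for m in b_mean)        # main effect B
    ss_ab = K * sum((cell[i][j] - a_mean[i] - b_mean[j] + grand) ** 2
                    for i in range(I) for j in range(J))        # A x B interaction
    return ss_a, ss_b, ss_ab

# Synthetic performance metric with a built-in A x B interaction
# (y = 10 + 2*a + 3*b + 4*a*b for factor levels a, b in {0, 1}; 2 replicates).
data = [[[10.0, 10.0], [13.0, 13.0]],
        [[12.0, 12.0], [19.0, 19.0]]]
ss_a, ss_b, ss_ab = anova_two_way(data)
```

A nonzero `ss_ab` is exactly the signal the paper emphasizes: parameter interactions carry variance that the main effects alone cannot explain.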