Adam is an adaptive gradient descent method commonly used in back-propagation (BP) training of feed-forward neural networks (FFNNs). However, it can easily become trapped in local optima. To address this problem, metaheuristic approaches have been proposed for training FFNNs. While these approaches have stronger global search capabilities and can escape local optima more readily, their convergence performance is inferior to that of Adam. The proposed algorithm, an ensemble of differential evolution and Adam (EDEAdam), integrates a modern variant of the differential evolution algorithm with Adam, using the two sub-algorithms to evolve two sub-populations in parallel and thereby achieving good results in both global and local search. Compared with traditional algorithms, this integration gives EDEAdam powerful capabilities for handling different classification problems. Experimental results show that EDEAdam not only exhibits improved global and local search capabilities but also converges quickly.
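The abstract gives no pseudocode, so the following is a minimal, hypothetical Python sketch of the dual-subpopulation idea it describes: one subpopulation evolved by DE/rand/1/bin for global search, the other updated per individual by Adam (here on a toy Rastrigin-like objective with numerical gradients, standing in for a network's training loss), with occasional migration of the overall best individual. All hyperparameters, the objective, and the migration rule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy non-convex objective standing in for a network's training loss
    return float(np.sum(w**2 - 10 * np.cos(2 * np.pi * w) + 10))

def num_grad(f, w, eps=1e-5):
    # Central-difference gradient (a stand-in for backprop gradients)
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w); d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

def de_step(pop, f, F=0.5, CR=0.9):
    # DE/rand/1/bin: mutation, binomial crossover, greedy selection
    n, dim = pop.shape
    new = pop.copy()
    for i in range(n):
        idx = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        trial = np.where(rng.random(dim) < CR, a + F * (b - c), pop[i])
        if f(trial) < f(pop[i]):
            new[i] = trial
    return new

def adam_step(pop, f, m, v, t, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    # One Adam update applied independently to every individual
    for i in range(len(pop)):
        g = num_grad(f, pop[i])
        m[i] = b1 * m[i] + (1 - b1) * g
        v[i] = b2 * v[i] + (1 - b2) * g**2
        mhat, vhat = m[i] / (1 - b1**t), v[i] / (1 - b2**t)
        pop[i] -= lr * mhat / (np.sqrt(vhat) + eps)
    return pop

dim, half = 5, 10
pop_de = rng.uniform(-5, 5, (half, dim))     # global-search subpopulation
pop_adam = rng.uniform(-5, 5, (half, dim))   # local-search subpopulation
m, v = np.zeros_like(pop_adam), np.zeros_like(pop_adam)
init_best_loss = min(loss(w) for w in np.vstack([pop_de, pop_adam]))

for t in range(1, 101):
    pop_de = de_step(pop_de, loss)
    pop_adam = adam_step(pop_adam, loss, m, v, t)
    if t % 25 == 0:  # hypothetical migration: share the overall best
        both = np.vstack([pop_de, pop_adam])
        champ = both[np.argmin([loss(w) for w in both])]
        pop_de[0], pop_adam[0] = champ.copy(), champ.copy()

best = min(np.vstack([pop_de, pop_adam]), key=loss)
print(loss(best))
```

Because DE selection is greedy and migration only copies the incumbent best, the best loss in the DE subpopulation is non-increasing over iterations, which is the sense in which the DE side preserves global progress while Adam refines locally.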
Static and dynamic clustering algorithms are a fundamental tool in any machine learning library. Most efforts in developing dynamic machine learning and data mining algorithms have focused on the sliding-window model or simpler models. However, many real-world applications require dealing with arbitrary deletions and insertions. For example, one might need to remove data items that are not necessarily the oldest, because they have been flagged as containing inappropriate content or due to privacy concerns. Clustering trajectory data may also require more general update operations. We develop a (2+ε)-approximation algorithm for the k-center clustering problem with "small" amortized cost under the fully dynamic adversarial model. In this model, points can be added or removed arbitrarily, provided that the adversary does not have access to the random choices of our algorithm. The amortized cost of our algorithm is poly-logarithmic when the ratio between the maximum and minimum distance between any two input points is bounded by a polynomial, while k and ε are constant.
Furthermore, we significantly improve the memory requirement of our fully dynamic algorithm, although at the cost of a worse approximation ratio of 4+ε. Our theoretical results are complemented by an extensive experimental evaluation on dynamic data from Twitter and Flickr, as well as trajectory data, demonstrating the effectiveness of our approach.
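The fully dynamic algorithm itself is intricate; as background for the approximation ratios quoted above, here is a sketch of the classical static baseline, Gonzalez's farthest-first traversal, which gives a 2-approximation for static k-center. This is not the paper's dynamic algorithm, only the standard point of comparison.

```python
import math
import random

def kcenter_greedy(points, k, seed=0):
    # Gonzalez's farthest-first traversal: pick an arbitrary first center,
    # then repeatedly add the point farthest from the current centers.
    # The resulting radius is at most twice the optimal k-center radius.
    rnd = random.Random(seed)
    centers = [points[rnd.randrange(len(points))]]
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])
        for j, p in enumerate(points):
            dist[j] = min(dist[j], math.dist(p, points[i]))
    return centers, max(dist)  # radius = max distance to the nearest center
```

Supporting arbitrary insertions and deletions with polylogarithmic amortized cost, as the paper does, requires substantially more machinery than this one-shot greedy pass.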
Ramp events are a significant source of uncertainty in wind power generation. Developing statistical models of wind power ramps from historical data is important for designing intelligent distribution and market mechanisms for a future electric grid. This requires robust detection schemes for identifying wind ramps in data. In this paper, we propose an optimal detection technique for identifying wind ramps in large time series. The technique relies on defining a family of scoring functions associated with any rule for defining ramps on an interval of the time series. A dynamic programming recursion is then used to find all such ramp events. The identified wind ramps are used to propose a new stochastic framework for characterizing wind ramps. Extensive statistical analysis is performed based on this framework, characterizing ramping duration and rates as well as other key features needed for evaluating the impact of wind ramps on power system operation. In particular, the evaluation of new ancillary services and wind ramp forecasting can benefit from the proposed approach.
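The interval-scoring plus dynamic-programming idea can be illustrated generically: partition the series so as to maximize the total score of the selected intervals, using the recursion best[j] = max over i < j of best[i] + score(i, j). The scoring rule below (net change above a threshold) is a hypothetical stand-in for the paper's family of scoring functions, not the one actually proposed.

```python
def detect_ramps(x, score):
    # best[j]: maximum total score over any partition of x[:j];
    # back[j] remembers the split point that achieved it.
    n = len(x)
    best = [0.0] * (n + 1)
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        best[j], back[j] = best[j - 1], j - 1  # leave point j-1 outside any ramp
        for i in range(j - 1):
            s = best[i] + score(x, i, j)
            if s > best[j]:
                best[j], back[j] = s, i
    # Recover the positively scored intervals by walking the back-pointers
    ramps, j = [], n
    while j > 0:
        i = back[j]
        if j - i > 1 and score(x, i, j) > 0:
            ramps.append((i, j - 1))  # inclusive endpoints
        j = i
    return ramps[::-1]

def ramp_score(x, i, j, min_change=1.0):
    # Hypothetical scoring rule: reward intervals with a large net change;
    # only intervals steeper than min_change score positively.
    return abs(x[j - 1] - x[i]) - min_change
```

The double loop makes this O(n^2); for large series, restricting the inner loop to a bounded look-back window is a common way to keep the recursion tractable.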
Data clustering is a well-known data analysis technique for organizing unlabeled data points into clusters on the basis of similarity measures. Real-world applications of data clustering include bioinformatics, vector quantization, data mining, geographical information systems, pattern recognition, image processing, and wireless sensor networks. The data in a cluster are similar to one another (minimizing the intra-cluster distance) and differ from the data in other clusters (maximizing the inter-cluster distance). The clustering problem has been proven to be NP-hard but can be addressed with meta-heuristic algorithms, such as ant colony optimization, genetic algorithms, the gravitational search algorithm (GSA), and particle swarm optimization (PSO). This paper proposes a memetic clustering algorithm, based on PSO and GSA, that combines efficient search with fast convergence, called the memetic particle gravitation optimization (MPGO) algorithm. The two main mechanisms of MPGO are hybrid operation and diversity enhancement. The former exchanges individuals between the two subpopulations after a predefined number of function evaluations (FEs), whereas the latter applies an enhancement operator, similar to the crossover process of differential evolution, to enhance the diversity of each system. Individuals from the PSO and GSA systems are selected for the exchange of solutions using the roulette-wheel approach. The performance of the proposed algorithm was evaluated on 52 benchmark test functions, six UCI machine learning benchmarks, and image segmentation of six well-known images. A comparison with existing algorithms verified the superior performance of the proposed algorithm in terms of fitness value, accuracy rate, and peak signal-to-noise ratio.
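The roulette-wheel approach used to pick individuals for exchange is standard fitness-proportionate selection: an individual is chosen with probability proportional to its fitness. A minimal sketch, assuming strictly positive fitness values (higher is better):

```python
import random

def roulette_select(fitness, rnd):
    # Fitness-proportionate (roulette-wheel) selection: return index i
    # with probability fitness[i] / sum(fitness). Assumes fitness[i] > 0.
    total = sum(fitness)
    r = rnd.uniform(0, total)
    acc = 0.0
    for i, f in enumerate(fitness):
        acc += f
        if r <= acc:
            return i
    return len(fitness) - 1  # guard against floating-point round-off
```

In a hybrid like MPGO, each subpopulation would apply such a draw to decide which of its individuals are handed to the other system; for minimization problems, fitness values must first be transformed so that better solutions get larger wheel slices.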
With the advent of cloud manufacturing (CMfg), more and more services in CMfg platforms may provide the same functionality but differ in performance. To ensure that the manufacturing cloud matches complicated task requirements, composited CMfg service optimal selection (CCSOS) is becoming increasingly important. This study proposes a new approach to such CCSOS problems, the so-called hybrid artificial bee colony (HABC) algorithm, which employs both the probabilistic model of the Archimedean copula estimation of distribution algorithm (ACEDA) and the chaos operators of the global best-guided artificial bee colony to generate offspring individuals, taking into account quality of service (QoS) and the CMfg environment. CCSOS problems of different scales are used to evaluate the performance of the proposed HABC. Experimental results show that HABC finds better solutions than algorithms such as the genetic algorithm, particle swarm optimization, and the basic artificial bee colony algorithm.
The Harmony Search Algorithm (HSA) is a swarm intelligence optimization algorithm that has been successfully applied to a broad range of clustering applications, including data clustering, text clustering, fuzzy clustering, image processing, and wireless sensor networks. We provide a comprehensive survey of the literature on HSA and its variants, analyze its strengths and weaknesses, and suggest future research directions.
Medical technological advancements have led to the creation of various large datasets with numerous attributes. The presence of redundant and irrelevant features in these datasets negatively influences learning algorithms and degrades their performance. Using effective features in data mining and analysis tasks such as classification can increase the accuracy of the results and of the decisions made on their basis. This benefit becomes more pronounced when dealing with challenging, large-scale problems in medical applications. Nature-inspired metaheuristics have shown superior performance in finding optimal feature subsets in the literature. In this work, a wrapper feature selection approach is presented based on the recently proposed Aquila optimizer (AO). The wrapper approach uses AO as a search algorithm to discover the most effective feature subset. The S-shaped binary Aquila optimizer (SBAO) and the V-shaped binary Aquila optimizer (VBAO) are two binary algorithms suggested for feature selection in medical datasets. Binary position vectors are generated using S- and V-shaped transfer functions while the search space remains continuous. The suggested algorithms are compared with six recent binary optimization algorithms on seven benchmark medical datasets. The results demonstrate that both proposed BAO variants can improve classification accuracy on these medical datasets relative to the comparative algorithms. The proposed algorithm is also tested on a real-world COVID-19 dataset. The findings show that SBAO outperforms the comparative algorithms, selecting the fewest features while achieving the highest accuracy.
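S- and V-shaped transfer functions are standard devices in binary metaheuristics for mapping a continuous search space to bit vectors: an S-shaped sigmoid turns a continuous coordinate into the probability that the corresponding bit is 1, while a V-shaped function gives the probability of flipping the current bit. A minimal sketch with common choices of the two function families (the exact transfer functions used by SBAO/VBAO may differ):

```python
import math
import random

def s_transfer(x):
    # S-shaped transfer: sigmoid maps a continuous coordinate to the
    # probability that the corresponding bit is set to 1.
    return 1.0 / (1.0 + math.exp(-x))

def v_transfer(x):
    # V-shaped transfer: |tanh(x)| gives the probability of *flipping*
    # the current bit (large |x| means a likely flip).
    return abs(math.tanh(x))

def binarize_s(x, rnd):
    return 1 if rnd.random() < s_transfer(x) else 0

def binarize_v(x, bit, rnd):
    return 1 - bit if rnd.random() < v_transfer(x) else bit

rnd = random.Random(42)
continuous_position = [2.5, -3.0, 0.0, 4.0]  # illustrative AO coordinates
bits_s = [binarize_s(x, rnd) for x in continuous_position]
bits_v = [binarize_v(x, 0, rnd) for x in continuous_position]
```

The search itself stays continuous, as the abstract notes: only the candidate evaluated by the wrapper's classifier is binarized through one of these rules.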
Background: Previous studies have suggested that the axon guidance proteins Slit1 and Slit2 cooperate to establish the optic chiasm in its correct position at the ventral diencephalic midline. This is based on the observation that, although both Slit1 and Slit2 are expressed around the ventral midline, mice defective in either gene alone exhibit few or no axon guidance defects at the optic chiasm, whereas embryos lacking both Slit1 and Slit2 develop a large additional chiasm anterior to the chiasm's normal position. Here we used steerable filters to quantify key properties of the population of axons at the chiasm in wild-type, Slit1^(-/-), Slit2^(-/-), and Slit1^(-/-) Slit2^(-/-) embryos. Results: We applied the steerable-filter algorithm successfully to images of embryonic retinal axons labelled from a single eye shortly after they have crossed the midline. We combined data from multiple embryos of the same genotype and made statistical comparisons of axonal distributions, orientations, and curvatures between genotype groups. We compared data from the analysis of axons with data on the expression of Slit1 and Slit2. The results showed a misorientation and a corresponding anterior shift in the position of many axons at the chiasm of both Slit2^(-/-) and Slit1^(-/-) Slit2^(-/-) mutants. There were very few axon defects at the chiasm of Slit1^(-/-) mutants. Conclusions: We found defects of the chiasms of Slit1^(-/-) Slit2^(-/-) and Slit1^(-/-) mutants similar to those reported previously. In addition, we discovered previously unreported defects resulting from loss of Slit2 alone. This indicates the value of a quantitative approach to complex pathway analysis and shows that Slit2 can act alone to control aspects of retinal axon routing across the ventral diencephalic midline.
Algorithms have risen to become one, if not the central technology for producing, circulating, and evaluating knowledge in multiple societal arenas. In this book, scholars from the social sciences, humanities, and computer science argue that this shift has, and will continue to have, profound implications for how knowledge is produced and what and whose knowledge is valued and deemed valid. To attend to this fundamental change, the authors propose the concept of algorithmic regimes and demonstrate how they transform the epistemological, methodological, and political foundations of knowledge production, sensemaking, and decision-making in contemporary societies. Across sixteen chapters, the volume offers a diverse collection of contributions along three perspectives on algorithmic regimes: the methods necessary to research and design algorithmic regimes, the ways in which algorithmic regimes reconfigure sociotechnical interactions, and the politics engrained in algorithmic regimes.
The purpose of this paper is to propose a new hybrid metaheuristic to solve the feature selection problem. Feature selection is the process of finding the most relevant feature subset according to some criteria. Hybrid metaheuristics are a recent trend in the development of optimization algorithms. In this paper, two different hybrid models based on spotted hyena optimization (SHO) are designed for the feature selection problem. The SHO algorithm can find an optimal or nearly optimal feature subset in the feature space that minimizes a given fitness function. In the first model, the simulated annealing (SA) algorithm is embedded in the SHO algorithm (called SHOSA-1) to enhance the best solution found by SHO after each iteration. In the second model, SA enhances the final solution obtained by the SHO algorithm (called SHOSA-2). The performance of these methods is evaluated on 20 datasets from the UCI repository. The experiments show that SHOSA-1 performs better than the native algorithm and SHOSA-2. SHOSA-1 was then compared with six state-of-the-art optimization algorithms. The experimental results confirm that SHOSA-1 improves classification accuracy and reduces the number of selected features compared with other wrapper-based optimization algorithms, demonstrating its strong performance in search-space exploration and feature selection.
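The SA refinement step used in SHOSA-1 can be sketched generically: starting from the solution SHO has found, flip one bit of the feature mask at a time and accept worse moves with probability exp(-Δ/T) under a geometric cooling schedule. The neighborhood (single-bit flips), the schedule, and the fitness function below are illustrative assumptions, not the paper's exact configuration.

```python
import math
import random

def sa_refine(solution, fitness, iters=200, t0=1.0, cooling=0.95, seed=0):
    # Simulated annealing refinement of a binary feature mask (minimization):
    # flip one random bit per step; always accept improvements and accept
    # worse candidates with probability exp(-delta / T).
    rnd = random.Random(seed)
    cur, cur_f = list(solution), fitness(solution)
    best, best_f = list(cur), cur_f
    T = t0
    for _ in range(iters):
        cand = list(cur)
        i = rnd.randrange(len(cand))
        cand[i] = 1 - cand[i]           # single-bit-flip neighborhood
        f = fitness(cand)
        delta = f - cur_f
        if delta < 0 or rnd.random() < math.exp(-delta / T):
            cur, cur_f = cand, f
            if f < best_f:
                best, best_f = list(cand), f
        T *= cooling                    # geometric cooling schedule
    return best, best_f
```

In a wrapper setting, fitness would combine classifier error with a penalty on the number of selected features; running this refinement after each SHO iteration (SHOSA-1) rather than only once at the end (SHOSA-2) is the difference between the two models.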