Abstract
The Modular Multilevel Converter (MMC) is widely studied and deployed thanks to advantages such as a high number of output levels, low switching frequency, and good waveform quality. Balancing the voltages of the sub-module capacitors is one of the key research directions for the MMC. This paper first proposes a sorting method based on comparison between two groups, called the OF sorting method. The maximum time complexity of this sorting method is N-1, but its applicability is limited. A new MMC hybrid sorting voltage-balancing algorithm is therefore proposed on this basis: a capacitor voltage-balancing control strategy that combines an improved equalization sorting algorithm with an improved insertion sorting algorithm. It reduces the computational burden by monitoring the sub-module capacitor voltages in real time, defining a dispersion index across the sub-module voltages, and then controlling whether the sorting module runs or holds, so that the number of elements to be sorted is reduced step by step. Finally, an MMC model is built in Matlab/Simulink. The simulation results show that the improved balancing control strategy effectively reduces the computational burden and the switching frequency of the sub-modules while maintaining good capacitor voltage balance.
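For context, the conventional sorting-based balancing scheme that such hybrid algorithms refine can be sketched as follows; the function name and the sample voltages are illustrative, not taken from the paper. Each control cycle, the arm's sub-module capacitor voltages are sorted, and the required number of sub-modules is inserted so that a charging arm current raises the lowest voltages while a discharging current drains the highest:

```python
def select_submodules(voltages, n_on, arm_current):
    """Pick which of the arm's sub-modules to insert this control cycle.

    With a charging arm current (positive here, by convention), insert the
    n_on lowest-voltage capacitors so they charge up; with a discharging
    current, insert the n_on highest so they discharge.
    Returns the selected sub-module indices in ascending order."""
    order = sorted(range(len(voltages)), key=lambda i: voltages[i],
                   reverse=(arm_current < 0))
    return sorted(order[:n_on])

# Example: 6 sub-modules (voltages in kV), 3 must be inserted.
gates = select_submodules([2.1, 1.9, 2.0, 2.2, 1.8, 2.05], 3, arm_current=5.0)
# Charging current -> the three lowest-voltage capacitors: indices 1, 2, 4
```

The full sort every cycle is exactly the computational cost the paper's dispersion-index gating aims to avoid by skipping re-sorting when the voltages are already close together.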
A look at the elegance and efficiency of biological machines readily reveals that Nature masters the full gamut of chemical interactions to compose masterpieces of the living world. The present analysis singles out metal coordination for the actuation of nanomechanical motion. According to our analysis, metal coordination offers a manifold of rewards, putting it primo loco among opportunities for putting nanomechanical systems into action: (i) its strength and dynamics can be properly modulated and fine-tuned by the choice of metal, redox state, and ligand(s); (ii) the high directionality of the interaction allows reliable design; and (iii) the emergence of novel self-sorting algorithms allows multiple such interactions to work in parallel. On top of all these advantages, intermolecular metal-ion translocation is a well-known factor in biological signaling. These benefits have recently proven their usefulness in the operation of networked devices and in overcoming the limitations of traditional stand-alone molecular systems.
Mastering sorting algorithms is essential when learning data structures, and sorting is applied frequently in program design. Given this importance, this paper carefully compares the characteristics of different sorting algorithms, starting from their efficiency, implementation, basic ideas, sorting methods, and other aspects, and draws conclusions to support a better choice of sorting algorithm.
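To make such a comparison concrete, here is a self-contained sketch (not from the paper) of two classic algorithms such a survey would cover: insertion sort, simple and O(n²) in the worst case but efficient on small or nearly sorted inputs, and quicksort, a divide-and-conquer method that runs in O(n log n) on average:

```python
import random

def insertion_sort(a):
    """Stable O(n^2) sort: grow a sorted prefix one element at a time."""
    a = a[:]  # work on a copy
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:  # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def quicksort(a):
    """Average-case O(n log n) divide-and-conquer sort (simple functional form)."""
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    return (quicksort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quicksort([x for x in a if x > pivot]))

data = random.sample(range(1000), 200)
assert insertion_sort(data) == quicksort(data) == sorted(data)
```

The practical trade-off mirrors the paper's point: the "best" algorithm depends on input size, existing order, stability requirements, and memory constraints.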
Since obtaining data labels is a time-consuming and laborious task, unsupervised feature selection has become a popular feature selection technique. However, current unsupervised feature selection methods face three challenges: (1) they rely on a fixed similarity matrix derived from the original data, which affects their performance; (2) due to the limitation of sparsity, they can only obtain sub-optimal solutions; (3) they have high computational complexity and cannot handle large-scale data. To solve this dilemma, we propose a fast unsupervised feature selection algorithm with bipartite graph and (Formula Omitted)-norm constraint (BGCFS). We use the original data and the selected anchors to construct an adaptive bipartite graph in the subspace, and apply the (Formula Omitted)-norm constraint to the projection matrix for feature selection. In this way, we can update the adaptive bipartite graph and the projection matrix simultaneously, and we can obtain the feature subset directly, without sorting the features. In addition, we propose an iterative algorithm that solves the proposed problem globally to obtain a closed-form solution, and we provide a strict proof of its convergence. Experiments on eight real data sets of different scales show that our method selects more valuable feature subsets more quickly.
The order-of-addition (OofA) experiment has received a great deal of attention in the recent literature. The primary goal of an OofA experiment is to identify the optimal order of a sequence of m components. All the existing methods are model-dependent and limited to a small number of components. The appropriateness of the resulting optimal order depends heavily on (a) the correctness of the underlying assumed model, and (b) the goodness of the model fit. Moreover, these methods are not applicable for large m. With this in mind, this article proposes an efficient adaptive methodology, building upon the quick-sort algorithm, to explore the optimal order without any model specification. Compared to the existing work, the run sizes needed by the proposed method to achieve the optimal order are much smaller. Theoretical support is given to illustrate the effectiveness of the proposed method, which is able to obtain the optimal order even for large m. Numerical experiments are used to demonstrate the effectiveness of the proposed method.
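A minimal sketch of the underlying idea, with hypothetical names and without claiming to reproduce the article's procedure: quicksort can rank components when the "key" is not numeric but a pairwise-comparison oracle. In an OofA setting the oracle would be answered by small experimental runs; here a hidden priority stands in for it:

```python
def rank_components(components, better_first):
    """Quicksort driven by a pairwise-comparison oracle.

    better_first(a, b) returns True if component a should precede b in
    the addition order. Each recursion partitions the remaining
    components around a pivot, needing O(m log m) comparisons on average
    instead of evaluating all m! orders."""
    if len(components) <= 1:
        return list(components)
    pivot, rest = components[0], components[1:]
    before = [c for c in rest if better_first(c, pivot)]
    after = [c for c in rest if not better_first(c, pivot)]
    return (rank_components(before, better_first)
            + [pivot]
            + rank_components(after, better_first))

# Toy oracle: pretend each component has a hidden priority the
# experimenter can only probe pairwise.
hidden = {"A": 2, "B": 0, "C": 3, "D": 1}
order = rank_components(["A", "B", "C", "D"],
                        lambda a, b: hidden[a] < hidden[b])
# -> ['B', 'D', 'A', 'C']
```

The model-free appeal is that only the outcomes of pairwise comparisons are needed, never a fitted response surface over orderings.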
The visibility of news and politics in a Facebook newsfeed depends on the actions of a diverse set of actors: users, their friends, content publishers such as news organizations, advertisers, and algorithms. The focus of this paper is on untangling the role of this last actor from the others. We ask, how does Facebook algorithmically infer what users are interested in, and how do interest inferences shape news exposure? We weave together survey data and interest categorization data from participants' Facebook accounts to audit the algorithmic interest classification system on Facebook. These data allow us to model the role of algorithmic inference in shaping content exposure. We show that algorithmic 'sorting out' of users has consequences for who is exposed to news and politics on Facebook. People who are algorithmically categorized as interested in news or politics are more likely to attract this kind of content into their feeds - above and beyond their self-reported interest in civic content.
•A data-driven framework is proposed to optimize the sizing of a hybrid energy system.
•A modified NSGA-II based on reinforcement learning is utilized to obtain the Pareto set.
•CRITIC-TOPSIS is used to decide the weights of the objectives and select the best solution.
•An optimal system with an LCOE of 0.226 $/kWh, an LPSP of 4.01% and a PAR of 2.15% is obtained.
This paper proposes a data-driven two-stage multi-criteria decision-making (MCDM) framework to investigate the optimal configuration of a stand-alone wind/PV/hydrogen system. In the first stage, a modified non-dominated sorting genetic algorithm (NSGA)-II based on reinforcement learning is utilized to determine a set of Pareto solutions. The objectives considered are to simultaneously minimize the levelized cost of energy (LCOE), the loss of power supply probability (LPSP) and the power abandonment rate (PAR). In the second stage, the Criteria Importance Through Intercriteria Correlation (CRITIC) method is utilized to determine the weights of the three objectives, while the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach is employed to select the unique best solution from the Pareto set. To verify its effectiveness, the framework is applied to a wind/PV/hydrogen system located in Aksay Kazak Autonomous County, Gansu Province, China, meeting an off-grid industrial park’s load demand of 1603 kWh/day and peak load of 117.17 kW. The results show that the optimal system, which consists of 83.2 kW PV panels, 160 kW wind turbines, 20 kW fuel cells, 54 kW electrolyzers and 450 m3 hydrogen storage tanks, achieves an LCOE of 0.226 $/kWh, an LPSP of 4.01% and a PAR of 2.15%.
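The second-stage selection step can be sketched as follows. This is the standard TOPSIS procedure, not the paper's implementation; the three candidate solutions and the weights (which CRITIC would normally compute from the data) are made-up values for illustration:

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives with TOPSIS (higher closeness = better).

    scores:  (n_alternatives, n_criteria) decision matrix
    weights: criteria weights summing to 1 (e.g., from CRITIC)
    benefit: True per criterion to maximize, False to minimize."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights                               # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    worst = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)       # distance to ideal point
    d_worst = np.linalg.norm(v - worst, axis=1)      # distance to anti-ideal
    return d_worst / (d_best + d_worst)              # relative closeness

# Three hypothetical Pareto solutions scored on (LCOE, LPSP, PAR),
# all criteria to be minimized; weights are illustrative only.
closeness = topsis(np.array([[0.226, 4.01, 2.15],
                             [0.210, 6.50, 3.00],
                             [0.250, 2.00, 1.50]]),
                   weights=np.array([0.40, 0.35, 0.25]),
                   benefit=np.array([False, False, False]))
best = int(np.argmax(closeness))
```

The alternative closest to the ideal point and farthest from the anti-ideal point wins, which is how a single configuration is picked out of the Pareto set.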
During the actual operation of a power system, various situations are encountered, and even a slight disturbance can generate relatively large transient energy in the system. Estimating this energy and absorbing it well is a very worthwhile research topic. In this paper, we discuss the charging and discharging capability of supercapacitors, propose a scheme that uses parallel supercapacitors to absorb the transient energy generated after grid faults, establish a supercapacitor capacity optimization model based on economy and absorption effect, use the NSGA-II algorithm to find the optimum supercapacitor capacity, and finally carry out modeling and simulation in MATLAB/SIMULINK to verify the absorption effect of the optimal capacity configuration on the transient energy.
One of the major distinguishing features of dynamic multiobjective optimization problems (DMOPs) is that the optimization objectives change over time, so tracking the varying Pareto-optimal front becomes a challenge. One promising solution is to reuse "experiences" to construct a prediction model via statistical machine learning approaches. However, most existing methods neglect the non-independent and identically distributed nature of the data used to construct the prediction model. In this paper, we propose an algorithmic framework, called the transfer-learning-based dynamic multiobjective evolutionary algorithm, which integrates transfer learning and population-based evolutionary algorithms (EAs) to solve DMOPs. This approach exploits transfer learning as a tool to generate an effective initial population pool by reusing past experience to speed up the evolutionary process, and at the same time any population-based multiobjective algorithm can benefit from this integration without extensive modifications. To verify this idea, we incorporate the proposed approach into three well-known EAs: the nondominated sorting genetic algorithm II, multiobjective particle swarm optimization, and the regularity model-based multiobjective estimation of distribution algorithm. We employ 12 benchmark functions to test these algorithms and compare them with chosen state-of-the-art designs. The experimental results confirm the effectiveness of the proposed design for DMOPs.
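Since several of these abstracts build on NSGA-II, the core "non-dominated sorting" step is worth making concrete. The sketch below is a simple quadratic-time illustration of the idea (NSGA-II's actual fast non-dominated sort achieves the same partition with bookkeeping that avoids repeated scans); all names and sample points are illustrative:

```python
def non_dominated_fronts(points):
    """Partition objective vectors (minimization) into Pareto fronts.

    Front 0 holds the mutually non-dominated solutions, front 1 those
    dominated only by front 0, and so on - the ranking NSGA-II uses
    for selection."""
    def dominates(p, q):
        # p dominates q: no worse in every objective, strictly better in one
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Five solutions evaluated on two objectives, both minimized.
pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
fronts = non_dominated_fronts(pts)
# -> [[0, 1, 2], [3], [4]]: three trade-off solutions, then two dominated ones
```

Solutions in earlier fronts are preferred during selection, which drives the population toward the Pareto-optimal front.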
Benefiting from the real-time processing ability of edge computing, computing tasks requested by smart devices in the Internet of Things are offloaded to edge computing devices (ECDs) for ...implementation. However, ECDs are often overloaded or underloaded with disproportionate resource requests. In addition, during the process of task offloading, the transmitted information is vulnerable, which can result in data incompleteness. In view of this challenge, a blockchain-enabled computation offloading method, named BeCome, is proposed in this article. Blockchain technology is employed in edge computing to ensure data integrity. Then, the nondominated sorting genetic algorithm III is adopted to generate strategies for balanced resource allocation. Furthermore, simple additive weighting and multicriteria decision making are utilized to identify the optimal offloading strategy. Finally, performance evaluations of BeCome are given through simulation experiments.
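The final selection step mentioned above, simple additive weighting (SAW), can be sketched as follows. The criteria, weights, and candidate strategies are hypothetical stand-ins, not values from the article:

```python
def saw(scores, weights, benefit):
    """Simple additive weighting: normalize each criterion, then take
    the weighted sum per alternative; the highest total wins.

    Benefit criteria are scaled by value/max (bigger is better);
    cost criteria by min/value (smaller is better)."""
    totals = [0.0] * len(scores)
    for j, w in enumerate(weights):
        col = [row[j] for row in scores]
        for i, x in enumerate(col):
            norm = x / max(col) if benefit[j] else min(col) / x
            totals[i] += w * norm
    return totals

# Three hypothetical offloading strategies scored on
# (latency cost in ms, energy cost in J, load-balance benefit).
scores = [[120.0, 3.0, 0.8],
          [ 90.0, 4.5, 0.9],
          [150.0, 2.0, 0.7]]
totals = saw(scores, weights=[0.4, 0.3, 0.3],
             benefit=[False, False, True])
best = totals.index(max(totals))
```

A weighted-sum scalarization like this turns the multi-criteria comparison into a single ranking, which is how one offloading strategy is ultimately chosen from the candidates.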