In today's cyber-enabled smart grids, the high penetration of uncertain renewables, the purposeful manipulation of meter readings, and the need for wide-area situational awareness call for fast, accurate, and robust power system state estimation. The least-absolute-value (LAV) estimator is known for its robustness relative to the weighted least-squares one. However, due to nonconvexity and nonsmoothness, existing LAV solvers based on linear programming are typically slow and hence inadequate for real-time system monitoring. This paper develops two novel algorithms for efficient LAV estimation, which draw from recent advances in composite optimization. The first is a deterministic linear proximal scheme that handles a sequence of convex quadratic problems (typically 5 to 10), each efficiently solvable either via off-the-shelf toolboxes or through the alternating direction method of multipliers. Leveraging the sparse connectivity inherent to power networks, the second scheme is stochastic and updates only a few entries of the complex voltage state vector per iteration. In particular, when only voltage magnitude and (re)active power flow measurements are used, this number reduces to one or two regardless of the number of buses in the network, so the computational complexity scales well to large-size power systems. Furthermore, by carefully mini-batching the voltage and power flow measurements, an accelerated implementation of the stochastic iterations becomes possible. The developed algorithms are numerically evaluated on a variety of benchmark power networks. Simulated tests corroborate that improved robustness can be attained at comparable or markedly reduced computation times for medium- and large-size networks relative to existing alternatives.
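For reference, the baseline that the abstract contrasts against, LAV estimation solved via linear programming on a linearized measurement model z ≈ Hx + e, can be sketched in a few lines. The Jacobian, measurements, and problem sizes below are synthetic placeholders; this is not the paper's prox-linear or stochastic scheme.

```python
# Hedged sketch (not the paper's algorithm): baseline LAV state estimation on a
# linearized measurement model z ≈ H x + e, posed as a linear program.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 5, 30                       # states, measurements (toy sizes, assumed)
H = rng.normal(size=(m, n))        # linearized measurement Jacobian (synthetic)
x_true = rng.normal(size=n)
z = H @ x_true + 0.01 * rng.normal(size=m)
z[::10] += 5.0                     # a few gross errors (bad data)

# LAV:  min_x ||z - H x||_1  ==  min_{x,t} 1^T t  s.t.  -t <= z - H x <= t
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = np.block([[ H, -np.eye(m)],      # H x - t <= z
                 [-H, -np.eye(m)]])     # -H x - t <= -z
b_ub = np.concatenate([z, -z])
bounds = [(None, None)] * n + [(0, None)] * m
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
x_lav = res.x[:n]
print("LAV estimation error:", np.linalg.norm(x_lav - x_true))
```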
Eventually lattice-linear algorithms. Gupta, Arya Tanmay; Kulkarni, Sandeep S. Journal of Parallel and Distributed Computing, vol. 185, March 2024. Journal article.
By identifying a local property which structurally classifies any edge, we show that the family of generalized Petersen graphs can be recognized in linear time.
The task of recovering a low-rank matrix from its noisy linear measurements plays a central role in computational science. Smooth formulations of the problem often exhibit an undesirable phenomenon: the condition number, classically defined, scales poorly with the dimension of the ambient space. In contrast, we here show that in a variety of concrete circumstances, nonsmooth penalty formulations do not suffer from the same type of ill-conditioning. Consequently, standard algorithms for nonsmooth optimization, such as subgradient and prox-linear methods, converge at a rapid dimension-independent rate when initialized within constant relative error of the solution. Moreover, nonsmooth formulations are naturally robust against outliers. Our framework subsumes such important computational tasks as phase retrieval, blind deconvolution, quadratic sensing, matrix completion, and robust PCA. Numerical experiments on these problems illustrate the benefits of the proposed approach.
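As an illustration of the kind of nonsmooth method the abstract refers to, here is a minimal sketch of a subgradient method on a factored L1 loss for symmetric matrix sensing with outliers. The problem sizes, step-size schedule, and initialization scale are assumptions for the toy example, not the settings analyzed in the paper.

```python
# Hedged sketch: plain subgradient descent on the nonsmooth factored loss
# f(U) = (1/m) * sum_i |<A_i, U U^T> - b_i|  (symmetric rank-r matrix sensing).
import numpy as np

rng = np.random.default_rng(1)
d, r, m = 20, 2, 400
A = rng.normal(size=(m, d, d))
U_star = rng.normal(size=(d, r))
b = np.einsum("mij,ij->m", A, U_star @ U_star.T)
b[: m // 10] += rng.normal(scale=10.0, size=m // 10)   # outlier measurements

U = U_star + 0.3 * rng.normal(size=(d, r))              # constant relative-error init
for k in range(500):
    R = np.einsum("mij,ij->m", A, U @ U.T) - b          # residuals
    S = np.sign(R)                                      # subgradient of |.|
    G = np.einsum("m,mij->ij", S, A)
    grad = (G + G.T) @ U / m                            # subgradient w.r.t. U
    U -= (0.01 / np.sqrt(k + 1)) * grad                 # decaying step size (assumed)
err = np.linalg.norm(U @ U.T - U_star @ U_star.T) / np.linalg.norm(U_star @ U_star.T)
print("relative recovery error:", err)
```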
It was recently shown that on a large class of important Banach spaces there exist no linear methods which are able to approximate the Hilbert transform from samples of the given function. This implies that there is no linear algorithm for calculating the Hilbert transform which can be implemented on a digital computer and which converges for all functions from the corresponding Banach spaces. The present paper develops a much more general framework which also includes non-linear approximation methods. All algorithms within this framework have only to satisfy an axiom which guarantees the computability of the algorithm based on given samples of the function. The paper investigates whether there exists an algorithm within this general framework which converges to the Hilbert transform for all functions in these Banach spaces. It is shown that non-linear methods give actually no improvement over linear methods. Moreover, the paper discusses some consequences regarding the Turing computability of the Hilbert transform and the existence of computational bases in Banach spaces.
Given a finite range space Σ = (X, R), with N = |X| + |R|, we present two simple algorithms, based on the multiplicative-weight method, for computing a small-size hitting set or set cover of Σ. The first algorithm is a simpler variant of the Brönnimann–Goodrich algorithm but more efficient to implement, and the second algorithm can be viewed as solving a two-player zero-sum game. These algorithms, in conjunction with some standard geometric data structures, lead to near-linear algorithms for computing a small-size hitting set or set cover for a number of geometric range spaces. For example, they lead to O(N polylog(N)) expected-time randomized O(1)-approximation algorithms for both hitting set and set cover if X is a set of points and R a set of disks in R².
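For intuition, here is a hedged sketch of the multiplicative-weight reweighting idea behind Brönnimann–Goodrich-style hitting-set algorithms, stripped of the geometric data structures that give the near-linear running times. The sample size, round budget, and toy interval ranges are illustrative assumptions, not the paper's exact procedures.

```python
# Hedged sketch of multiplicative-weight reweighting for hitting sets:
# points carry weights; whenever a weighted random sample misses some range,
# the weights of that range's points are doubled so they become likelier picks.
import random

def mwu_hitting_set(points, ranges, sample_size, rounds=10_000, seed=0):
    rng = random.Random(seed)
    w = {p: 1.0 for p in points}
    for _ in range(rounds):
        # weighted sample (with replacement) as the candidate hitting set
        cand = set(rng.choices(list(w), weights=list(w.values()), k=sample_size))
        missed = next((R for R in ranges if not cand & R), None)
        if missed is None:
            return cand                      # every range is hit
        for p in missed:                     # double weights inside the missed range
            w[p] *= 2.0
    return set(points)                       # fallback: trivial hitting set

# toy example: points on a line, ranges are intervals (given as sets of points)
pts = list(range(20))
rngs = [set(range(i, i + 5)) for i in range(0, 16, 3)]
print(mwu_hitting_set(pts, rngs, sample_size=4))
```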
The main goal of group testing is to identify a small number of specific items among a large population of items. In this paper, we regard specific items as positives and inhibitors, and non-specific items as negatives. In particular, we consider a novel model called group testing with blocks of positives and inhibitors. A test on a subset of items is positive if the subset contains at least one positive and does not contain any inhibitors, and it is negative otherwise. In this model, the input items are linearly ordered, and the positives and inhibitors form small blocks (at unknown locations) of consecutive items over that order. We also consider two specific instantiations of this model. The first instantiation is the model that contains a single block of consecutive items consisting of exactly known numbers of positives and inhibitors. The second is the model that contains a single block of consecutive items containing known numbers of positives and inhibitors. Our contribution is to propose efficient encoding and decoding schemes such that the numbers of tests needed to identify only the positives, or both the positives and the inhibitors, are smaller than in the state-of-the-art schemes. Moreover, the decoding times mostly scale with the number of tests, which is significantly smaller than in the state-of-the-art schemes, whose decoding times scale with both the number of tests and the number of items.
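The test model stated in the abstract (a pool is positive if and only if it contains at least one positive item and no inhibitor) can be captured in a few lines. The item indices below are purely illustrative, and this is not the paper's encoding or decoding scheme.

```python
# Hedged sketch of the measurement model described above, not the paper's scheme:
# a test on a pool is positive iff the pool contains >= 1 positive and no inhibitor.
def test_outcome(pool, positives, inhibitors):
    """pool, positives, inhibitors: sets of item indices."""
    return bool(pool & positives) and not (pool & inhibitors)

# toy example with a block of positives {4, 5} and an inhibitor block {10}
positives, inhibitors = {4, 5}, {10}
print(test_outcome({3, 4}, positives, inhibitors))    # True
print(test_outcome({4, 10}, positives, inhibitors))   # False (inhibitor masks the positive)
print(test_outcome({1, 2}, positives, inhibitors))    # False (no positive in the pool)
```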
With the recent trends in urban agriculture and climate change, there is an emerging need for alternative plant culture techniques in which dependence on soil can be eliminated. Hydroponic and aquaponic growth techniques have proven to be viable alternatives, but the lack of efficient and optimal practices for irrigation and nutrient supply limits their application on a large commercial scale. The main purpose of this research was to develop statistical methods and machine learning algorithms to regulate nutrient concentrations in aquaponic irrigation water based on plant needs, in order to achieve optimal plant growth and promote broader adoption of aquaponic culture on a commercial scale. One of the key challenges in developing these algorithms is the sparsity of data, which requires the use of bolstered error estimation approaches. In this paper, several linear and non-linear algorithms trained on relatively small datasets using bolstered error estimation techniques were evaluated, in order to select the best method for making decisions regarding the regulation of nutrients in hydroponic environments. After repeated tests on the dataset, the semi-bolstered resubstitution error estimation technique was found to work best in our case, using a linear support vector machine as the classifier with the penalty parameter set to one. A set of recommended rules has been prescribed as a decision support system, using the output of the machine learning algorithm, and has been tested against the results of the baseline model. Further, the positive impact of the recommended nutrient concentrations on plant growth in aquaponic environments is discussed in detail.
•This study presents an ML-based technique for nutrient regulation in a coupled aquaponics system.
•The performance of linear and non-linear classifiers on small datasets has been studied.
•Bolstered error estimation strategies have been prescribed instead of MSE-based techniques.
•The problem of inference in data-depleted domains has been addressed.
•ML-based recommendations to optimize nutrients have been implemented and tested against a baseline model.
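To make the error-estimation step concrete, the following is a minimal sketch of semi-bolstered resubstitution with a linear SVM (penalty parameter C = 1), estimated by Monte Carlo. The synthetic two-class data, bolstering kernel width, and sample counts are assumptions; the study's dataset and exact bolstering kernels are not reproduced.

```python
# Hedged sketch: semi-bolstered resubstitution error estimate for a linear SVM.
# Correctly classified training points are "bolstered" with a Gaussian kernel;
# misclassified points receive no bolstering and contribute an error of 1.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, (15, 2)),      # small, sparse toy dataset
               rng.normal([2, 2], 1.0, (15, 2))])
y = np.array([0] * 15 + [1] * 15)

clf = LinearSVC(C=1.0).fit(X, y)
pred = clf.predict(X)

sigma = 0.5          # bolstering kernel width (assumed)
n_mc = 200           # Monte-Carlo samples per training point (assumed)
errors = []
for xi, yi, pi in zip(X, y, pred):
    if pi != yi:
        errors.append(1.0)   # semi-bolstered: no bolstering for misclassified points
        continue
    samples = xi + sigma * rng.normal(size=(n_mc, 2))  # Gaussian bolstering kernel
    errors.append(np.mean(clf.predict(samples) != yi))
print("semi-bolstered resubstitution error:", np.mean(errors))
```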
The concentration of sheep cheese whey (CW) in water obtained from two Spanish reservoirs, two Spanish rivers, and distilled water has been estimated by combining spectroscopic measurements, obtained with light-emitting diodes (LEDs), with linear or non-linear algorithms. The studied CW concentration range covers 0 to 25% by weight. Every sample was measured with six types of LEDs with different emission wavelengths (blue, orange, green, pink, white, and UV). In total, 1,800 fluorescence measurements were carried out and used to build different types of models for estimating the concentration of CW in water. The fluorescence spectra provided by the pink LED yielded the most accurate mathematical models, with mean square errors lower than 3.3% and 2.5% for the linear and non-linear approaches, respectively. The pink LED combined with the non-linear model, an artificial neural network, was further validated through k-fold cross-validation and an internal validation. It should be noted that the sensor used here was designed and produced with a 3D printer and has the potential to be deployed in situ for real-time and cost-effective analysis of natural watercourses.
•LEDs to monitor cheese whey amounts in natural bodies of water.
•Linear and intelligent algorithms implemented to quantify cheese whey in water.
•Cost-effective tool to mitigate the environmental impact of dairy industries.
•Broad database in terms of water sources, LEDs, and cheese whey concentration.
•Potential in situ implementation of the quantifying tool without specialized staff.
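As a rough illustration of the non-linear modelling and validation step described above, the sketch below fits a small neural-network regressor with k-fold cross-validation on synthetic "spectra". The feature construction, network size, and noise level are assumptions rather than the study's LED data or architecture.

```python
# Hedged sketch: ANN regression of cheese-whey concentration from fluorescence-like
# features, validated with k-fold cross-validation (synthetic data, assumed setup).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 300, 32
conc = rng.uniform(0, 25, n_samples)                    # % cheese whey by weight
spectra = np.outer(conc, rng.uniform(0.5, 1.5, n_wavelengths))
spectra += rng.normal(scale=0.5, size=spectra.shape)    # measurement noise

ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(ann, spectra, conc, cv=cv,
                         scoring="neg_root_mean_squared_error")
print("k-fold RMSE (% CW):", -scores.mean())
```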
•An optimization algorithm was developed for determining the optimal crop distribution.
•The algorithm is faster and reaches better results than genetic algorithms.
•The low computational time requirements allow MOPECO to be used as an online tool.
•Deficit irrigation is recommended for improving the profitability of irrigation farms.
•The direct solution algorithm may be used in other fields.
Irrigation farms located in areas of scarce water demand methodologies that can increase their profitability through more efficient use of their resources. Determining the combination of factors that maximizes the profitability of any productive process requires the use of optimization methodologies. Traditionally, these types of problems were solved using heuristic methods; however, a direct-solution algorithm would produce faster and more accurate solutions. The aim of this work was to develop a direct-solution algorithm capable of determining the crop planning (area and volume of water per crop) that maximizes the profitability of an irrigation farm. The data required by the algorithm include the total cultivable area of the farm and the amount of available irrigation water, as well as the "gross margin vs. irrigation depth" functions of the considered crops. Cultivating only one or two crops is the way to reach the highest profitability, but this strategy is not suitable from an agricultural point of view (crop rotation, diseases, weather risks, regulations of agricultural policies, etc.). Because this algorithm must be compatible with the MOPECO model, a methodology has been developed to allow its implementation in that model. The objective of this software is to maximize the profitability of irrigation farms by promoting a more efficient use of irrigation water through regulated deficit irrigation techniques. The current version of the model uses genetic algorithms for determining optimal crop planning, which are time consuming. For a hypothetical 100 ha farm, considering 10 different crops and 11 scenarios of water availability, the developed algorithm adapted to MOPECO achieved gross margins around 0.5% lower than LINGO and 1.1% higher than the genetic algorithms, while decreasing the calculation time by a factor of roughly 50 to 100 and of approximately 2,000, respectively. Another relevant result is that the algorithm may be applied manually, by drawing the tangent lines between the gross margin curves, to obtain the optimal combinations of irrigation depth and, indirectly, the cultivable area of each crop. Moreover, the algorithm helps users understand the relationships among crops, which may guide them in determining the optimal solution under real conditions. This methodology also highlights the importance of using regulated deficit irrigation techniques when managing irrigation farms with a low supply of irrigation water. The developed algorithm may also be useful in the optimization of other production processes.
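A possible reading of the tangent-line construction mentioned above is an equal-marginal-return allocation over concave "gross margin vs. irrigation depth" curves. The sketch below implements that idea for a deliberately simplified case (fixed crop areas, water budget only); the quadratic curves, areas, and budget are invented placeholders, and this is not MOPECO's calibrated model.

```python
# Hedged sketch of an equal-marginal-return water allocation over concave
# gross-margin curves gm_c(d) = a*d - b*d^2 (EUR/ha, depth d in mm). Assumed data.
curves = {"crop_A": (12.0, 0.010), "crop_B": (9.0, 0.006), "crop_C": (7.0, 0.004)}
area = {"crop_A": 30.0, "crop_B": 40.0, "crop_C": 30.0}   # ha (assumed split)
water_budget = 100 * 300.0 * 10.0                          # m3 (assumed: 300 mm over 100 ha)

def marginal(crop, d):
    """Marginal gross margin d(gm)/d(depth) at the current depth d."""
    a, b = curves[crop]
    return a - 2 * b * d

depth = {c: 0.0 for c in curves}
step = 1.0                        # allocate water in 1 mm increments
while water_budget > 0:
    # give the next increment to the crop with the highest marginal return
    best = max(curves, key=lambda c: marginal(c, depth[c]))
    if marginal(best, depth[best]) <= 0:
        break                     # extra water no longer pays off anywhere
    cost = step * 10.0 * area[best]          # 1 mm over 1 ha = 10 m3
    if cost > water_budget:
        break
    depth[best] += step
    water_budget -= cost
print({c: round(d) for c, d in depth.items()}, "mm of irrigation depth")
```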