Power hardware-in-the-loop (PHIL) technology allows physical equipment, including power electronic converters, to be tested in a simulation environment that closely mirrors the reality of the electric power grid. A key challenge in PHIL systems is the design of the interface between the digital network model and the physical equipment, as this has a significant influence on the stability of the real-time simulation. While a small time step supports stability and accuracy, its lower limit is set by the real-time constraint. The proposed multirate partitioning (MRP) interface addresses this issue by using a comparatively small time step at the hardware-software interconnection and then employing a staged adaptation of the time step within the digital model in accordance with the real-time constraint; multiple rates are thus used. The Nyquist stability criterion confirms enhanced stability and bandwidth for the MRP compared with single-rate (SR) counterparts. Moreover, a PHIL test of two parallel photovoltaic converters feeding a low-voltage network reveals different behaviors depending on the interface. Using the MRP, the waveforms track the real-world curves of active and reactive power more closely, with the relative accuracy increasing with the speed of the transients. This supports informed decision making regarding the integration of renewables into the grid.
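The staged time-step idea can be illustrated with a minimal multirate co-simulation sketch: a fast subsystem (standing in for the hardware-side interface) is integrated with a small step, while a slower subsystem (standing in for the network model) takes larger steps, with boundary values exchanged once per slow step. The two first-order models and all parameters here are illustrative assumptions, not the paper's MRP interface.

```python
# Toy multirate co-simulation: the fast subsystem is stepped with dt_fast,
# the slow subsystem with dt_slow = rate_ratio * dt_fast, and coupling
# values are exchanged once per slow step. Models are illustrative only.

def simulate(t_end=1.0, dt_fast=1e-4, rate_ratio=10):
    dt_slow = rate_ratio * dt_fast
    x_fast, x_slow = 0.0, 1.0      # illustrative state variables
    t = 0.0
    while t < t_end - 1e-12:
        # slow subsystem: one large step, driven by the last fast-side value
        x_slow += dt_slow * (-2.0 * x_slow + x_fast)
        # fast subsystem: rate_ratio small steps with the slow value frozen
        for _ in range(rate_ratio):
            x_fast += dt_fast * (-50.0 * x_fast + x_slow)
        t += dt_slow
    return x_fast, x_slow

print(simulate())
```

The fast state needs the small step for stability of its stiff dynamics, while the slow state can honor the real-time budget with the larger step; only the exchange rate at the boundary is compromised.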
This article presents a piecewise linear approximation computation (PLAC) method for all nonlinear unary functions, an enhanced universal and error-flattened piecewise linear (PWL) approximation approach. Compared with previous methods, PLAC features two main parts: an optimized segmenter that seeks the minimum number of segments under a predefined software maximum absolute error (MAE), raising the segmentation performance to the highest theoretical level for the logarithm, and a novel quantizer that completely simulates the hardware behavior and determines the required bit width and <inline-formula> <tex-math notation="LaTeX">{\text {MAE}}_{c} </tex-math></inline-formula> (MAE in circuits) for hardware implementation. In addition, the hardware architecture is improved by simplifying the indexing logic, leading to nonredundant hardware overhead. The ASIC implementation results reveal that the proposed PLAC improves all metrics without compromise. Compared with the state-of-the-art methods, when computing the logarithmic function, PLAC reduces area by 2.80%, power consumption by 3.77%, and <inline-formula> <tex-math notation="LaTeX">{\text {MAE}}_{c} </tex-math></inline-formula> by 1.83% with the same delay; when approximating the hyperbolic tangent function, PLAC reduces area by 6.25%, power consumption by 4.31%, and <inline-formula> <tex-math notation="LaTeX">{\text {MAE}}_{c} </tex-math></inline-formula> by 18.86% with the same delay; when evaluating the sigmoid function, PLAC reduces area by 16.50% and power consumption by 4.78% with the same delay and <inline-formula> <tex-math notation="LaTeX">{\text {MAE}}_{c} </tex-math></inline-formula>; and when calculating the softsign function, PLAC reduces area by 17.28%, power consumption by 11.34%, delay by 12.50%, and <inline-formula> <tex-math notation="LaTeX">{\text {MAE}}_{c} </tex-math></inline-formula> by 33.28%.
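The segmenter's goal, fewest segments under an MAE bound, can be sketched with a simple greedy scheme: grow each segment as far as a chord (endpoint-interpolated line) stays within the error budget. This is a naive baseline under assumed chord interpolation and sampled error checking, not the paper's optimized segmenter or its quantizer.

```python
import math

def pwl_segments(f, lo, hi, mae, samples=200):
    """Greedily grow segments so each chord approximates f within mae.
    Returns a list of (a, b) intervals covering [lo, hi]. Illustrative
    baseline only; an optimal segmenter can do no worse than this."""
    def seg_err(a, b):
        fa, fb = f(a), f(b)
        err = 0.0
        for i in range(samples + 1):
            x = a + (b - a) * i / samples
            chord = fa + (fb - fa) * (x - a) / (b - a)
            err = max(err, abs(chord - f(x)))
        return err

    segs, a = [], lo
    while a < hi:
        if seg_err(a, hi) <= mae:       # the rest fits in one segment
            segs.append((a, hi))
            break
        b_lo, b_hi = a, hi              # bisect the largest feasible endpoint
        for _ in range(50):
            mid = 0.5 * (b_lo + b_hi)
            if seg_err(a, mid) <= mae:
                b_lo = mid
            else:
                b_hi = mid
        segs.append((a, b_lo))
        a = b_lo
    return segs

print(len(pwl_segments(math.log2, 1.0, 2.0, mae=1e-3)))
```

For log2 over the mantissa range [1, 2) this yields roughly a dozen segments at a 1e-3 error budget; a hardware quantizer would then additionally constrain the slopes and intercepts to fixed bit widths, which is what <tex-math>{\text{MAE}}_{c}</tex-math> captures.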
The Internet of Things (IoT) is a global ecosystem of information and communication technologies aimed at connecting any type of object (thing), at any time, and in any place, to each other and to the Internet. One of the major problems associated with the IoT is the heterogeneous nature of such deployments; this heterogeneity poses many challenges, particularly in the areas of security and privacy. Specifically, security testing and analysis of IoT devices is considered a very complex task, as different security testing methodologies, including software and hardware security testing approaches, are needed. In this paper, we propose an innovative security testbed framework targeted at IoT devices. The security testbed is aimed at testing all types of IoT devices, with different software/hardware configurations, by performing standard and advanced security testing. Advanced analysis processes based on machine learning algorithms are employed in the testbed in order to monitor the overall operation of the IoT device under test. The architectural design of the proposed security testbed is discussed, along with a detailed description of the testbed implementation. The testbed operation is demonstrated on different IoT devices using several specific IoT testing scenarios. The results obtained demonstrate that the testbed is effective at detecting vulnerabilities and compromised IoT devices.
In the context of the ‘selfish-mine’ strategy proposed by Eyal and Sirer, we study the effect of communication delay on the evolution of the Bitcoin blockchain. First, we use a simplified Markov model that tracks the contrasting states of belief about the blockchain of a small pool of dishonest miners and the ‘rest of the community’ to establish that the use of block-hiding strategies, such as selfish-mine, causes the rate of production of orphan blocks to increase. Then we use a spatial Poisson process model to study values of Eyal and Sirer’s parameter γ, which denotes the proportion of the honest community that mines on a previously secret block released by the pool in response to the mining of a block by the honest community. Finally, we use discrete-event simulation to study the behaviour of a network of Bitcoin miners, a proportion of which colludes in using the selfish-mine strategy, under the assumption that there is a delay in the communication of information between miners. The models indicate that both the dishonest and the honest miners are worse off than they would be if no dishonest mining were present, and that it is possible for the mining community to detect block-hiding behaviour, such as that used in selfish-mine, by monitoring the rate of production of orphan blocks.
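The role of γ can be made concrete with a Monte Carlo run of the selfish-mine state machine as Eyal and Sirer describe it (pool hash-power share α, fork-race tiebreak parameter γ). This sketch models block discovery only, with no communication delay, so it reproduces the idealized baseline that the abstract's delay-aware models refine.

```python
import random

def selfish_share(alpha, gamma, n_blocks=200_000, seed=1):
    """Monte Carlo of the Eyal-Sirer selfish-mine state machine.
    Returns the pool's share of blocks on the main chain."""
    rng = random.Random(seed)
    pool, honest, lead = 0, 0, 0
    fork = False                         # a 1-vs-1 fork race is in progress
    for _ in range(n_blocks):
        pool_found = rng.random() < alpha
        if fork:
            if pool_found:
                pool += 2                # pool extends its own branch, wins
            elif rng.random() < gamma:
                pool += 1; honest += 1   # honest miner extends the pool branch
            else:
                honest += 2              # honest branch wins the race
            fork = False
        elif pool_found:
            lead += 1                    # extend the private chain
        else:                            # an honest miner finds a block
            if lead == 0:
                honest += 1
            elif lead == 1:
                fork, lead = True, 0     # pool publishes; race begins
            elif lead == 2:
                pool += 2; lead = 0      # pool publishes both, wins outright
            else:
                pool += 1; lead -= 1     # pool reveals one block, stays ahead
    return pool / (pool + honest)

print(selfish_share(0.4, 0.5))   # above alpha: selfish mining pays here
print(selfish_share(0.25, 0.0))  # below alpha: selfish mining loses here
```

With α = 0.4 and γ = 0.5 the pool earns noticeably more than its 40% hash share, while with α = 0.25 and γ = 0 it earns less than 25%, matching the known profitability threshold behaviour.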
We consider the uplink of a cell-free (CF) massive multiple-input multiple-output (mMIMO) system with superimposed pilot (SP) transmission, wherein user equipments (UEs) superimpose low-powered pilots onto data signals. This is unlike regular pilot (RP) transmission, where data and pilots use orthogonal spectral resources. Our CF mMIMO system has hardware impairments that occur due to i) low-quality radio frequency (RF) chains at the access points (APs) and UEs; and ii) a dynamic analog-to-digital converter (ADC) architecture at the APs, which enables each RF chain to be connected to an ADC of a different resolution. We derive a closed-form spectral efficiency (SE) expression for this CF system, wherein UEs observe practical spatially-correlated Rician-faded channels. The derived lower bound is generic and reduces to those in the existing CF mMIMO SP works, which have considered only ideal hardware. Using this lower bound, we optimally balance pilot and data transmit powers to maximize the SE. We analytically show that the optimal power balance is insensitive to AP impairments but sensitive to those of the UEs. We numerically show that SP can provide a higher SE than RP for low-to-severe hardware impairment levels when supported by a dynamic ADC architecture at the APs. With low-resolution ADCs, RP always outperforms SP. RP is also shown to be suitable for low UE speeds.
•The need for improved frequency response in future power systems is identified.
•An investigation into how energy storage can fulfil this need is presented.
•New experimental methods have been developed, using power hardware in the loop.
•Analysis of high-resolution frequency data from the British electricity system.
•Case study analysis of a new frequency response service designed for energy storage.
Energy Storage Systems (ESS) are expected to play a significant role in regulating the frequency of future electric power systems. Increased penetration of renewable generation, and reduction in the inertia provided by large synchronous generators, are likely to increase the severity and regularity of frequency events in synchronous AC power systems. By supplying or absorbing power in response to deviations from the nominal frequency and imbalances between supply and demand, the rapid response of ESS will provide a form of stability which cannot be matched by conventional network assets. However, the increased complexity of ESS operational requirements and design specifications introduces challenges when it comes to the realisation of their full potential through existing frequency response service markets: new service markets will need to be designed to take advantage of the capabilities of ESS. This paper provides new methods for analysing and assessing the performance of ESS within existing service frameworks, using real-time network simulation and power hardware in the loop. These methods can be used to introduce improvements in existing services and potentially create new ones. Novel statistical techniques have been devised to quantify the design and operational requirements of ESS providing frequency regulation services. These new techniques are demonstrated via an illustrative service design and high-resolution frequency data from the Great Britain transmission system.
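The "supply or absorb power in response to frequency deviations" behaviour is, at its simplest, a droop characteristic with a deadband. The sketch below uses hypothetical parameters (deadband, droop range, rating), not the specification of any actual GB frequency response service.

```python
def ess_power(freq_hz, p_max_mw=10.0, f_nom=50.0, deadband=0.015, droop=0.5):
    """Proportional (droop) frequency response of an ESS, illustrative only.
    Output ramps linearly from the deadband edge to full power at a
    deviation of `droop` Hz. Positive = discharging into the grid."""
    dev = freq_hz - f_nom
    if abs(dev) <= deadband:
        return 0.0                        # idle inside the deadband
    sign = -1.0 if dev > 0 else 1.0       # high frequency -> absorb (charge)
    frac = min((abs(dev) - deadband) / (droop - deadband), 1.0)
    return sign * frac * p_max_mw

for f in (49.5, 49.9, 50.0, 50.2, 50.5):
    print(f, ess_power(f))
```

Quantifying how often, how fast, and how deeply such a characteristic is exercised against recorded high-resolution frequency data is exactly the kind of statistical assessment the paper describes.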
The number of connected embedded edge computing Internet of Things (IoT) devices has been increasing over the years, contributing to significant growth in the data available in different scenarios. Machine learning algorithms have consequently arisen to enable task automation and process optimization based on those data. However, owing to the computational complexity of some learning methods that implement geometric classifiers, it is challenging to map them onto embedded systems or devices with limited size, processing, memory, and power while meeting the desired requirements. This hampers the applicability of these methods to complex industrial embedded edge applications. This work evaluates strategies to reduce classifiers' implementation costs based on the CHIP-clas model, independent of hyperparameter tuning and optimization algorithms. The proposal aims to evaluate the tradeoff between numerical precision and model performance, and to analyze hardware implementations of a distance-based classifier. Two 16-b floating-point formats were compared with the 32-b floating-point implementation. In addition, a new hardware architecture was developed and compared with the state-of-the-art reference. The results indicate that the model is robust to low-precision computation, providing statistically equivalent results compared with the baseline model, as well as statistically equivalent performance and a global speed-up factor of approximately 4.39 in processing time.
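The precision-versus-performance tradeoff can be emulated in software by rounding operands to a reduced-width mantissa (10 bits, as in IEEE binary16) before the distance computation. The nearest-centroid classifier below is a generic stand-in for a distance-based classifier, not the CHIP-clas model itself.

```python
import math

def quantize(x, mant_bits=10):
    """Round a float to a reduced-precision mantissa (10 bits ~ IEEE half),
    emulating a 16-b floating-point format in software."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)                  # x = m * 2**e, 0.5 <= |m| < 1
    scale = 2.0 ** mant_bits
    return math.ldexp(round(m * scale) / scale, e)

def nearest_centroid(point, centroids, mant_bits=None):
    """Distance-based classification, optionally at reduced precision.
    `centroids` maps labels to coordinate tuples (illustrative model)."""
    q = (lambda v: quantize(v, mant_bits)) if mant_bits else (lambda v: v)
    best, best_d = None, float("inf")
    for label, c in centroids.items():
        d = sum((q(p) - q(ci)) ** 2 for p, ci in zip(point, c))
        if d < best_d:
            best, best_d = label, d
    return best

cents = {"a": (0.0, 0.0), "b": (1.0, 1.0)}
print(nearest_centroid((0.9, 0.8), cents))               # full precision
print(nearest_centroid((0.9, 0.8), cents, mant_bits=10)) # emulated 16-b
```

Decision boundaries of distance-based classifiers are typically far coarser than half-precision rounding error, which is the intuition behind the paper's finding that low-precision results are statistically equivalent to the 32-b baseline.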
This article presents a resistive random access memory (ReRAM)-based convolutional neural network (CNN) accelerator with a new analog layer normalization (ALN) technique. The proposed ALN effectively reduces the effect of conductance variation in ReRAM devices by normalizing the outputs of the vector-matrix multiplication (VMM) in the charge domain. The ALN achieves high energy and hardware efficiency because it processes the normalization of the VMM outputs directly, without storing their values in memory, and is merged into the neuron circuit of the accelerator. To verify the effect of the ALN experimentally, a VMM accelerator consisting of two 25 × 25 ReRAM arrays and peripheral circuits with ALN is used as a convolution layer, with digital signal processing performed in a field-programmable gate array. The MNIST dataset is used to train, and run inference with, a CNN employing two VMM accelerators that work as convolution layers in a pipelined manner. Despite the conductance variation of the ReRAM devices, the ALN successfully stabilizes the output distribution of the convolution layer, which improves the classification accuracy of the network. Final classification accuracies of 96.2% and 83.1% are achieved for the MNIST and Fashion-MNIST datasets, respectively, with an energy efficiency of 9.94 tera-operations per second per Watt.
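The principle behind normalizing VMM outputs can be shown numerically: layer normalization (zero mean, unit variance across the output vector) is invariant to an array-level gain and offset, so a common conductance drift of the whole array is removed. This software model is a simplification; real ReRAM variation is also per-device, where normalization helps only statistically, and the paper performs it in the charge domain rather than digitally.

```python
import math
import random

def vmm(weights, x):
    """Ideal vector-matrix multiplication (rows of `weights` times `x`)."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def layer_norm(v, eps=1e-9):
    """Normalize a VMM output vector to zero mean and unit variance,
    modelling in software what the ALN does in the charge domain."""
    m = sum(v) / len(v)
    var = sum((e - m) ** 2 for e in v) / len(v)
    return [(e - m) / math.sqrt(var + eps) for e in v]

rng = random.Random(0)
W = [[rng.uniform(-1, 1) for _ in range(25)] for _ in range(25)]
x = [rng.uniform(0, 1) for _ in range(25)]

ideal = vmm(W, x)
drifted = [1.3 * e + 0.05 for e in ideal]   # toy array-level conductance drift
zn_ideal, zn_drift = layer_norm(ideal), layer_norm(drifted)
print(max(abs(p - q) for p, q in zip(zn_ideal, zn_drift)))  # drift removed
```

After normalization the drifted and ideal output vectors coincide, which is why a downstream layer sees a stable input distribution even as the array's conductances shift.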