Due to the increasing integration of distributed energy resources (DERs) into the national grid, research is needed to understand high DER penetration in relation to grid reliability and stability. Emulation-based research is a viable option for hardware-intensive investigations. Indeed, DER emulation-based research has advantages over both DER software-based simulation and real DER implementation research, including more realistic performance and boundaries, considerably lower expense, higher safety, and modularity. The main downside of DER emulation-based research is the complexity of modeling large-scale grid systems. This paper provides a rationale for the selection of an emulation-based DER laboratory and a description of the laboratory's capabilities. Additionally, guidelines for running experiments are discussed, and two approaches, power scaling using rescaling coefficients and system sizing using Multi-Port Thevenin Equivalence (MPTE), are proposed to emulate realistic, complex, interconnected grid-connected DER systems.
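The power-scaling idea above can be sketched numerically. The following is a minimal illustration, not the paper's actual procedure: the coefficient names (k_v, k_i) and the example ratings are assumptions. Scaling voltage by k_v and current by k_i maps a full-scale system to a lab-scale emulation in which power scales by k_v*k_i and impedance by k_v/k_i, so per-unit behavior is preserved.

```python
# Illustrative power rescaling (coefficient names and ratings are assumptions).
def rescale(v_full, i_full, z_full, k_v, k_i):
    """Map full-scale ratings (V, A, ohm) to lab-scale emulator ratings."""
    v_lab = v_full * k_v
    i_lab = i_full * k_i
    p_lab = v_lab * i_lab            # power scales by k_v * k_i
    z_lab = z_full * (k_v / k_i)     # impedance scales by k_v / k_i
    return v_lab, i_lab, p_lab, z_lab

# Example: a 13.8 kV / 100 A feeder scaled to a ~480 V lab bus.
v, i, p, z = rescale(13_800.0, 100.0, 5.0, k_v=480 / 13_800, k_i=0.5)

# Per-unit impedance is invariant under the scaling:
z_pu_full = 5.0 / (13_800.0 / 100.0)
z_pu_lab = z / (v / i)
```

Because the per-unit impedance is unchanged, the scaled-down hardware reproduces the dynamic behavior of the full-scale system within the emulator's ratings.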
We propose a new distributed optimization algorithm for solving a class of constrained optimization problems in which (a) the objective function is separable (i.e., the sum of local objective functions of agents), (b) the optimization variables of distributed agents, which are subject to nontrivial local constraints, are coupled by global constraints, and (c) only noisy observations are available to estimate (the gradients of) local objective functions. In many practical scenarios, agents may not be willing to share their optimization variables with others. For this reason, we propose a distributed algorithm that does not require the agents to share their optimization variables with each other; instead, each agent maintains a local estimate of the global constraint functions and shares the estimate only with its neighbors. These local estimates of constraint functions are updated using a consensus-type algorithm, while the local optimization variables of each agent are updated using a first-order method based on noisy estimates of gradient. We prove that, when the agents adopt the proposed algorithm, their optimization variables converge with probability 1 to an optimal point of an approximated problem based on the penalty method.
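The mechanism described above can be sketched in a few lines. The following is a minimal toy instance, not the paper's exact algorithm: the problem data (a, c), penalty weight rho, and step size alpha are assumptions. Agents never exchange their decision variables x_i; each one tracks the coupling constraint sum_i x_i - c through a dynamic-average-consensus variable z_i and takes penalty-gradient steps using noisy gradient observations.

```python
# Toy sketch of consensus-tracked penalty gradient descent (data are assumptions).
import numpy as np

rng = np.random.default_rng(0)
N, a, c = 2, np.array([1.0, 3.0]), 2.0     # min sum (x_i - a_i)^2  s.t.  sum x_i <= c
W = np.full((N, N), 1.0 / N)               # doubly stochastic mixing matrix
rho, alpha = 50.0, 0.002                   # penalty weight, step size

x = np.zeros(N)
z = x.copy()                               # z_i tracks (1/N) * sum_j x_j
for _ in range(20000):
    # Each agent uses only its LOCAL estimate z_i of the coupling term.
    grad = 2 * (x - a) + 2 * rho * np.maximum(0.0, N * z - c)
    grad += 0.01 * rng.standard_normal(N)  # noisy gradient observations
    x_new = x - alpha * grad
    z = W @ z + (x_new - x)                # dynamic average consensus update
    x = x_new
```

For large rho the iterates approach the constrained optimum (here roughly x ≈ (0.01, 2.01), with sum x slightly above c by the usual O(1/rho) penalty bias), even though no agent ever sees another agent's variable.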
This letter considers problems related to suppressing epidemic spread over networks given limited curing resources. The spreading dynamics are captured by a susceptible-infected-susceptible model. The epidemic threshold and recovery speed are determined by the contact network structure and the heterogeneous infection and curing rates. We develop a distributed algorithm that can be used for allocating curing resources to meet three potential objectives: 1) minimize total curing cost while preventing an epidemic; 2) maximize recovery speed given sufficient curing resources; or 3) given insufficient curing resources, limit the size of an endemic state. The distributed algorithm is of the Jacobi type and converges geometrically. We provide an upper bound on the convergence rate that depends on the structure and infection rates of the underlying network. Numerical simulations illustrate the efficiency and scalability of our distributed algorithm.
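The threshold that governs whether an allocation of curing resources prevents an epidemic can be illustrated concretely. The sketch below uses the standard mean-field (NIMFA) SIS condition, under which the disease-free state is stable iff the largest real eigenvalue of B A - D is negative, where A is the adjacency matrix, B = diag(beta) holds infection rates, and D = diag(delta) holds curing rates; the four-node network and the rate values are assumptions, not from the letter.

```python
# Heterogeneous SIS threshold check (network and rates are assumptions).
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)    # contact network adjacency
beta = np.array([0.2, 0.3, 0.3, 0.2])        # per-node infection rates

def epidemic_suppressed(delta):
    """True if curing rates delta push the system below the epidemic threshold."""
    M = np.diag(beta) @ A - np.diag(delta)
    return np.max(np.linalg.eigvals(M).real) < 0

low = epidemic_suppressed(np.full(4, 0.1))   # too little curing -> False
high = epidemic_suppressed(np.full(4, 1.0))  # sufficient curing -> True
```

A resource-allocation scheme such as the one in the letter can be viewed as distributing the curing budget delta so that this eigenvalue condition holds at minimum cost.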
Today's cities generate tremendous amounts of data, thanks to a boom in affordable smart devices and sensors. The resulting big data creates opportunities to develop diverse sets of context-aware services and systems, ensuring smart city services are optimized to the dynamic city environment. Critical resources in these smart cities will be more rapidly deployed to regions in need, and to those regions predicted to have an imminent or prospective need. For example, crime data analytics may be used to optimize the distribution of police, medical, and emergency services. However, as smart city services become dependent on data, they also become susceptible to disruptions in data streams, such as data loss due to signal quality reduction or power loss during data collection. This paper presents a dynamic network model for improving service resilience to data loss. The network model identifies statistically significant shared temporal trends across multivariate spatiotemporal data streams and utilizes these trends to improve data prediction performance in the case of data loss. Dynamics also allow the system to respond to changes in the data streams, such as the loss or addition of information flows. The network model is demonstrated on city-based crime rates reported in Montgomery County, MD, USA. A resilient network is developed utilizing shared temporal trends between cities to provide improved crime rate prediction and robustness to data loss, compared with the use of single city-based auto-regression. A maximum improvement in performance of 7.8% is found for Silver Spring, and an average improvement of 5.6% among cities with high crime rates. The model also correctly identifies all the optimal network connections, according to prediction error minimization. City-to-city distance is identified as a predictor of shared temporal trends in crime, and weather is shown to be a strong predictor of crime in Montgomery County.
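The core idea, exploiting a shared temporal trend across streams to fill in lost samples, can be demonstrated on synthetic data. The toy example below is an assumption-laden sketch (it does not use the Montgomery County dataset or the paper's model): two streams share a slow trend, and when stream A's sample is lost, predicting it from the co-located stream B beats last-value persistence because the shared trend carries the missing information.

```python
# Toy demonstration of shared-trend prediction under data loss (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(400)
trend = np.sin(0.1 * t)                        # shared temporal trend
a = trend + 0.02 * rng.standard_normal(t.size)  # stream A (e.g., city 1)
b = trend + 0.02 * rng.standard_normal(t.size)  # stream B (e.g., city 2)

train, test = slice(0, 200), slice(200, 400)
# Fit a_k ≈ w0 + w1 * b_k on the training window.
X = np.column_stack([np.ones(200), b[train]])
w, *_ = np.linalg.lstsq(X, a[train], rcond=None)

pred_cross = w[0] + w[1] * b[test]              # impute A from the neighbor stream
pred_persist = a[np.arange(200, 400) - 1]       # fallback: last observed value of A

mse_cross = np.mean((a[test] - pred_cross) ** 2)
mse_persist = np.mean((a[test] - pred_persist) ** 2)
```

Because the two streams share the trend but have independent noise, the cross-stream predictor's error is bounded by the noise level, while persistence also pays for the trend's drift between samples.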
We study the problem of minimizing the (time) average security costs in large networks/systems comprising many interdependent subsystems, where the state evolution is captured by a susceptible-infected-susceptible (SIS) model. The security costs reflect security investments, economic losses, and recovery costs from infections and failures following successful attacks. We show that the resulting optimization problem is nonconvex and propose a suite of algorithms: two based on convex relaxations, and two others that find a local minimizer using a reduced gradient method and sequential convex programming, respectively. Also, we provide a sufficient condition under which the convex relaxations are exact and, hence, an optimal solution of the original problem can be recovered. Numerical results are provided to validate our analytical results and to demonstrate the effectiveness of the proposed algorithms.
Entanglement routing in near-term quantum networks consists of choosing the optimal sequence of short-range entanglements to combine through swapping operations to establish end-to-end entanglement between two distant nodes. Similar to traditional routing technologies, a quantum routing protocol uses network information to choose the best paths to satisfy a set of end-to-end entanglement requests. However, in addition to network state information, a quantum routing protocol must also take into account the requested entanglement fidelity, the probabilistic nature of swapping operations, and the short lifetime of entangled states. In this work, we formulate a practical entanglement routing problem and analyze and categorize the main approaches to address it, drawing comparisons to, and inspiration from, classical network routing strategies where applicable. We classify and discuss the studied quantum routing schemes as reactive, proactive, opportunistic, and virtual routing.
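One way the probabilistic nature of swapping enters path selection can be made concrete with a small sketch. The topology, link probabilities, and per-swap success probability q below are all assumptions for illustration: candidate paths are ranked by end-to-end entanglement probability, the product of per-link generation probabilities times q for each swap performed at an intermediate node.

```python
# Path ranking by end-to-end entanglement probability (toy graph, assumed values).
def best_path(graph, src, dst, q):
    """Brute-force all simple paths; fine for small near-term topologies."""
    best = (0.0, None)

    def dfs(node, path, prob):
        nonlocal best
        if node == dst:
            swaps = len(path) - 2           # one swap per intermediate node
            p_e2e = prob * q ** swaps
            if p_e2e > best[0]:
                best = (p_e2e, path)
            return
        for nxt, p_link in graph[node].items():
            if nxt not in path:             # keep the path simple
                dfs(nxt, path + [nxt], prob * p_link)

    dfs(src, [src], 1.0)
    return best

graph = {"A": {"B": 0.9, "C": 0.5},
         "B": {"A": 0.9, "C": 0.9},
         "C": {"A": 0.5, "B": 0.9}}
p, path = best_path(graph, "A", "C", q=0.9)
# A-B-C: 0.9 * 0.9 * 0.9 = 0.729 beats the direct A-C link at 0.5.
```

Note how the multi-hop path wins despite paying the swap penalty q, which is exactly the trade-off a quantum routing protocol must weigh alongside fidelity and memory lifetimes.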
We investigate the coexistence of clock synchronization protocols with quantum signals in a common single-mode optical fiber. By measuring optical noise between 1500 nm and 1620 nm, we demonstrate a potential for up to 100 quantum channels, each 100 GHz wide, coexisting with the classical synchronization signals. Both "White Rabbit" and pulsed laser-based synchronization protocols were characterized and compared. We establish a theoretical limit on the fiber link length for coexisting quantum and classical channels. The maximal fiber length is below approximately 100 km for off-the-shelf optical transceivers and can be significantly improved by taking advantage of quantum receivers.
We study the problem of minimizing the (time) average security costs in large systems comprising many interdependent subsystems, where the state evolution is captured by a susceptible-infected-susceptible (SIS) model. The security costs reflect security investments, economic losses and recovery costs from infections and failures following successful attacks. However, unlike in existing studies, we assume that the underlying dependence graph is only weakly connected, but not strongly connected. When the dependence graph is not strongly connected, existing approaches to computing optimal security investments cannot be applied. Instead, we show that it is still possible to find a good solution by perturbing the problem and establishing necessary continuity results that then allow us to leverage the existing algorithms.
Providing differentiated services to meet the unique requirements of different use cases is a major goal of the fifth generation (5G) telecommunication networks and will be even more critical for future 6G systems. Fulfilling this goal requires the ability to assure quality of service (QoS) end to end (E2E), which remains a challenge. A key factor that makes E2E QoS assurance difficult in a telecommunication system is that access networks (ANs) and core networks (CNs) manage their resources autonomously. So far, few results have been available that can ensure E2E QoS over autonomously managed ANs and CNs. Existing techniques rely predominantly on each subsystem to meet static local QoS budgets, with no recourse in case any subsystem fails to meet its local budgets, and hence will have difficulty delivering E2E assurance. Moreover, most existing distributed optimization techniques that can be applied to assure E2E QoS over autonomous subsystems require the subsystems to exchange sensitive information such as their local decision variables. This paper presents a novel framework and a distributed algorithm that can enable ANs and CNs to autonomously "cooperate" with each other to dynamically negotiate their local QoS budgets and to collectively meet E2E QoS goals by sharing only their estimates of the global constraint functions, without disclosing their local decision variables. We prove that this new distributed algorithm converges to an optimal solution almost surely, and also present numerical results to demonstrate that the convergence occurs quickly even with measurement noise.