This paper considers a distributed optimization problem over a multiagent network, in which the objective function is a sum of individual cost functions at the agents. We focus on the case when communication between the agents is described by a directed graph. Existing distributed optimization algorithms for directed graphs require at least the knowledge of the neighbors' out-degree at each agent (due to the requirement of column-stochastic matrices). In contrast, our algorithm requires no such knowledge. Moreover, the proposed algorithm achieves the best known rate of convergence for this class of problems, <inline-formula><tex-math notation="LaTeX"> O(\mu ^k)</tex-math></inline-formula> for <inline-formula><tex-math notation="LaTeX">0<\mu <1</tex-math> </inline-formula>, where <inline-formula><tex-math notation="LaTeX">k</tex-math></inline-formula> is the number of iterations, given that the objective functions are strongly convex and have Lipschitz-continuous gradients. Numerical experiments are also provided to illustrate the theoretical findings.
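To illustrate the idea of distributed optimization over a directed graph with only row-stochastic weights, the following is a minimal sketch of a gradient-tracking scheme in that spirit (each agent also runs a consensus iteration whose diagonal estimates the left Perron eigenvector of the weight matrix). The cost functions, weight matrix, and step size below are made-up toy choices, not the paper's experimental setup.

```python
import numpy as np

# Hypothetical toy instance: 3 agents minimize f(x) = sum_i 0.5*(x - b_i)^2,
# whose minimizer is the average of b. The matrix A is only row stochastic
# (its columns do not sum to 1), so no agent needs its out-degree.
b = np.array([1.0, 2.0, 6.0])
A = np.array([[0.6, 0.4, 0.0],
              [0.0, 0.3, 0.7],
              [0.5, 0.0, 0.5]])   # row stochastic, strongly connected

n = len(b)
x = np.zeros(n)                    # local estimates of the minimizer
Y = np.eye(n)                      # Y = A^k; diag(Y) -> left Perron eigenvector
z = (x - b) / np.diag(Y)           # gradient trackers (grad f_i(x) = x - b_i)
alpha = 0.02                       # step size, assumed small enough

for _ in range(5000):
    Y_new = A @ Y
    x_new = A @ x - alpha * z      # consensus mixing plus a tracked-gradient step
    z = A @ z + (x_new - b) / np.diag(Y_new) - (x - b) / np.diag(Y)
    x, Y = x_new, Y_new

print(x)                           # all entries approach b.mean() = 3.0
```

With strongly convex quadratic costs, the iterates of all three agents converge geometrically to the common minimizer, matching the <inline-formula><tex-math notation="LaTeX"> O(\mu ^k)</tex-math></inline-formula> rate described above.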
This paper considers the problem of a leader that seeks to optimally influence the opinions of agents in a directed network through connecting with a limited number of the agents ("direct followers"), possibly in the presence of a fixed competing leader. The settings involving a single leader and two competing leaders are unified into a general combinatorial optimization problem, for which two heuristic approaches are developed. The first approach is based on a convex relaxation scheme, possibly in combination with the <inline-formula><tex-math notation="LaTeX">\ell _1</tex-math></inline-formula>-norm regularization technique, and the second is based on a greedy selection strategy. The main technical novelties of this work are in the establishment of supermodularity of the objective function and convexity of its continuous relaxation. The greedy approach is guaranteed to have a lower bound on the approximation ratio sharper than <inline-formula><tex-math notation="LaTeX">(1-1/e)</tex-math></inline-formula>, while the convex approach can benefit from efficient (customized) numerical solvers to have practically comparable solutions possibly with faster computation times. The two approaches can be combined to provide improved results. In numerical examples, the approximation ratio can be made to reach <inline-formula><tex-math notation="LaTeX">{\text{90}\%}</tex-math></inline-formula> or higher depending on the number of direct followers.
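As a generic illustration of the greedy selection strategy, the sketch below runs greedy maximization on a stand-in monotone submodular objective (weighted coverage). The paper's actual influence objective is different; greedy on any monotone submodular set function enjoys the classical <inline-formula><tex-math notation="LaTeX">(1-1/e)</tex-math></inline-formula> guarantee, which the analysis above sharpens for its specific objective.

```python
# Hypothetical stand-in objective: weighted coverage, a monotone submodular
# set function, used only to illustrate greedy selection of k "followers".
coverage_sets = {0: {0, 1}, 1: {1, 2, 3}, 2: {3, 4}, 3: {0, 4, 5}}
weights = {e: 1.0 for e in range(6)}

def value(chosen):
    covered = set().union(*(coverage_sets[i] for i in chosen)) if chosen else set()
    return sum(weights[e] for e in covered)

def greedy(k):
    chosen = []
    for _ in range(k):
        # Pick the element with the largest marginal gain.
        gains = {i: value(chosen + [i]) - value(chosen)
                 for i in coverage_sets if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

print(greedy(2))   # the best single pick, then the best addition to it
```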
We propose a new distributed optimization algorithm for solving a class of constrained optimization problems in which (a) the objective function is separable (i.e., the sum of local objective functions of the agents), (b) the optimization variables of distributed agents, which are subject to nontrivial local constraints, are coupled by global constraints, and (c) only noisy observations are available to estimate (the gradients of) local objective functions. In many practical scenarios, agents may not be willing to share their optimization variables with others. For this reason, we propose a distributed algorithm that does not require the agents to share their optimization variables with each other; instead, each agent maintains a local estimate of the global constraint functions and shares the estimate only with its neighbors. These local estimates of constraint functions are updated using a consensus-type algorithm, while the local optimization variables of each agent are updated using a first-order method based on noisy gradient estimates. We prove that, when the agents adopt the proposed algorithm, their optimization variables converge with probability 1 to an optimal point of an approximated problem based on the penalty method.
This letter considers problems related to suppressing epidemic spread over networks given limited curing resources. The spreading dynamic is captured by a susceptible-infected-susceptible model. The epidemic threshold and recovery speed are determined by the contact network structure and the heterogeneous infection and curing rates. We develop a distributed algorithm that can be used for allocating curing resources to meet three potential objectives: 1) minimize total curing cost while preventing an epidemic; 2) maximize recovery speed given sufficient curing resources; or 3) given insufficient curing resources, limit the size of an endemic state. The distributed algorithm is of the Jacobi type and converges geometrically. We provide an upper bound on the convergence rate that depends on the structure and infection rates of the underlying network. Numerical simulations illustrate the efficiency and scalability of our distributed algorithm.
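The epidemic-threshold condition referred to above can be checked numerically. The sketch below uses the standard mean-field stability test for heterogeneous SIS dynamics: the infection-free state is (locally) stable when the largest real part of the eigenvalues of <inline-formula><tex-math notation="LaTeX">BA - D</tex-math></inline-formula> is negative, where <inline-formula><tex-math notation="LaTeX">B</tex-math></inline-formula> and <inline-formula><tex-math notation="LaTeX">D</tex-math></inline-formula> are the diagonal matrices of infection and curing rates. The network and rates are made-up examples, and this is a centralized check, not the letter's distributed allocation algorithm.

```python
import numpy as np

# Hypothetical 4-node contact network and heterogeneous infection rates.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
beta = np.array([0.2, 0.3, 0.25, 0.2])       # infection rates (assumed)

def epidemic_dies_out(delta):
    # Mean-field SIS threshold: stability of B A - D.
    M = np.diag(beta) @ A - np.diag(delta)
    return np.max(np.linalg.eigvals(M).real) < 0

print(epidemic_dies_out(np.full(4, 1.0)))    # ample curing resources
print(epidemic_dies_out(np.full(4, 0.1)))    # insufficient curing
```

A curing allocation is "sufficient" in the sense of objective 1) precisely when this test passes; otherwise the system settles into an endemic state whose size objective 3) seeks to limit.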
This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the individual (possibly nonsmooth) objectives of the agents and the constraint set is the intersection of local constraints. Most existing methods employing subgradient and consensus steps for solving this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to (only) be row stochastic, a distributed projected subgradient algorithm and a variation of this algorithm are presented to solve the problem for cost functions that are convex and Lipschitz continuous. The key component of the algorithms is to adjust the subgradient of each agent by an estimate of its corresponding entry in the normalized left Perron eigenvector of the weight matrix. These estimates are obtained locally from an augmented consensus iteration using the same row stochastic weight matrix and requiring very limited global information about the network. Moreover, based on a regularity assumption on the local constraint sets, a unified analysis is given that can be applied to both unconstrained and constrained problems and without assuming compactness of the constraint sets or an interior point in their intersection.
Further, we establish an upper bound on the absolute objective error evaluated at each agent's available local estimate under a nonincreasing step size sequence. This bound allows us to analyze the convergence rate of both algorithms.
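The eigenvector-estimation component described above can be sketched in a few lines: running plain consensus on the rows of the identity with the same row-stochastic weight matrix <inline-formula><tex-math notation="LaTeX">A</tex-math></inline-formula>, the <inline-formula><tex-math notation="LaTeX">i</tex-math></inline-formula>-th diagonal entry of <inline-formula><tex-math notation="LaTeX">A^k</tex-math></inline-formula> converges to the <inline-formula><tex-math notation="LaTeX">i</tex-math></inline-formula>-th entry of the normalized left Perron eigenvector. The matrix below is a made-up row-stochastic example.

```python
import numpy as np

# Made-up row-stochastic weight matrix on a strongly connected digraph.
A = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.4, 0.0, 0.6]])

Y = np.eye(3)
for _ in range(200):
    Y = A @ Y                      # each agent mixes its neighbors' vectors

# Reference: left Perron eigenvector via a direct eigendecomposition.
vals, vecs = np.linalg.eig(A.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()                 # normalize to sum to 1
print(np.diag(Y), pi)              # the two should (nearly) coincide
```

Agent <inline-formula><tex-math notation="LaTeX">i</tex-math></inline-formula> only ever reads the <inline-formula><tex-math notation="LaTeX">i</tex-math></inline-formula>-th row of <inline-formula><tex-math notation="LaTeX">Y</tex-math></inline-formula>, so the iteration is fully local; the estimate <inline-formula><tex-math notation="LaTeX">[Y_k]_{ii}</tex-math></inline-formula> is what scales each agent's subgradient.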
This paper studies distributed optimization problems where a network of agents seeks to minimize the sum of their private cost functions. We propose new algorithms based on the distributed subgradient method and the finite-time consensus protocol introduced by Sundaram and Hadjicostis (2007). In our first algorithm, the local optimization variables are updated cyclically through a subgradient step while the consensus variables follow a usual consensus protocol periodically interrupted by a predictive consensus estimate reset operation. For convex cost functions with bounded subgradients, this algorithm is guaranteed to converge to a certain range of the optimal value if using a constant step size or to the optimal value if a diminishing step size is in place. For differentiable cost functions whose sum is convex and has a Lipschitz continuous gradient, convergence to the optimal value can be ensured when using a constant step size, even if some of the individual cost functions are nonconvex. In addition, exponential convergence to the optimal solution is achieved when the global cost function is further assumed to be strongly convex. In these cases, the local optimization variables reach consensus in finite time, and then behave as they would under the centralized subgradient method applied to the global problem, except on a slower time scale. The second algorithm is specialized for the case of quadratic cost functions and converges in finite time to the optimal solution. Simulation examples are given to illustrate the algorithms.
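The finite-time consensus idea underlying the predictive reset can be sketched as follows: if <inline-formula><tex-math notation="LaTeX">q(t) = (t-1)p(t)</tex-math></inline-formula> is a polynomial annihilating the weight matrix <inline-formula><tex-math notation="LaTeX">W</tex-math></inline-formula>, every node can recover the exact consensus limit from finitely many of its own iterates as <inline-formula><tex-math notation="LaTeX">x_\infty = \sum_k p_k x_i(k) / p(1)</tex-math></inline-formula>. The symmetric, doubly stochastic <inline-formula><tex-math notation="LaTeX">W</tex-math></inline-formula> below is a made-up 3-node example, so the limit is the average of the initial values.

```python
import numpy as np

# Made-up symmetric, doubly stochastic weight matrix on a connected path graph.
W = np.array([[0.6, 0.4, 0.0],
              [0.4, 0.2, 0.4],
              [0.0, 0.4, 0.6]])
x0 = np.array([1.0, 5.0, 9.0])

# Use the characteristic polynomial (a multiple of the minimal polynomial,
# which suffices since q(W) = 0 by Cayley-Hamilton), and divide out t = 1.
q = np.poly(W)
p, rem = np.polydiv(q, np.array([1.0, -1.0]))

# Record the first deg(p) + 1 consensus iterates.
history = [x0]
for _ in range(len(p) - 1):
    history.append(W @ history[-1])

# Each node combines its own finite history; all recover the limit exactly.
est = sum(pk * xk for pk, xk in zip(p, reversed(history))) / p.sum()
print(est)   # every entry equals x0.mean() = 5.0 (up to roundoff)
```

After such a reset, each node holds the exact average, which is what lets the optimization variables subsequently track a centralized subgradient iteration on a slower time scale.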
Increasing left ventricular mass in hypertensive patients is an independent prognostic marker for adverse cardiovascular outcomes. Genetic factors have been shown to critically affect left ventricular mass.
M235T is one of the genetic polymorphisms that may influence left ventricular mass due to its pivotal role in the regulation of plasma angiotensinogen levels as well as hypertension pathophysiology in Asian populations. Currently, how M235T affects left ventricular mass is not well described in Vietnamese hypertensive patients. This study aimed to investigate the association between M235T and left ventricular mass in Vietnamese patients diagnosed with essential hypertension.
M235T genotyping and 2D echocardiography were performed on 187 Vietnamese subjects with essential hypertension. Ultrasound parameters were used to calculate the left ventricular mass index according to the 2015 guidelines of the American Society of Echocardiography and the European Association of Cardiovascular Imaging. Other clinical characteristics were also recorded, including age, gender, duration of hypertension, hypertensive treatment, lifestyle, renal function, fasting plasma glucose, and lipid profile.
MT and TT genotypes were determined in 30 and 157 subjects, respectively.
The M235T genotype, duration of hypertension, body mass index, and ejection fraction were statistically significant determinants of the left ventricular mass index, which was significantly greater in TT than in MT carriers after adjusting for confounding factors.
The TT genotype of M235T was associated with greater left ventricular mass in Vietnamese patients diagnosed with essential hypertension.
We study the problem of minimizing the (time) average security costs in large networks/systems comprising many interdependent subsystems, where the state evolution is captured by a susceptible-infected-susceptible (SIS) model. The security costs reflect security investments, economic losses, and recovery costs from infections and failures following successful attacks. We show that the resulting optimization problem is nonconvex and propose a suite of four algorithms: two based on convex relaxations, and two that seek a local minimizer using, respectively, a reduced gradient method and sequential convex programming. Also, we provide a sufficient condition under which the convex relaxations are exact and, hence, an optimal solution of the original problem can be recovered. Numerical results are provided to validate our analytical results and to demonstrate the effectiveness of the proposed algorithms.
Federated Learning (FL) has emerged as a means of distributed learning using local data stored at clients with a coordinating server. Recent studies showed that FL can suffer from poor performance and slower convergence when training data at the clients are not independent and identically distributed (IID). Here, we consider auxiliary server learning as a complementary approach to improving the performance of FL on non-IID data. Our analysis and experiments show that this approach can achieve significant improvements in both model accuracy and convergence time even when the dataset utilized by the server is small and its distribution differs from that of the clients' aggregate data. Moreover, experimental results suggest that auxiliary server learning delivers benefits when employed together with other techniques proposed to mitigate the performance degradation of FL on non-IID data.
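A minimal sketch of the auxiliary-server-learning idea on a made-up toy problem: FedAvg on a linear model with non-IID clients, where the server takes an extra gradient step on a small auxiliary dataset after each aggregation round. The model, data sizes, step sizes, and round counts below are all illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])

def make_data(n, shift):
    # Shifted feature distributions make the clients non-IID.
    X = rng.normal(loc=shift, size=(n, d))
    return X, X @ w_true

clients = [make_data(40, s) for s in (-2.0, 0.0, 2.0)]
X_aux, y_aux = make_data(10, 0.5)            # small server-side dataset

def grad(w, X, y):
    # Least-squares gradient.
    return X.T @ (X @ w - y) / len(y)

w = np.zeros(d)
for _ in range(300):
    # Local SGD-style steps at each client, then plain averaging (FedAvg).
    local_models = []
    for X, y in clients:
        wl = w.copy()
        for _ in range(5):
            wl -= 0.01 * grad(wl, X, y)
        local_models.append(wl)
    w = np.mean(local_models, axis=0)
    w -= 0.01 * grad(w, X_aux, y_aux)        # auxiliary server learning step

print(np.linalg.norm(w - w_true))            # residual shrinks toward zero
```

The server step uses only the small auxiliary set whose distribution (shift 0.5) differs from every client's, mirroring the setting analyzed above.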