We consider the distributed convex optimization scenario in which nodes in a network collectively find the minimum of a function using only local communications and computations. Various subgradient algorithms have been developed for this optimization setting for the case in which the global function factorizes as the sum of local functions distributed to the nodes in the network, including the distributed (i) online gradient descent, (ii) Nesterov gradient descent, and (iii) dual averaging algorithms. Generally speaking, these subgradient algorithms assume that, in each communication round, nodes can exchange messages of size comparable to that of the optimization variable. For many high-dimensional optimization problems, this communication requirement exceeds the capabilities of the communication network supporting the distributed optimization. For this reason, we propose a dimensionality reduction technique based on random projections to adapt these distributed optimization algorithms to networks with communication links of limited capacity. In particular, we analyze the performance of the proposed dimensionality reduction technique for the three algorithms above, i.e., (i)-(iii). Numerical simulations illustrate the performance of the proposed approach for these three algorithms.
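The compression step described in this abstract can be illustrated with a minimal sketch: a node projects its d-dimensional message through a Gaussian random matrix generated from a seed shared with the receiver, and the receiver applies the transpose as an approximate inverse. The function names, the shared-seed convention, and the Gaussian choice are our assumptions, not details from the paper.

```python
import numpy as np

def sketch_gradient(grad, d_low, seed):
    # Project a d-dimensional vector to d_low dimensions using a
    # Gaussian random matrix regenerated from a shared seed
    # (hypothetical sketch of the random-projection idea).
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((d_low, grad.size)) / np.sqrt(d_low)
    return P @ grad

def unsketch(msg, d, d_low, seed):
    # The receiver rebuilds the same P from the shared seed and applies
    # its transpose, which is an unbiased approximate inverse since
    # E[P.T @ P] = I for this scaling.
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((d_low, d)) / np.sqrt(d_low)
    return P.T @ msg
```

Only the d_low-dimensional message crosses the network, so the per-round communication cost shrinks from d to d_low at the price of a controlled reconstruction error.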
We introduce a new algorithm for unconstrained optimization, based on introducing a random parameter θ ∈ (0,1) into the standard iterative scheme of the gradient descent method. Thus, some recent results are improved.
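A minimal sketch of the scheme described above, assuming the random θ multiplies a fixed base steplength at every iteration (the function name and the uniform distribution are our illustrative choices, not the paper's exact specification):

```python
import numpy as np

def random_scaled_gd(grad, x0, base_step=0.1, iters=200, seed=0):
    # Gradient descent where each step is scaled by a random
    # theta drawn from (0, 1), as described in the abstract.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        theta = rng.uniform(0.0, 1.0)     # random parameter theta in (0,1)
        x = x - theta * base_step * grad(x)
    return x
```

On a well-conditioned smooth function the randomized scaling keeps the effective step inside the stable range while the iterates still contract toward the minimizer.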
This paper extends the idea of Brezinski’s hybrid acceleration procedure, for the solution of a system of linear equations with a symmetric coefficient matrix of dimension n, to a new context called cooperative computation, involving m agents (m ≪ n), each one concurrently computing the solution of the whole system using an iterative method. Cooperation occurs between the agents through the communication, periodic or probabilistic, of the estimate of each agent to one randomly chosen agent, thus characterizing the computation as concurrent and asynchronous. Every time an agent receives solution estimates from the others, it carries out a least squares computation, involving a small linear system of dimension m, in order to replace its current solution estimate by an affine combination of the received estimates, and the algorithm continues until a stopping criterion is met. In addition, an autocooperative algorithm, in which estimates are updated using affine combinations of current and past estimates, is also proposed. The proposed algorithms are shown through numerical experiments to be efficient for certain matrices, specifically in comparison with the popular Barzilai–Borwein algorithm.
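The small least squares step at the heart of this procedure can be sketched as follows: given m received estimates, find the affine combination (coefficients summing to one) that minimizes the residual of the linear system, which reduces to an m-dimensional KKT solve. This is a schematic reconstruction of the idea, not the authors' exact code.

```python
import numpy as np

def affine_combination(A, b, estimates):
    # Combine m received estimates (columns of X) by the affine
    # combination minimizing ||A x - b||, i.e. minimize ||R alpha||
    # subject to 1^T alpha = 1, where R holds the per-estimate residuals.
    X = np.column_stack(estimates)            # n x m matrix of estimates
    R = A @ X - b[:, None]                    # residual of each estimate
    m = X.shape[1]
    G = R.T @ R                               # small m x m Gram matrix
    ones = np.ones(m)
    # KKT system for the equality-constrained least squares problem
    K = np.block([[G, ones[:, None]], [ones[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([np.zeros(m), [1.0]])
    alpha = np.linalg.solve(K, rhs)[:m]       # affine weights, sum to 1
    return X @ alpha
```

Because each original estimate corresponds to a feasible choice of weights, the combined estimate's residual is never worse than the best individual one.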
In this paper we introduce an acceleration of the gradient descent algorithm with backtracking. The idea is to modify the steplength tk by means of a positive parameter θk, in a multiplicative manner, in such a way as to improve the behaviour of the classical gradient algorithm. It is shown that the resulting algorithm remains linearly convergent, but the reduction in function value is significantly improved.
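One plausible reading of the multiplicative modification is sketched below: the previously accepted steplength is enlarged by a factor θ before Armijo backtracking shrinks it as needed. The constant θ = 2 and the function names are our assumptions for illustration, not the paper's exact parameters.

```python
import numpy as np

def backtracking_gd(f, grad, x0, theta=2.0, beta=0.5, c=1e-4, iters=300):
    # Gradient descent with Armijo backtracking; the candidate step is
    # the previous accepted step scaled multiplicatively by theta,
    # loosely mirroring the acceleration idea (schematic only).
    x = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        g = grad(x)
        t_try = theta * t                     # enlarge the previous step
        while f(x - t_try * g) > f(x) - c * t_try * g @ g:
            t_try *= beta                     # Armijo backtracking
        t = t_try
        x = x - t * g
    return x
```

Re-enlarging the step each iteration lets the method recover large steps after a conservative backtrack, instead of staying stuck with a small steplength.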
Multi-agent systems are being increasingly deployed in challenging environments to perform complex tasks such as multi-target tracking, search-and-rescue, and intrusion detection. This paper formulates the generic target tracking problem as a time-varying optimization problem and puts forth an inexact online gradient descent method for solving it sequentially. The performance of the proposed algorithm is studied by characterizing its dynamic regret, a notion common in the online learning literature. Building upon existing results, we provide improved regret rates that not only allow non-strongly convex costs but also explicate the role of the cumulative gradient error. The objective function is convex, and the variable belongs to a compact domain. The efficacy of the proposed inexact gradient framework is established on a multi-agent multi-target tracking problem.
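The basic loop of such an inexact online method can be sketched as projected gradient descent: at each round one step is taken with a possibly erroneous gradient oracle and the iterate is projected back onto the compact domain, modeled here as a Euclidean ball. Names and the ball-shaped domain are illustrative assumptions.

```python
import numpy as np

def inexact_online_gd(grad_fns, x0, step, radius):
    # Online projected gradient descent: one step per round with an
    # inexact gradient oracle, then projection onto a ball of the
    # given radius (the compact domain).
    x = np.asarray(x0, dtype=float)
    traj = []
    for grad_fn in grad_fns:                  # one oracle per round
        g = np.asarray(grad_fn(x))            # possibly erroneous gradient
        x = x - step * g
        nrm = np.linalg.norm(x)
        if nrm > radius:
            x = x * (radius / nrm)            # Euclidean projection
        traj.append(x.copy())
    return traj
```

Dynamic regret then measures how the cost of this trajectory compares against the per-round minimizers, which move as the targets move.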
We introduce an algorithm for unconstrained optimization based on the transformation of the Newton method with line search into a gradient descent method. The main idea used in the construction of the algorithm is the approximation of the Hessian by an appropriate diagonal matrix. The steplength calculation is based on the Taylor expansion at two successive iterative points and the backtracking line search procedure. Linear convergence of the algorithm is proved for uniformly convex functions and for strictly convex quadratic functions satisfying specified conditions.
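A schematic reconstruction of this idea, assuming the simplest diagonal model γI: γ is re-estimated by matching a second-order Taylor expansion across two successive iterates, the trial step is 1/γ, and Armijo backtracking safeguards it. The formulas and constants below are our illustrative choices, not the authors' exact algorithm.

```python
import numpy as np

def diag_hessian_gd(f, grad, x0, beta=0.5, c=1e-4, iters=200):
    # Gradient descent with the Hessian approximated by gamma * I;
    # gamma is fitted from a Taylor comparison of two successive
    # iterates, and the resulting trial step 1/gamma is backtracked.
    x = np.asarray(x0, dtype=float)
    gamma = 1.0
    for _ in range(iters):
        g = grad(x)
        t = 1.0 / max(gamma, 1e-12)           # trial step from gamma*I model
        while f(x - t * g) > f(x) - c * t * g @ g:
            t *= beta                         # Armijo backtracking
        x_new = x - t * g
        # fit gamma so that f(x_new) = f(x) - t*||g||^2 + 0.5*gamma*t^2*||g||^2
        den = t * t * (g @ g)
        num = 2.0 * (f(x_new) - f(x) + t * (g @ g))
        gamma = num / den if den > 0 and num > 0 else 1.0
        x = x_new
    return x
```

For a convex quadratic the fitted γ is exactly the curvature of f along the current gradient direction, so the trial step 1/γ behaves like a one-dimensional Newton step.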
We introduce a gradient descent algorithm for solving large-scale unconstrained nonlinear optimization problems. The computation of the initial trial steplength is based on the use of both the quasi-Newton property and an approximation of the Hessian inverse by an appropriate scalar matrix. A nonmonotone line search technique is then applied for the steplength calculation. The computational and storage complexity of the new method equals that of the Barzilai–Borwein method. At the same time, the reported numerical results indicate improvements in favor of the new method over the well-known global Barzilai–Borwein method.
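The combination of ingredients named here can be sketched generically: a scalar (Barzilai-Borwein-type) approximation of the inverse Hessian supplies the trial steplength, and a nonmonotone Armijo search (comparing against the maximum of the last M function values) accepts or shrinks it. This is a generic sketch under those assumptions, not the paper's exact method.

```python
import numpy as np

def scalar_hessian_gd(f, grad, x0, M=10, beta=0.5, c=1e-4, iters=300):
    # Gradient method: trial step from a scalar inverse-Hessian
    # approximation (BB1 formula), safeguarded by a nonmonotone
    # line search over the last M function values.
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    hist = [f(x)]
    t = 1.0
    for _ in range(iters):
        f_ref = max(hist[-M:])                # nonmonotone reference value
        while f(x - t * g) > f_ref - c * t * g @ g:
            t *= beta                         # shrink until acceptable
        x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        # BB1 scalar approximation of the inverse Hessian
        t = (s @ s) / (s @ y) if s @ y > 1e-16 else 1.0
        x, g = x_new, g_new
        hist.append(f(x))
    return x
```

Like the Barzilai-Borwein method, this stores only vectors and scalars, so its per-iteration cost and memory footprint match the complexity claim in the abstract.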