In view of some shortcomings of the traditional vertex 1-center (V1C) problem, we introduce the vertex quickest 1-center (VQ1C) problem on a tree, which aims to find a vertex minimizing the maximum transmission time needed to transmit Formula omitted units of data. We first characterize some intrinsic properties of VQ1C and design a binary search algorithm running in Formula omitted time, based on the relationship between V1C and VQ1C, where n is the number of vertices. Furthermore, we investigate the inverse VQ1C problem under the weighted Formula omitted norm, in which a given capacity vector is modified at minimum cost so that a prespecified vertex becomes the vertex quickest 1-center. We introduce the concept of an effective modification, provide optimality conditions for the problem, and then propose an Formula omitted time algorithm. Finally, we present numerical experiments verifying the efficiency of the algorithms.
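For context, VQ1C generalizes the classical vertex 1-center, which selects the vertex minimizing the maximum (shortest-path) distance to all other vertices. A minimal brute-force sketch of that classical problem on a tree (illustrative only; the edge lengths and instance are made up, and the paper's algorithm is far more efficient):

```python
from collections import defaultdict, deque

def vertex_1_center(edges):
    """Brute-force vertex 1-center of a tree given as (u, v, length) edges:
    return the vertex whose maximum distance to any other vertex is smallest."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    def eccentricity(src):
        # BFS works for distances on a tree since each vertex is reached once.
        dist = {src: 0}
        queue = deque([src])
        while queue:
            x = queue.popleft()
            for y, w in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + w
                    queue.append(y)
        return max(dist.values())

    return min(adj, key=eccentricity)

# On a unit-length path 1-2-3-4-5, the middle vertex 3 is the 1-center.
center = vertex_1_center([(1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 5, 1)])
```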
The max+sum spanning tree (MSST) problem is to determine a spanning tree T that minimizes the combined weight Formula omitted for a given edge-weighted undirected network G(V, E, c, w). This problem can be solved in Formula omitted time, where m and n are the numbers of edges and nodes, respectively. The inverse MSST problem (IMSST) aims to determine a new weight vector Formula omitted so that a given Formula omitted becomes an optimal MSST of the new network Formula omitted. Under the weighted Formula omitted norm, the IMSST problem minimizes the modification cost Formula omitted, where q(e) is the cost of modifying the weight w(e). We first provide optimality conditions for the problem. Then we propose a strongly polynomial algorithm running in Formula omitted time based on a binary search and a greedy method: there are O(m) iterations, and in each iteration we solve an MSST problem under a new weight. Finally, we report numerical experiments demonstrating the efficiency of the algorithm.
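The combined weight in MSST is typically max over tree edges of w plus the sum over tree edges of c. One common exact strategy (a hedged sketch with an invented toy instance, not the paper's algorithm) is to try each distinct w-value as a cap on the max-term, compute a minimum spanning tree under c among the admissible edges, and keep the best total:

```python
def mst_sum(n, edges):
    """Kruskal: total c-cost of an MST over (c, u, v) edges, or None if disconnected."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, used = 0, 0
    for c, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += c
            used += 1
    return total if used == n - 1 else None

def msst(n, edges):
    """edges: (u, v, c, w). Minimize max w(e) + sum c(e) over spanning trees by
    capping w at each distinct value and taking an MST under c among allowed edges."""
    best = None
    for cap in sorted({w for _, _, _, w in edges}):
        pool = [(c, u, v) for u, v, c, w in edges if w <= cap]
        s = mst_sum(n, pool)
        if s is not None:
            best = s + cap if best is None else min(best, s + cap)
    return best

# Toy instance: cheap-c edges have large w, and vice versa.
toy = [(0, 1, 1, 5), (1, 2, 1, 5), (2, 3, 1, 5),
       (0, 3, 10, 1), (0, 2, 10, 1), (1, 3, 10, 1)]
value = msst(4, toy)  # best tree uses the three c=1 edges: 3 + max w of 5 = 8
```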
Maximum a posteriori (MAP) inference is an important task for graphical models. Due to complex dependencies among variables in realistic models, finding an exact solution to MAP inference is often intractable. Thus, many approximation methods have been developed, among which the linear programming (LP) relaxation based methods show promising performance. However, one major drawback of LP relaxation is that it may yield fractional solutions. Instead of presenting a tighter relaxation, in this work we propose a continuous but equivalent reformulation of the original MAP inference problem, called LS-LP. We add the Formula omitted-sphere constraint to the original LP relaxation, leading to an intersection with the local marginal polytope that is equivalent to the space of all valid integer label configurations; thus, LS-LP is equivalent to the original MAP inference problem. We propose a perturbed alternating direction method of multipliers (ADMM) algorithm to optimize the LS-LP problem by adding a sufficiently small perturbation Formula omitted to the objective function and constraints. We prove that the perturbed ADMM algorithm globally converges to an Formula omitted-Karush-Kuhn-Tucker (Formula omitted-KKT) point of the LS-LP problem, and we also analyze the convergence rate. Experiments on several benchmark datasets from the Probabilistic Inference Challenge (PIC 2011) and OpenGM 2 show competitive performance of the proposed method against state-of-the-art MAP inference methods.
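The key geometric fact behind the reformulation can be checked on the smallest case (a toy illustration, not the LS-LP formulation itself): for one binary variable with marginal (x0, x1), intersecting the simplex x0 + x1 = 1, x >= 0 with the sphere x0^2 + x1^2 = 1 leaves exactly the two integer labelings, since t^2 + (1 - t)^2 = 1 iff 2t(t - 1) = 0 iff t is 0 or 1:

```python
import numpy as np

# Enumerate points on the 1-simplex {(t, 1 - t) : t in [0, 1]} and keep
# those that also satisfy the sphere constraint x0^2 + x1^2 = 1.
ts = np.linspace(0.0, 1.0, 100001)
points = np.stack([ts, 1.0 - ts], axis=1)
on_sphere = np.isclose((points ** 2).sum(axis=1), 1.0, atol=1e-9)
survivors = points[on_sphere]
# Only the integer vertices (1, 0) and (0, 1) survive the intersection.
```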
In this paper, we present a global optimization method for solving nonconvex mixed integer nonlinear programming (MINLP) problems. A convex overestimation of the feasible region is obtained by replacing the nonconvex constraint functions with convex underestimators. For signomial functions, single-variable power and exponential transformations are used to obtain the convex underestimators. For more general nonconvex functions, two versions of the so-called alphaBB underestimator, valid for twice-differentiable functions, are integrated into the reformulation framework. However, in contrast to branch-and-bound type algorithms, no direct branching is performed. Instead, a piecewise convex reformulation is used to convexify the entire problem in an extended variable space, and the reformulated problem is then solved by a convex MINLP solver. As the piecewise linear approximations are made finer, the solutions to the convexified and overestimated problems form a sequence converging to a global optimal solution. The result is an easily implementable algorithm for solving a very general class of optimization problems. Keywords: Global optimization · Reformulation technique · Convex underestimators · Mixed integer nonlinear programming · Twice-differentiable functions · Signomial functions · Piecewise linear functions · alphaBB underestimator · SGO algorithm
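For intuition on the alphaBB underestimator mentioned above (a minimal sketch; f, the interval, and alpha are chosen here purely for illustration): on [l, u], L(x) = f(x) - (alpha/2)(x - l)(u - x) underestimates f, since the subtracted quadratic is nonnegative on the interval, and is convex whenever alpha >= max(0, -min f''). For f(x) = sin x on [0, pi], alpha = 1 suffices because f'' = -sin x >= -1:

```python
import numpy as np

def alpha_bb_underestimator(f, l, u, alpha):
    """Classical alphaBB underestimator: subtract a nonnegative quadratic that
    vanishes at the interval endpoints and convexifies f for alpha large enough."""
    return lambda x: f(x) - 0.5 * alpha * (x - l) * (u - x)

l, u, alpha = 0.0, np.pi, 1.0
L = alpha_bb_underestimator(np.sin, l, u, alpha)
xs = np.linspace(l, u, 1001)

under = np.all(L(xs) <= np.sin(xs) + 1e-12)       # underestimation on [l, u]
# Convexity check: L''(x) = -sin(x) + alpha >= 0 on [0, pi] for alpha = 1.
convex = np.all(-np.sin(xs) + alpha >= -1e-12)
```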
It is proved that any DCA sequence constructed by Pham Dinh-Le Thi's algorithm for the trust-region subproblem (Pham Dinh and Le Thi, in SIAM J. Optim. 8:476-505, 1998) converges to a Karush-Kuhn-Tucker point of the problem. This result completely settles an open question raised by Le Thi et al. (J. Global Optim., Online First, doi:10.1007/s10898-011-9696-z, 2010). Keywords: Trust-region subproblem · DC algorithm · DCA sequence · Convergence · KKT point
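A DCA iteration for the trust-region subproblem min 0.5 x'Ax + b'x over ||x|| <= r can be sketched as follows (a hedged numpy sketch on an invented instance, not the paper's exact presentation): with the DC split g(x) = 0.5 rho ||x||^2 + b'x restricted to the ball and h(x) = 0.5 x'(rho I - A)x for rho >= the largest eigenvalue of A, each DCA step reduces to a projection onto the ball:

```python
import numpy as np

def dca_trs(A, b, r, x0, rho=None, iters=1000):
    """DCA for min 0.5 x'Ax + b'x  s.t. ||x|| <= r.
    Each step linearizes h at x_k and minimizes the convex remainder over the
    ball, i.e. projects ((rho*I - A) x_k - b) / rho onto the ball of radius r."""
    if rho is None:
        rho = max(np.linalg.eigvalsh(A).max(), 1e-8)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = ((rho * np.eye(len(x)) - A) @ x - b) / rho
        norm = np.linalg.norm(y)
        x = y if norm <= r else y * (r / norm)
    return x

# Nonconvex toy instance (A indefinite); the limit is a KKT point on the boundary.
A = np.diag([1.0, -2.0])
b = np.array([1.0, 0.0])
x = dca_trs(A, b, r=1.0, x0=np.array([0.1, 0.1]))
# KKT residual A x + b + lam * x with multiplier lam = 2 at this limit point.
kkt_residual = np.linalg.norm(A @ x + b + 2.0 * x)
```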
In this paper, we focus on the approximation of smooth functions Formula omitted, up to an unresolvable global phase ambiguity, from a finite set of Short Time Fourier Transform (STFT) magnitude (i.e., spectrogram) measurements. Two algorithms are developed for approximately inverting such measurements, each with theoretical error guarantees establishing its correctness. A detailed numerical study also demonstrates that both algorithms work well in practice and exhibit good numerical convergence behavior.
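The global phase ambiguity is easy to verify numerically (a toy check, with a simple sliding-window DFT standing in for the STFT): multiplying the signal by a unimodular constant e^{i theta} changes every STFT coefficient by the same factor, so all spectrogram magnitudes are unchanged:

```python
import numpy as np

def spectrogram(x, win_len=16, hop=8):
    """Magnitudes of a simple windowed sliding DFT (a stand-in for the STFT)."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.fft(np.array(frames), axis=1))

rng = np.random.default_rng(0)
x = rng.standard_normal(128) + 1j * rng.standard_normal(128)
theta = 0.7
same = np.allclose(spectrogram(x), spectrogram(np.exp(1j * theta) * x))
```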
Numerical optimization has played a central role in addressing key signal processing (SP) problems. Highly effective methods have been developed for a large variety of SP applications, such as communications, radar, filter design, and speech and image analytics, to name a few. However, optimization algorithms often entail considerable complexity, which creates a serious gap between theoretical design/analysis and real-time processing. In this paper, we aim to provide a new learning-based perspective on this challenging issue. The key idea is to treat the input-output relation of an SP algorithm as an unknown nonlinear mapping and use a deep neural network (DNN) to approximate it. If the nonlinear mapping can be learned accurately by a DNN of moderate size, then SP tasks can be performed effectively, since passing the input through a DNN requires only a small number of simple operations. We first identify a class of optimization algorithms that can be accurately approximated by a fully connected DNN. Second, to demonstrate the effectiveness of the proposed approach, we apply it to approximate a popular interference management algorithm, namely the WMMSE algorithm. Extensive experiments using both synthetically generated wireless channel data and real DSL channel data have been conducted. They show that, in practice, only a small network is sufficient to obtain high approximation accuracy, and that DNNs can achieve orders-of-magnitude speedups in computation time compared to the state-of-the-art interference management algorithm.
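The learn-to-approximate idea can be sketched in a few lines (a toy numpy illustration, not the WMMSE setup: a one-hidden-layer fully connected network is fit by gradient descent to a nonlinear scalar mapping standing in for the algorithm's input-output map):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy "algorithm output" to imitate: y = sin(3x) on [-1, 1].
X = rng.uniform(-1.0, 1.0, size=(256, 1))
Y = np.sin(3.0 * X)

# One-hidden-layer fully connected network with tanh activation.
H = 32
W1 = rng.standard_normal((1, H)) * 0.5
b1 = np.zeros(H)
W2 = rng.standard_normal((H, 1)) * 0.5
b2 = np.zeros(1)

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2 + b2

_, pred0 = forward(X)
initial_loss = np.mean((pred0 - Y) ** 2)

lr = 0.05
for _ in range(2000):                              # full-batch gradient descent
    A, pred = forward(X)
    grad_out = 2.0 * (pred - Y) / len(X)           # d(MSE)/d(pred)
    gW2 = A.T @ grad_out
    gb2 = grad_out.sum(axis=0)
    grad_hidden = (grad_out @ W2.T) * (1.0 - A ** 2)  # tanh' = 1 - tanh^2
    gW1 = X.T @ grad_hidden
    gb1 = grad_hidden.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
final_loss = np.mean((pred - Y) ** 2)
```

Once trained, evaluating the network is just two matrix products and an elementwise nonlinearity, which is the source of the speedup the abstract describes.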