We propose an algorithm for solving nonsmooth, nonconvex, constrained optimization problems as well as a new set of visualization tools for comparing the performance of optimization algorithms. Our algorithm is a sequential quadratic optimization method that employs Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton Hessian approximations and an exact penalty function whose parameter is controlled using a steering strategy. While our method has no convergence guarantees, we have found it to perform very well in practice on challenging test problems in controller design involving both locally Lipschitz and non-locally-Lipschitz objective and constraint functions with constraints that are typically active at local minimizers. In order to empirically validate our method and compare it with available alternatives on a new test set of 200 problems of varying sizes, we employ new visualization tools which we call relative minimization profiles. Such profiles are designed to simultaneously assess the relative performance of several algorithms with respect to objective quality, feasibility, and speed of progress, highlighting the trade-offs between these measures when comparing algorithm performance.
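To make the penalty-and-steering mechanism concrete, here is a minimal Python sketch of an L1 exact penalty and one steering-style update of its parameter; the function names, the fraction c_steer, and the shrink factor are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def exact_penalty(f, c, x, mu):
    """L1 exact penalty: mu * f(x) + total violation of c(x) <= 0.

    Sketch only: mu multiplies the objective, so driving mu toward zero
    prioritizes feasibility (one common convention for steering).
    """
    return mu * f(x) + np.sum(np.maximum(c(x), 0.0))

def steer(mu, pred_red, best_red, c_steer=0.5, shrink=0.5):
    """One steering check (illustrative constants): if the current search
    direction's predicted reduction in the infeasibility model falls short
    of a c_steer fraction of the best achievable reduction, shrink mu;
    a full method would then re-solve the QP subproblem with the new mu."""
    if pred_red < c_steer * best_red:
        mu *= shrink
    return mu

# Toy usage with a 1-D objective and a single constraint x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
c = lambda x: np.array([x - 1.0])
print(exact_penalty(f, c, 1.5, mu=1.0))           # 0.25 + 0.5 = 0.75
print(steer(mu=1.0, pred_red=0.1, best_red=0.4))  # shrinks to 0.5
```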
We consider an identification method for a linear continuous time-invariant autonomous system from noisy state observations. In particular, we focus on identification that guarantees asymptotic stability of the system while incorporating some prior knowledge. To this end, we propose to model this identification problem as a Riemannian nonlinear optimization (RNLO) problem, where stability is ensured through a certain Riemannian manifold and the prior knowledge is expressed as nonlinear constraints defined on this manifold. To solve this RNLO problem, we apply the Riemannian sequential quadratic optimization (RSQO) method recently proposed by Obara, Okuno, and Takeda (2022). RSQO performs well in practice and is theoretically guaranteed to find a point satisfying the Karush-Kuhn-Tucker (KKT) conditions of the RNLO problem. In this article, we demonstrate that the identification problem can indeed be solved by RSQO more effectively than by competing algorithms.
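For context, the following sketch (plain NumPy, with toy data and names assumed for illustration) sets up the identification objective and the stability check; it does not implement RSQO itself, only the baseline least-squares fit that the manifold-constrained formulation is designed to improve upon.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of x_dot = A_true x for a stable A_true.
A_true = np.array([[-1.0, 2.0],
                   [0.0, -0.5]])
X = rng.standard_normal((2, 200))
Xdot = A_true @ X + 0.01 * rng.standard_normal((2, 200))

# Baseline: unconstrained least-squares identification, A = Xdot X^+.
# Nothing forces this estimate to be stable; the RNLO formulation instead
# enforces stability by construction through the chosen Riemannian manifold.
A_ls = Xdot @ np.linalg.pinv(X)

# Stability check: asymptotic stability requires all eigenvalues of A
# to have negative real part (negative spectral abscissa).
print("spectral abscissa:", np.max(np.linalg.eigvals(A_ls).real))
```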
A worst-case complexity bound is proved for a sequential quadratic optimization (commonly known as SQP) algorithm that has been designed for solving optimization problems involving a stochastic objective function and deterministic nonlinear equality constraints. Barring additional terms that arise due to the adaptivity of the monotonically nonincreasing merit parameter sequence, the proved complexity bound is comparable to that known for the stochastic gradient algorithm for unconstrained nonconvex optimization. The overall complexity bound, which accounts for the adaptivity of the merit parameter sequence, shows that a result comparable to the unconstrained setting (with additional logarithmic factors) holds with high probability.
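To fix ideas, the result can be read in the following schematic form; the notation and the stationarity measure are assumptions of this note, not taken verbatim from the paper.

```latex
% Schematic reading (notation assumed): measuring stationarity by the
% gradient of the Lagrangian together with the constraint violation,
% the number of iterations needed to reach eps-stationarity in
% expectation satisfies, up to problem-dependent constants and, for
% the adaptive merit parameter, logarithmic factors that hold with
% high probability,
\[
  K(\varepsilon) = \mathcal{O}\!\left(\varepsilon^{-4}\right),
\]
% which is comparable to the iteration count that stochastic gradient
% requires to guarantee $\mathbb{E}\,\|\nabla f(x_k)\| \le \varepsilon$
% on unconstrained nonconvex problems.
```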
In this paper, we propose a new sequential quadratic optimization algorithm for solving two-block nonconvex optimization problems with linear equality and generalized box constraints. First, the idea of the splitting algorithm is embedded in the method for solving the quadratic optimization approximation subproblem, which is decomposed into two independent low-dimensional quadratic optimization subproblems to generate a search direction for the primal variable. Second, a deflection of the steepest descent direction of the augmented Lagrangian function with respect to the dual variable is taken as the search direction of the dual variable. Third, using the augmented Lagrangian function as the merit function, a new primal-dual iterate is generated by an Armijo line search. Under mild conditions, the global convergence of the proposed algorithm is proved. Finally, the algorithm is applied to a series of mid-to-large-scale economic dispatch problems for power systems. Comparisons of the numerical results demonstrate that the proposed algorithm is numerically effective and robust.
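As a concrete illustration of the third ingredient, here is a minimal Armijo backtracking routine on a generic merit function along a primal-dual direction; the names and constants are assumptions for the sketch, and in the proposed method the merit function is the augmented Lagrangian.

```python
import numpy as np

def armijo_step(phi, z, d, dphi, alpha0=1.0, beta=0.5, sigma=1e-4):
    """Backtracking Armijo line search on a merit function phi along the
    direction d, where dphi is the directional derivative of phi at z
    along d (negative for a descent direction). Illustrative sketch: in
    the paper, z stacks the primal and dual variables and phi is the
    augmented Lagrangian."""
    alpha, phi0 = alpha0, phi(z)
    while phi(z + alpha * d) > phi0 + sigma * alpha * dphi:
        alpha *= beta
        if alpha < 1e-12:  # safeguard against a non-descent direction
            break
    return alpha

# Toy usage: quadratic merit, steepest-descent direction.
phi = lambda z: 0.5 * z @ z
z = np.array([1.0, -2.0])
d = -z                      # descent direction; gradient of phi at z is z
print(armijo_step(phi, z, d, dphi=z @ d))  # full step alpha = 1 is accepted
```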
We present a sequential quadratic optimization (SQO) algorithm for nonlinear constrained optimization. The method attains all of the strong global and fast local convergence guarantees of classical SQO methods, but has the important additional feature that fast local convergence is guaranteed when the algorithm is employed to solve infeasible instances. A two-phase strategy, carefully constructed parameter updates, and a line search are employed to promote such convergence. The first-phase subproblem determines the reduction that can be obtained in a local model of an infeasibility measure when the objective function is ignored. The second-phase subproblem then seeks to minimize a local model of the objective while ensuring that the resulting search direction attains a reduction in the local model of the infeasibility measure that is proportional to that attained in the first phase. The subproblem formulations and parameter updates ensure that, near an optimal solution, the algorithm reduces to a classical SQO method for constrained optimization, and, near an infeasible stationary point, the algorithm reduces to a (perturbed) SQO method for minimizing constraint violation. Global and local convergence guarantees for the algorithm are proved under reasonable assumptions, and numerical results are presented for a large set of test problems.
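A minimal sketch of the two subproblems, assuming linearized constraint data c + J d at the current iterate, an identity Hessian model, an infinity-norm safeguard on the step, and the convex-modeling package cvxpy; the toy data and the proportionality fraction eps are illustrative choices, not the paper's parameter rules.

```python
import cvxpy as cp
import numpy as np

# Hypothetical linearization at the current iterate: equality constraints
# c(x) = 0 linearized as c + J d, objective gradient g, step dimension n.
c = np.array([1.0, -0.5])
J = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, -1.0]])
g = np.array([0.5, -1.0, 0.25])
n = 3

# Phase 1: best achievable reduction in the l1 model of infeasibility,
# ignoring the objective (with a trust-region-like safeguard on the step).
d1 = cp.Variable(n)
p1 = cp.Problem(cp.Minimize(cp.norm1(c + J @ d1)),
                [cp.norm_inf(d1) <= 1.0])
p1.solve()
best_violation = p1.value

# Phase 2: minimize the objective model while attaining an eps fraction
# of the phase-1 reduction in the infeasibility model.
eps = 0.9
v0 = np.linalg.norm(c, 1)  # current model infeasibility
d2 = cp.Variable(n)
p2 = cp.Problem(
    cp.Minimize(g @ d2 + 0.5 * cp.sum_squares(d2)),  # Hessian model H = I
    [cp.norm1(c + J @ d2) <= v0 - eps * (v0 - best_violation),
     cp.norm_inf(d2) <= 1.0])
p2.solve()
print("search direction:", d2.value)
```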
We propose a sequential quadratic optimization method for solving nonlinear optimization problems with equality and inequality constraints. The novel feature of the algorithm is that, during each iteration, the primal-dual search direction is allowed to be an inexact solution of a given quadratic optimization subproblem. We present a set of generic, loose conditions that the search direction (i.e., the inexact subproblem solution) must satisfy so that global convergence of the algorithm for solving the nonlinear problem is guaranteed. The algorithm can be viewed as a globally convergent inexact Newton-based method. The results of numerical experiments are provided to illustrate the reliability of the proposed method.
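The flavor of this inexactness can be sketched in a few lines: solve the QP subproblem's KKT system only approximately with a few iterations of an iterative solver, then apply a verifiable residual test; the toy matrices and the acceptance condition below are illustrative assumptions, not the paper's actual conditions.

```python
import numpy as np
from scipy.sparse.linalg import minres

# KKT system of the (equality-constrained) QP subproblem:
# [ H  J^T ] [ d ]   [ -g ]
# [ J   0  ] [ y ] = [ -c ]
H = np.array([[4.0, 1.0], [1.0, 3.0]])
J = np.array([[1.0, 1.0]])
g = np.array([1.0, -2.0])
c = np.array([0.5])

K = np.block([[H, J.T], [J, np.zeros((1, 1))]])
rhs = -np.concatenate([g, c])

# Inexact primal-dual direction: a few iterations of MINRES on the
# symmetric KKT system instead of an exact factorization.
sol, info = minres(K, rhs, maxiter=3)
d, y = sol[:2], sol[2:]

# A verifiable acceptance test (illustrative, not the paper's conditions):
# the KKT residual must be small relative to the right-hand side.
res = K @ sol - rhs
accept = np.linalg.norm(res) <= 0.1 * np.linalg.norm(rhs)
print("direction:", d, "multiplier:", y, "accepted:", accept)
```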
Several spectral unmixing techniques have been developed for subpixel mapping using hyperspectral data in the past two decades, among which the fully constrained least squares method based on the linear spectral mixture model (LSMM) has been widely accepted. However, a shortcoming of this method is that it uses the Euclidean spectral distance measure and is therefore sensitive to the magnitude of the spectra. While other spectral matching criteria are available, such as spectral angle mapping (SAM) and spectral information divergence (SID), the current unmixing algorithm cannot be extended to these measures. In this paper, we propose a unified subpixel mapping framework that models the unmixing process as a best match of the unknown pixel's spectrum to a weighted sum of the endmembers' spectra. We introduce sequential quadratic programming to solve the nonlinear optimization problem encountered in the implementation of this framework. The main feature of this proposed method is that it is not restricted to any particular similarity measure. Experiments were conducted with both simulated and Hyperion data. The tests demonstrated the proposed framework's advantage in accommodating various spectral similarity measures and provided performance comparisons of the Euclidean distance measure with other spectral matching criteria including SAM, the spectral correlation measure, and SID.
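As a minimal illustration of the measure-agnostic idea, the sketch below unmixes a synthetic pixel with SciPy's SLSQP solver (an SQP implementation) under nonnegativity and sum-to-one constraints; the data are synthetic, and the smooth cosine dissimilarity used here is an assumed stand-in that is monotonically related to the spectral angle.

```python
import numpy as np
from scipy.optimize import minimize

def cos_dissimilarity(a, b):
    """1 - cosine similarity between spectra a and b; monotonically
    related to the spectral angle (SAM) and smooth at a perfect match."""
    return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def unmix(pixel, endmembers, measure=cos_dissimilarity):
    """Find abundances w >= 0 with sum(w) = 1 minimizing the chosen
    dissimilarity between the pixel and the mixed spectrum E @ w; any
    smooth measure can be plugged in, mirroring the unified framework."""
    k = endmembers.shape[1]
    w0 = np.full(k, 1.0 / k)
    res = minimize(lambda w: measure(pixel, endmembers @ w), w0,
                   method="SLSQP", bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    return res.x

# Synthetic example: 3 bands, 2 endmembers, pixel mixed 70/30.
E = np.array([[1.0, 0.2],
              [0.8, 0.6],
              [0.3, 0.9]])
pixel = E @ np.array([0.7, 0.3])
print(unmix(pixel, E))  # expect approximately [0.7, 0.3]
```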