Sparse logistic regression, an effective tool for classification, has developed tremendously over the past two decades, from its origin in the ℓ1-regularized version to sparsity-constrained models. This paper studies sparsity-constrained logistic regression via the Newton method. We begin by establishing its first-order optimality condition, associated with a τ-stationary point. This point can be equivalently interpreted as a system of equations, which is then efficiently solved by the Newton method. The method has considerably low computational complexity and enjoys global and quadratic convergence properties. Numerical experiments on random and real data demonstrate its superior performance against seven state-of-the-art solvers.
Full text
Available for:
GEOZS, IJS, IMTLJ, KILJ, KISLJ, NLZOH, NUK, OILJ, PNG, SAZU, SBCE, SBJE, UILJ, UL, UM, UPUK, ZAGLJ, ZRSKP
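The Newton step at the heart of such methods can be illustrated on plain (dense) logistic regression: solve the stationarity equations ∇f(w) = Xᵀ(p − y) = 0 with Hessian Xᵀdiag(p(1−p))X. This is only a minimal sketch with illustrative toy data; the paper's τ-stationary formulation, which restricts these equations to a sparse support, is not reproduced here.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def newton_logistic(X, y, iters=20):
    """Newton's method for 2-parameter logistic regression.

    Solves grad = X^T (p - y) = 0 by Newton steps with the Hessian
    H = X^T diag(p * (1 - p)) X, using an explicit 2x2 solve.
    """
    w = [0.0, 0.0]
    for _ in range(iters):
        p = [sigmoid(w[0] * x0 + w[1] * x1) for x0, x1 in X]
        g = [sum((pi - yi) * xi[j] for pi, yi, xi in zip(p, y, X))
             for j in range(2)]
        h00 = sum(pi * (1 - pi) * xi[0] * xi[0] for pi, xi in zip(p, X))
        h01 = sum(pi * (1 - pi) * xi[0] * xi[1] for pi, xi in zip(p, X))
        h11 = sum(pi * (1 - pi) * xi[1] * xi[1] for pi, xi in zip(p, X))
        det = h00 * h11 - h01 * h01
        w[0] -= (h11 * g[0] - h01 * g[1]) / det
        w[1] -= (h00 * g[1] - h01 * g[0]) / det
    return w

# Toy, non-separable data: features (intercept, t) and labels y.
X = [(1.0, -2.0), (1.0, -1.0), (1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]
y = [0.0, 1.0, 0.0, 1.0, 1.0]
w = newton_logistic(X, y)
```

On such a small, well-conditioned problem the iteration drives the gradient to machine precision in a handful of steps, which is the behavior the quadratic-convergence guarantee formalizes.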
In recent papers, fractional Newton-type methods have been proposed using the Riemann–Liouville and Caputo fractional derivatives in their iterative schemes, with order 2α or 1+α. In this manuscript, we introduce a conformable fractional Newton-type method based on the so-called conformable fractional derivative. A convergence analysis proves its quadratic convergence, and the numerical results confirm the theory and improve on those obtained by the classical Newton method. Unlike previous fractional Newton-type methods, this one involves a low computational cost, and its order of convergence is at least quadratic.
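For reference, the classical Newton iteration x ← x − f(x)/f′(x) that these fractional variants generalize can be sketched as follows; the conformable derivative itself is not reproduced here, and the stopping tolerance is an illustrative choice.

```python
def newton(f, df, x0, tol=1e-14, max_iter=50):
    """Classical Newton's method: x <- x - f(x)/f'(x).

    Stops when the step size falls below tol; near a simple root
    the error roughly squares at every iteration.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Root of f(x) = x^2 - 2, i.e. sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```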
We interpolate planar rigid body motions by piecewise rotational and translational motions. The trajectories of all points are then arc splines, i.e., curves composed of circular arcs and line segments. For objects with arc spline boundaries, the boundary of the volume swept by the object under the motion is then again an arc spline. We prove that the distance between the original motion and its approximation, measured between curves in a suitable kinematic image space, converges quadratically to zero. This implies the same rate of convergence for point trajectories under the two motions.
Hard-thresholding-based algorithms have shown various advantages for sparse optimization, controlling the sparsity level and allowing fast computation. Recent research shows that when techniques from Newton-type methods are integrated, their numerical performance can improve surprisingly. This paper develops a gradient projection Newton pursuit algorithm that mainly adopts the hard-thresholding operator and employs the Newton pursuit only when certain conditions are satisfied. The proposed algorithm converges globally and quadratically under standard assumptions. For compressive sensing problems, the imposed assumptions are much weaker than those of many state-of-the-art algorithms. Moreover, extensive numerical experiments demonstrate its high performance in comparison with other leading solvers.
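The hard-thresholding operator the abstract refers to is the projection onto the set of s-sparse vectors: keep the s largest-magnitude entries and zero the rest. A minimal sketch (the gradient projection and Newton pursuit stages are not reproduced):

```python
def hard_threshold(x, s):
    """Project x onto the set of s-sparse vectors: keep the s
    largest-magnitude entries and zero out everything else."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]),
                      reverse=True)[:s])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]
```

For example, `hard_threshold([0.1, -3.0, 2.0, 0.5], 2)` keeps only the entries -3.0 and 2.0.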
• Consider the quasilinearization technique for nonlinear boundary value problems.
• Introduce and prove a fresh comparison principle for Riemann-Liouville fractional-order differential equations.
• Depict the algorithm to construct the monotone sequences.
• Show uniform and quadratic convergence of the approximate sequences of solutions.
• A numerical example illustrates the efficiency of the established regularization method.
In this work, we deal with the quasilinearization technique for a class of nonlinear Riemann-Liouville fractional-order two-point boundary value problems. Using the quasilinearization technique, we construct a monotone sequence of approximate solutions that converges quadratically to the unique solution of the original problem, and we establish the corresponding convergence estimates. Moreover, the performance of the technique is examined through a numerical example, which shows that our regularization method is effective and stable.
In this paper, we propose a Gauss-Newton algorithm to recover an n-dimensional signal from its phaseless measurements. The algorithm has two stages. In the first stage, it obtains a good initialization by calculating the eigenvector corresponding to the largest eigenvalue of a Hermitian matrix. In the second stage, it solves an optimization problem iteratively using the Gauss-Newton method. Our initialization method makes full use of all measurements and provides a good initial guess as long as the number of random measurements is O(n). For real-valued signals, we prove that a resampled version of the Gauss-Newton iterations converges to the global optimal solution quadratically with O(n log n) random measurements. Numerical experiments show that the Gauss-Newton method has better empirical performance than other algorithms, such as the Wirtinger flow algorithm and the AltMinPhase algorithm.
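The generic Gauss-Newton update solves (JᵀJ)d = −Jᵀr for the step d, where r is the residual vector and J its Jacobian. The phase-retrieval objective itself is not reproduced here; the sketch below applies the same update to a toy exponential fit with illustrative data, started near the solution in the spirit of the paper's initialize-then-refine scheme.

```python
import math

def gauss_newton_exp(t, y, a, b, iters=50):
    """Gauss-Newton for residuals r_i(a, b) = a * exp(b * t_i) - y_i:
    solve (J^T J) d = -J^T r at each step via an explicit 2x2 solve."""
    for _ in range(iters):
        r = [a * math.exp(b * ti) - yi for ti, yi in zip(t, y)]
        Ja = [math.exp(b * ti) for ti in t]           # dr/da
        Jb = [a * ti * math.exp(b * ti) for ti in t]  # dr/db
        g0 = sum(ja * ri for ja, ri in zip(Ja, r))
        g1 = sum(jb * ri for jb, ri in zip(Jb, r))
        h00 = sum(ja * ja for ja in Ja)
        h01 = sum(ja * jb for ja, jb in zip(Ja, Jb))
        h11 = sum(jb * jb for jb in Jb)
        det = h00 * h11 - h01 * h01
        a -= (h11 * g0 - h01 * g1) / det
        b -= (h00 * g1 - h01 * g0) / det
    return a, b

t = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [2.0 * math.exp(0.5 * ti) for ti in t]  # exact data from a=2, b=0.5
a, b = gauss_newton_exp(t, y, 1.8, 0.4)     # start near the solution
```

On this zero-residual problem Gauss-Newton behaves like Newton's method and converges quadratically from a nearby start, mirroring the local guarantee in the abstract.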
The tensor complementarity problem over the circular cone (CCTCP for short), a specially structured nonlinear complementarity problem, is studied. Useful properties of the circular cone help to equivalently reformulate the CCTCP as an implicit fixed-point equation. Based on smoothing functions, we reformulate the obtained fixed-point equation as a family of parameterized smoothing equations. Moreover, we propose a modified Levenberg–Marquardt (LM) algorithm to solve the problem iteratively and show that the sequence generated by the new algorithm converges to a solution quadratically under suitable conditions. Preliminary numerical results demonstrate that the proposed algorithm is effective.
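The smoothing idea can be illustrated with the classic smoothed Fischer-Burmeister function φ_μ(a, b) = a + b − √(a² + b² + 2μ²), a standard choice for scalar complementarity; the circular-cone-specific smoothing functions used in the paper are not reproduced here. For μ > 0 the function is smooth everywhere, and at μ = 0 the equation φ(a, b) = 0 exactly encodes the complementarity conditions a ≥ 0, b ≥ 0, ab = 0.

```python
import math

def phi(a, b, mu):
    """Smoothed Fischer-Burmeister function: smooth for mu > 0, and
    phi(a, b, 0) == 0 if and only if a >= 0, b >= 0, and a*b == 0."""
    return a + b - math.sqrt(a * a + b * b + 2.0 * mu * mu)
```

Driving μ → 0 while solving φ_μ = 0 is what turns the nonsmooth fixed-point equation into a family of smooth equations amenable to Newton-type (here, Levenberg–Marquardt) iterations.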
In this paper, we study the convergence properties of natural gradient methods. By reviewing the mathematical condition for the equivalence between the Fisher information matrix and the generalized Gauss-Newton matrix, together with comparisons of computation and storage, we explain the popularity of the natural gradient method. To ensure global convergence, an adaptively regularized natural gradient method is proposed. By requiring sufficiently probabilistically accurate estimates of both the function and the gradient evaluations, we establish almost sure convergence. For local convergence, we employ the local error bound condition and show that the convergence rate can be quadratic under mild assumptions on the stochastic estimates of the gradients and Fisher information matrices. Preliminary numerical experiments on regularized logistic regression are performed to support our findings.
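The equivalence the abstract reviews can be checked on a toy case: for logistic regression, the Fisher information E_y[∇ℓ ∇ℓᵀ] and the generalized Gauss-Newton matrix both reduce to Σ p_i(1−p_i) x_i x_iᵀ. A small sketch with illustrative data, computing the Fisher matrix by exact enumeration of y ∈ {0, 1} rather than from the closed form:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fisher_by_enumeration(X, w):
    """Fisher information F = E_{y ~ Bernoulli(p)}[s s^T] for logistic
    regression, computed by enumerating y in {0, 1} for each sample.
    The per-sample score is s = (y - p) x."""
    d = len(w)
    F = [[0.0] * d for _ in range(d)]
    for x in X:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        for y, prob in ((1.0, p), (0.0, 1.0 - p)):
            c = prob * (y - p) ** 2
            for i in range(d):
                for j in range(d):
                    F[i][j] += c * x[i] * x[j]
    return F

def gauss_newton_matrix(X, w):
    """Generalized Gauss-Newton matrix: sum of p(1-p) x x^T, which is
    also the exact Hessian of the logistic negative log-likelihood."""
    d = len(w)
    G = [[0.0] * d for _ in range(d)]
    for x in X:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
        c = p * (1.0 - p)
        for i in range(d):
            for j in range(d):
                G[i][j] += c * x[i] * x[j]
    return G

X = [(1.0, -1.0), (1.0, 0.5), (1.0, 2.0)]  # illustrative data
w = (0.3, -0.7)
F, G = fisher_by_enumeration(X, w), gauss_newton_matrix(X, w)
```

The two matrices agree entry by entry because p(1−p)² + (1−p)p² = p(1−p); for other model/loss pairings the equivalence requires the condition the paper reviews.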