Totally nonnegative matrices. Fallat, Shaun M.; Johnson, Charles R.
2011-04-11. Volume: 35. eBook. Open access.
Totally nonnegative matrices arise in a remarkable variety of mathematical applications. This book is a comprehensive and self-contained study of the essential theory of totally nonnegative matrices, defined by the nonnegativity of all subdeterminants. It explores methodological background, historical highlights of key ideas, and specialized topics.
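The definition above, nonnegativity of all subdeterminants, can be checked by brute force for small matrices. A minimal Python sketch (illustrative only: the function name and tolerance are my own choices, and the cost grows exponentially with matrix size):

```python
import numpy as np
from itertools import combinations

def is_totally_nonnegative(A, tol=1e-12):
    """Check total nonnegativity by brute force over all square submatrices.

    Exponential in the matrix size, so only practical for small matrices;
    this illustrates the definition rather than giving an efficient test.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if np.linalg.det(A[np.ix_(rows, cols)]) < -tol:
                    return False
    return True

# The symmetric Pascal matrix is a classical totally nonnegative example.
P = np.array([[1, 1, 1],
              [1, 2, 3],
              [1, 3, 6]], dtype=float)
print(is_totally_nonnegative(P))                      # True
print(is_totally_nonnegative(np.array([[1.0, 2.0],
                                       [3.0, 1.0]]))) # False: det = -5
```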
This work aims to undertake, starting from the systematization of our questions as a teaching team, a critical re-reading of the instrumental contributions of Pichon-Rivière and Paulo Freire. We propose to carry out this re-reading from the contributions of decolonial theories and the feminisms of Nuestra América, noting the immanent dialogue that our profession maintains with immediate emerging contexts as well as with structural conditioning factors. Both aspects, in the current context of a global pandemic, sharpen their impact on the everyday reproduction of the existence of the subjects with whom we work. The theoretical, epistemological, and political intention is to think through the dialogue and integration of these debates from an operational standpoint in the construction of instrumental tools such as coordination and observation in the field of professional intervention. Integrating these contributions means broadening our perspective from intersectional dimensions of the Social Question, bringing the epistemic matrices of the feminisms of Nuestra América and decolonial theories into dialogue with the historicization and politicization of the tactical-operational or instrumental elements of group social work.
Intensive research in matrix completions, moments, and sums of Hermitian squares has yielded a multitude of results in recent decades. This book provides a comprehensive account of this quickly developing area of mathematics and applications and gives complete proofs of many recently solved problems. With MATLAB codes and more than 200 exercises, the book is ideal for a special topics course for graduate or advanced undergraduate students in mathematics or engineering, and will also be a valuable resource for researchers.
This book offers a detailed treatment of the mathematical theory of Krylov subspace methods, with a focus on solving systems of linear algebraic equations. Starting from the idea of projections, Krylov subspace methods are characterised by their orthogonality and minimisation properties. Projections onto highly nonlinear Krylov subspaces can be linked with the underlying problem of moments, and therefore Krylov subspace methods can be viewed as matching-moments model reduction. This allows enlightening reformulations of questions from matrix computations into the language of orthogonal polynomials, Gauss–Christoffel quadrature, continued fractions, and, more generally, of Vorobyev's method of moments. Using the concept of cyclic invariant subspaces, conditions are studied that allow the generation of orthogonal Krylov subspace bases via short recurrences. The results motivate the practically important distinction between Hermitian and non-Hermitian problems. Finally, the book thoroughly addresses the computational cost of using Krylov subspace methods. The investigation includes the effects of finite precision arithmetic and focuses on the method of conjugate gradients (CG) and the generalised minimal residual method (GMRES) as major examples. The book emphasises that algebraic computations must always be considered in the context of solving real-world problems, where the mathematical modelling, discretisation, and computation cannot be separated from each other. Moreover, the book underlines the importance of the historical context and demonstrates that knowledge of early developments can play an important role in understanding and resolving very recent computational problems. Many extensive historical notes are therefore included as an inherent part of the text. The book ends by formulating some omitted issues and challenges which need to be addressed in future work.
The book is intended as a research monograph which can be used in a wide scope of graduate courses on related subjects. It can be beneficial also for readers interested in the history of mathematics.
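Since the abstract singles out the method of conjugate gradients (CG), a minimal textbook sketch may help fix ideas. This is a generic implementation, not code from the book; the test problem (a 1-D Laplacian) is my own choice:

```python
import numpy as np

def conjugate_gradients(A, b, tol=1e-10, maxit=None):
    """Textbook CG for a symmetric positive definite A (a sketch, not a
    production solver). The k-th iterate minimizes the A-norm of the
    error over the k-th Krylov subspace span{b, Ab, ..., A^{k-1} b}."""
    n = b.size
    x = np.zeros(n)
    r = b.copy()          # residual b - A x (x starts at zero)
    p = r.copy()          # search direction
    rs = r @ r
    for _ in range(maxit or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p   # A-conjugate update of the direction
        rs = rs_new
    return x

# SPD test problem: the standard tridiagonal 1-D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = conjugate_gradients(A, b, maxit=500)
print(np.linalg.norm(A @ x - b))  # residual norm, near machine precision
```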
Estimating covariance matrices is a problem of fundamental importance in multivariate statistics. In practice it is increasingly frequent to work with data matrices X of dimension n × p, where p and n are both large. Results from random matrix theory show very clearly that in this setting, standard estimators like the sample covariance matrix in general perform very poorly. In this "large n, large p" setting, it is sometimes the case that practitioners are willing to assume that many elements of the population covariance matrix are equal to 0, and hence that this matrix is sparse. We develop an estimator to handle this situation. The estimator is shown to be consistent in operator norm when, for instance, we have $p\asymp n$ as n → ∞. In other words, the largest singular value of the difference between the estimator and the population covariance matrix goes to zero. This implies consistency of all the eigenvalues and consistency of the eigenspaces associated to isolated eigenvalues. We also propose a notion of sparsity for matrices that is "compatible" with spectral analysis and is independent of the ordering of the variables.
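A common concrete instance of this kind of sparse covariance estimator is entrywise hard thresholding of the sample covariance. The sketch below illustrates the idea under the usual threshold scaling of sqrt(log(p)/n); it is not necessarily the exact estimator the abstract develops:

```python
import numpy as np

def threshold_covariance(X, t=None):
    """Hard-thresholding covariance estimator (a sketch of the kind of
    estimator the abstract describes, not necessarily the authors' own).

    Entries of the sample covariance below t in absolute value are set
    to zero; diagonal entries (variances) are always kept. A common
    default scales the threshold like sqrt(log(p)/n)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)
    if t is None:
        t = np.sqrt(np.log(p) / n)
    T = np.where(np.abs(S) >= t, S, 0.0)
    np.fill_diagonal(T, np.diag(S))  # never threshold the variances
    return T

# Sparse ground truth: identity plus one off-diagonal dependence.
rng = np.random.default_rng(0)
p = 40
Sigma = np.eye(p)
Sigma[0, 1] = Sigma[1, 0] = 0.6
X = rng.multivariate_normal(np.zeros(p), Sigma, size=200)
T = threshold_covariance(X)
print(np.count_nonzero(T), "nonzeros out of", T.size)
```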
In this paper, we consider the problem of domain adaptation. We propose to view the data through the lens of covariance matrices and present a method for domain adaptation using parallel transport on the cone manifold of symmetric positive-definite matrices. We provide rigorous analysis using Riemannian geometry, illuminating the theoretical guarantees and benefits of the presented method. In addition, we demonstrate these benefits using experimental results on simulations and real-measured data.
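For the affine-invariant metric on the SPD cone, parallel transport along the geodesic from A to B has the well-known closed form Gamma(S) = E S E^T with E = (B A^{-1})^{1/2}. A small sketch of that standard formula (computed via symmetric square roots; not claimed to reproduce the paper's full adaptation method):

```python
import numpy as np

def spd_sqrt(S):
    """Symmetric square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.sqrt(w)) @ V.T

def parallel_transport(S, A, B):
    """Transport the tangent vector S from A to B on the SPD manifold
    with the affine-invariant metric: Gamma(S) = E S E^T, where
    E = (B A^{-1})^{1/2} is computed using symmetric square roots only,
    since E = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{-1/2} satisfies
    E^2 = B A^{-1}."""
    A_h = spd_sqrt(A)
    A_ih = np.linalg.inv(A_h)
    E = A_h @ spd_sqrt(A_ih @ B @ A_ih) @ A_ih
    return E @ S @ E.T

# Tiny sanity check: the transport is linear (zero maps to zero) and a
# symmetric tangent vector stays symmetric.
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)); A = M @ M.T + 4 * np.eye(4)
M = rng.standard_normal((4, 4)); B = M @ M.T + 4 * np.eye(4)
S = rng.standard_normal((4, 4)); S = S + S.T
T = parallel_transport(S, A, B)
print(np.allclose(T, T.T))  # True
```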
Random matrix theory, both as an application and as a theory, has evolved rapidly over the past fifteen years. Log-Gases and Random Matrices gives a comprehensive account of these developments, emphasizing log-gases as a physical picture and heuristic, as well as covering topics such as beta ensembles and Jack polynomials.
Current spectral compressed sensing methods via Hankel matrix completion employ symmetric factorization to demonstrate the low-rank property of the Hankel matrix. However, previous non-convex gradient methods only utilize asymmetric factorization to achieve spectral compressed sensing. In this paper, we propose a novel nonconvex projected gradient descent method for spectral compressed sensing via symmetric factorization, named Symmetric Hankel Projected Gradient Descent (SHGD), which updates only one matrix and avoids a balancing regularization term. SHGD reduces the computation and storage costs by about half compared to the prior gradient method based on asymmetric factorization. Moreover, the symmetric factorization employed in our work is entirely different from the prior low-rank factorization model, introducing a new factorization ambiguity under complex orthogonal transformations. Novel distance metrics are designed for our factorization method, and a linear convergence guarantee to the desired signal is established with O(r^2 log(n)) observations. Numerical simulations demonstrate the superior performance of the proposed SHGD method in phase transitions and computational efficiency compared to state-of-the-art methods.
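The low-rank structure these methods exploit is easy to see numerically: a signal that is a sum of r complex sinusoids yields a Hankel matrix of rank r (via its Vandermonde decomposition). A small illustration of that property (a generic construction, not the SHGD algorithm itself):

```python
import numpy as np

def hankel_matrix(x, k):
    """k x (len(x)-k+1) Hankel matrix with H[i, j] = x[i + j]."""
    n = len(x)
    return np.array([[x[i + j] for j in range(n - k + 1)] for i in range(k)])

# A spectrally sparse signal: a sum of r = 3 complex sinusoids. Its
# Hankel matrix has rank exactly r, the structure Hankel-completion
# methods rely on.
n, r = 64, 3
freqs = np.array([0.1, 0.27, 0.4])
t = np.arange(n)
x = sum(np.exp(2j * np.pi * f * t) for f in freqs)
H = hankel_matrix(x, 32)
print(np.linalg.matrix_rank(H))  # 3
```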
A randomized algorithm for computing a data-sparse representation of a given rank-structured matrix A (a.k.a. an H-matrix) is presented. The algorithm draws on the randomized singular value decomposition (RSVD) and operates under the assumption that methods for rapidly applying A and A∗ to vectors are available. The algorithm uses graph coloring algorithms to analyze the hierarchical tree that defines the rank structure and to generate a tailored probability distribution from which to draw the random test matrices. The matrix is then applied to the test matrices, and in a final step the matrix itself is reconstructed from the observed input–output pairs. The method presented is an evolution of the "peeling algorithm" of Lin et al. (2011). For the case of uniform trees, the new method substantially reduces the pre-factor of the original peeling algorithm. More significantly, the new technique leads to dramatic acceleration for many non-uniform trees, since it constructs test matrices that are optimized for a given tree. The algorithm is particularly effective for kernel matrices involving a set of points restricted to a lower-dimensional object than the ambient space, such as a boundary integral equation defined on a surface in three dimensions.
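The access model assumed above, fast products with A and A*, is exactly what the basic RSVD building block needs. A generic sketch of that building block (the plain RSVD, not the tree-aware peeling variant the abstract describes; function names and the oversampling default are my own choices):

```python
import numpy as np

def rsvd(apply_A, apply_At, n, rank, oversample=10, rng=None):
    """Basic randomized SVD using only matrix products with A and A*
    (the black-box access model); a generic sketch, not a tree-aware
    algorithm. n is the number of columns of A."""
    rng = rng or np.random.default_rng(0)
    k = rank + oversample
    Y = apply_A(rng.standard_normal((n, k)))   # sample the range of A
    Q, _ = np.linalg.qr(Y)                     # orthonormal range basis
    B = apply_At(Q).conj().T                   # B = Q* A, small matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :rank], s[:rank], Vt[:rank]

# Sanity check on an explicit low-rank matrix, accessed only via
# matrix products.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 150))
U, s, Vt = rsvd(lambda X: A @ X, lambda X: A.T @ X, 150, rank=5)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print("relative error:", err)
```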