It is well known that the bivariate polynomial interpolation problem at uniformly distributed domain points of a triangle is correct; thus the corresponding interpolation matrix M is nonsingular. Schumaker conjectured that all principal submatrices of M are nonsingular too, and furthermore that all of the corresponding determinants (the principal minors) are positive. This result would solve the constrained interpolation problem. In this paper, the conjecture on minors is confirmed for polynomial degree ⩽ 17 and for some particular configurations of domain points.
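The nonsingularity claim can be checked numerically for small degrees. Below is a minimal sketch that builds the collocation matrix at the uniform domain points of the unit triangle in the monomial basis; this basis is an assumption (the paper's matrix M is presumably stated in a specific polynomial basis, and positivity of minors depends on that choice, but nonsingularity does not).

```python
import numpy as np

def interp_matrix(d):
    """Collocation matrix for degree-d bivariate polynomials at the
    uniform domain points (i/d, j/d), i + j <= d, of the unit triangle,
    evaluated in the monomial basis x**a * y**b (illustrative choice)."""
    pts = [(i / d, j / d) for i in range(d + 1) for j in range(d + 1 - i)]
    exps = [(a, b) for a in range(d + 1) for b in range(d + 1 - a)]
    return np.array([[x**a * y**b for (a, b) in exps] for (x, y) in pts])

# A nonzero determinant confirms the interpolation problem is correct
# (poised) for this degree.
M = interp_matrix(2)
```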
The sepr-sets of sign patterns
Hogben, Leslie; Lin, Jephian C.-H.; Olesky, D.D.
Linear & Multilinear Algebra, 10/2020, Volume 68, Issue 10
Journal Article; Peer reviewed
Given a real symmetric matrix, the sepr-sequence records information about the existence of principal minors of each order that are positive, negative, or zero. This paper extends the notion of the sepr-sequence to matrices whose entries are of prescribed signs, that is, to sign patterns. A sufficient condition is given for a sign pattern to have a unique sepr-sequence, and it is conjectured to be necessary. The sepr-sequences of sign semi-stable patterns are shown to be well-structured; in some special circumstances, the sepr-sequence is enough to guarantee that the sign pattern is sign semi-stable. In alignment with previous work on symmetric matrices, the sepr-sequences for sign patterns realized by symmetric nonnegative matrices of orders two and three are characterized.
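The raw data that a sepr-sequence summarizes can be computed by brute force: for each order k, record which signs occur among the k-by-k principal minors. The sketch below shows only this sign data; the exact letter encoding of the sepr-sequence itself is not reproduced here.

```python
import itertools
import numpy as np

def minor_signs(A, tol=1e-9):
    """For each order k = 1..n, report which signs ('+', '-', '0')
    occur among the k-by-k principal minors of the symmetric matrix A.
    This is the information a sepr-sequence encodes, shown raw."""
    n = A.shape[0]
    out = []
    for k in range(1, n + 1):
        signs = set()
        for rows in itertools.combinations(range(n), k):
            d = np.linalg.det(A[np.ix_(rows, rows)])
            signs.add('+' if d > tol else '-' if d < -tol else '0')
        out.append(signs)
    return out
```

For the 2-by-2 identity, every principal minor of every order is positive, so each order reports only '+'.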
To generalize D-nilpotent matrices, which play a role in the study of Druzkowski maps, we introduce quasi-D-nilpotent matrices. A matrix A is called quasi-D-nilpotent if there exists a subspace V of diagonal matrices of codimension 1 such that DA is nilpotent for all D ∈ V. It is proved that a quasi-D-nilpotent matrix has few nonzero principal minors. We also determine irreducible quasi-D-nilpotent matrices and the Frobenius normal forms of quasi-D-nilpotent matrices with respect to permutation similarity.
We study an inverse eigenvalue problem (IEP) of reconstructing a special kind of symmetric acyclic matrices whose graph is a generalized star graph. The problem involves the reconstruction of a matrix by the minimum and maximum eigenvalues of each of its leading principal submatrices. To solve the problem, we use the recurrence relation of characteristic polynomials among leading principal minors. The necessary and sufficient conditions for the solvability of the problem are derived. Finally, a numerical algorithm and some examples are given.
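The recurrence among characteristic polynomials of leading principal submatrices can be illustrated in the simplest acyclic case, a path graph (symmetric tridiagonal matrix); the generalized-star recurrence used in the paper is analogous but branch-aware. This sketch evaluates each leading characteristic polynomial at a point via the classical three-term recurrence.

```python
import numpy as np

def leading_char_polys(a, b, lam):
    """Evaluate p_k(lam) = det(lam*I - A_k) for the leading principal
    submatrices A_k of a symmetric tridiagonal matrix with diagonal a
    and off-diagonal b, using the three-term recurrence
        p_k = (lam - a[k-1]) * p_{k-1} - b[k-2]**2 * p_{k-2},
    with p_0 = 1."""
    p = [1.0, lam - a[0]]
    for k in range(2, len(a) + 1):
        p.append((lam - a[k - 1]) * p[-1] - b[k - 2] ** 2 * p[-2])
    return p
```

As a check, for diagonal (1, 2) and off-diagonal (3), the recurrence at lam = 0 agrees with det(-A) computed directly.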
In this work, we consider particular matrices [formula omitted] and [formula omitted] whose entries are [formula omitted] and [formula omitted], respectively, where [formula omitted] is a real number. We derive relationships between generalized Fibonacci numbers and the characteristic polynomials of these matrices.
Given a vector u ∈ R^(2^n), the principal minor assignment problem asks when there is an n×n matrix having its 2^n principal minors given by u. This paper explores the following related problem. Given a sequence r_0 r_1 ⋯ r_n of 0s and 1s, does there exist an n×n real symmetric matrix that has a principal submatrix of rank k if and only if r_k = 1, for all 0 ⩽ k ⩽ n? Certain conditions are shown to be necessary in order for this question to have an affirmative answer. Several families of matrices are constructed to attain certain classes of sequences. The problem is solved completely for n ⩽ 6, and for 7 ⩽ n ⩽ 10 in the case of sequences beginning with 010.
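For a concrete matrix, the sequence r_0 r_1 ⋯ r_n can be computed by brute force over all nonempty principal submatrices; a minimal sketch (conventions for the r_0 term vary in the literature, so here r_0 = 1 simply when some principal submatrix is a zero matrix):

```python
import itertools
import numpy as np

def rank_sequence(A, tol=1e-9):
    """Return [r_0, ..., r_n] where r_k = 1 iff the symmetric matrix A
    has a (nonempty) principal submatrix of rank exactly k."""
    n = A.shape[0]
    r = [0] * (n + 1)
    for m in range(1, n + 1):
        for rows in itertools.combinations(range(n), m):
            k = np.linalg.matrix_rank(A[np.ix_(rows, rows)], tol=tol)
            r[k] = 1
    return r
```

For diag(1, 0) the 1-by-1 submatrix [0] has rank 0, the submatrix [1] has rank 1, and the full matrix has rank 1, giving the sequence 110.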
Recently, raw experimental data in machine learning often appear as direct comparisons between objects (featureless data). In practice, various ways of evaluating the difference or similarity of a pair of objects are used in image and data mining, image analysis, bioinformatics, etc. Nevertheless, such comparisons often fail to be genuine distances or correlations (scalar products), that is, correctly defined functions on a limited set of elements. This problem is known as metric violation in ill-posed matrices. It is therefore necessary to recover the violated metrics and to provide optimal conditionality of the corresponding matrices of pairwise distances and similarities. This is the correct basis for applying modern machine learning algorithms.
The Cayley transform, C(A) = (I − A)(I + A)^(−1), maps skew-symmetric matrices to orthogonal matrices and vice versa. Given an orthogonal matrix Q, we can choose a diagonal matrix D with each diagonal entry ±1 (a signature matrix) and, if I + QD is nonsingular, calculate the skew-symmetric matrix C(QD). An open problem is to show that, by a suitable choice of D, we can make every entry of C(QD) less than or equal to 1 in absolute value. We solve this problem by showing that the principal minors of C(QD) are related in a simple way to the principal minors of C(Q).
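The basic correspondence is easy to verify numerically: applying the transform to a skew-symmetric matrix yields an orthogonal matrix, and the transform is an involution, so applying it again recovers the original. A minimal sketch (the 2-by-2 example matrix is illustrative only):

```python
import numpy as np

def cayley(A):
    """Cayley transform C(A) = (I - A) @ inv(I + A), defined when
    I + A is nonsingular; maps skew-symmetric matrices to orthogonal
    matrices and vice versa, and satisfies C(C(A)) = A."""
    n = A.shape[0]
    I = np.eye(n)
    return (I - A) @ np.linalg.inv(I + A)

# Usage sketch: a skew-symmetric A gives an orthogonal Q = cayley(A)
A = np.array([[0.0, 2.0], [-2.0, 0.0]])
Q = cayley(A)
```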
In this paper, we compute the spectral norms of matrices related to integer sequences, and we give two examples involving Fibonacci, Lucas, Pell and Perrin numbers.
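The spectral norm is the largest singular value of a matrix. As a small illustration only (the abstract does not specify the paper's matrices), the sketch below computes the spectral norm of a circulant matrix built from the first few Fibonacci numbers; since a circulant is normal with nonnegative entries here, its spectral norm equals the row sum.

```python
import numpy as np

def spectral_norm(A):
    """Spectral norm = largest singular value of A."""
    return np.linalg.norm(A, 2)

# Illustrative circulant from the first four Fibonacci numbers 1, 1, 2, 3
# (an assumed example, not the paper's construction).
f = [1.0, 1.0, 2.0, 3.0]
C = np.array([[f[(j - i) % 4] for j in range(4)] for i in range(4)])
# Row sum is 7, so spectral_norm(C) == 7 for this nonnegative circulant.
```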