The incidence of neurodegenerative diseases has shown an increasing trend. These conditions typically cause progressive functional disability. Identification of robust biomarkers of neurodegenerative diseases is a key imperative, both to facilitate early identification of pathological features and to foster a better understanding of the pathogenetic mechanisms of individual diseases. Diffusion tensor imaging (DTI) is the most widely used diffusion MRI technique for assessment of neurodegenerative diseases. The DTI parameters are promising biomarkers for evaluation of microstructural changes; however, some limitations of DTI restrict its wider clinical use. Newer diffusion MRI techniques, such as diffusion kurtosis imaging (DKI), bi‐tensor DTI, and neurite orientation dispersion and density imaging (NODDI), have been shown to add value to DTI for evaluation of neurodegenerative diseases. In this review article, we summarize the key technical aspects and provide an overview of the current state of knowledge regarding the role of DKI, bi‐tensor DTI, and NODDI as biomarkers of microstructural changes in representative neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, amyotrophic lateral sclerosis, and Huntington's disease.
Level of Evidence: 5. Technical Efficacy: Stage 2. J. MAGN. RESON. IMAGING 2020;52:1620–1636.
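The scalar DTI parameters this abstract refers to are typically mean diffusivity (MD) and fractional anisotropy (FA), computed from the eigenvalues of the fitted diffusion tensor. A minimal sketch with standard formulas and synthetic tensors (the example eigenvalues are illustrative, not patient data):

```python
import numpy as np

def dti_scalars(D):
    """Mean diffusivity (MD) and fractional anisotropy (FA) from a
    3x3 symmetric diffusion tensor D (units mm^2/s). Standard formulas;
    the input tensors below are synthetic, for illustration only."""
    lam = np.linalg.eigvalsh(D)          # eigenvalues l1 <= l2 <= l3
    md = lam.mean()                      # MD = (l1 + l2 + l3) / 3
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Isotropic diffusion (CSF-like): FA = 0
print(dti_scalars(np.diag([3e-3, 3e-3, 3e-3])))
# Strongly anisotropic diffusion (coherent white matter-like): FA near 1
print(dti_scalars(np.diag([1.7e-3, 0.2e-3, 0.2e-3])))
```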
Most Tensor Problems Are NP-Hard
HILLAR, Christopher J; LIM, Lek-Heng
Journal of the ACM, 11/2013, Volume 60, Issue 6
Journal article; peer-reviewed; open access
We prove that multilinear (tensor) analogues of many efficiently computable problems in numerical linear algebra are NP-hard. Our list includes: determining the feasibility of a system of bilinear equations; deciding whether a 3-tensor possesses a given eigenvalue, singular value, or spectral norm; approximating an eigenvalue, eigenvector, singular vector, or the spectral norm; and determining the rank or best rank-1 approximation of a 3-tensor. Furthermore, we show that restricting these problems to symmetric tensors does not alleviate their NP-hardness. We also explain how deciding nonnegative definiteness of a symmetric 4-tensor is NP-hard and how computing the combinatorial hyperdeterminant is NP-, #P-, and VNP-hard.
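The rank-1 approximation problem mentioned in this abstract is NP-hard in the worst case, but in practice it is attacked with local heuristics such as higher-order power iteration (an alternating-least-squares scheme, not part of this paper). A sketch of that heuristic, which only guarantees a locally optimal fit:

```python
import numpy as np

def rank1_power(T, iters=200, seed=0):
    """Higher-order power iteration for a rank-1 approximation
    sigma * (a o b o c) of a 3-tensor T. A standard heuristic:
    since the *best* rank-1 approximation is NP-hard, this can only
    promise a locally optimal fit, not a global one."""
    rng = np.random.default_rng(seed)
    a, b, c = (rng.standard_normal(n) for n in T.shape)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    sigma = np.einsum('ijk,i,j,k->', T, a, b, c)
    return sigma, a, b, c

# On an exactly rank-1 tensor the iteration recovers it (up to sign).
u, v, w = np.ones(3), np.arange(1., 4.), np.array([1., -1., 2.])
T = np.einsum('i,j,k->ijk', u, v, w)
sigma, a, b, c = rank1_power(T)
approx = sigma * np.einsum('i,j,k->ijk', a, b, c)
print(np.allclose(approx, T))  # True for this easy instance
```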
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
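The "well-balanced matricization scheme" behind the TT rank splits the first k modes against the remaining ones. A minimal sketch of computing those matricization ranks (illustrative only; the paper's algorithms minimize surrogates of these ranks rather than computing them directly):

```python
import numpy as np

def tt_ranks(T):
    """Ranks of the balanced matricizations
    T_[k] = reshape(T, (n1*...*nk, n_{k+1}*...*nN)), k = 1..N-1,
    whose vector of ranks defines the TT rank used by SiLRTC-TT/TMac-TT."""
    dims = T.shape
    return [np.linalg.matrix_rank(T.reshape(int(np.prod(dims[:k])), -1))
            for k in range(1, len(dims))]

# A separable (outer-product) 4-way tensor has TT rank (1, 1, 1) ...
a, b, c, d = np.arange(1., 3.), np.arange(1., 4.), np.arange(1., 5.), np.arange(1., 3.)
T = np.einsum('i,j,k,l->ijkl', a, b, c, d)
print(tt_ranks(T))  # [1, 1, 1]
# ... while a generic random tensor attains much larger ranks.
print(tt_ranks(np.random.default_rng(0).standard_normal((2, 3, 4, 2))))
```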
In this paper, firstly, we introduce the variant tensor splittings, and present some equivalent conditions for a strong M-tensor based on the tensor splitting. Secondly, the existence and uniqueness conditions of the solution for multi-linear systems are given. Thirdly, we propose some tensor splitting algorithms for solving multi-linear systems with coefficient tensor being a strong M-tensor. As an application, a tensor splitting algorithm for solving the multi-linear model of higher order Markov chains is proposed. Numerical examples are given to demonstrate the efficiency of the proposed algorithms.
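One common tensor splitting for such systems is A = D − B with D the diagonal part, giving a Jacobi-type fixed-point iteration. A sketch for a 3rd-order strong M-tensor system A x² = b with positive b (an assumption-laden illustration, not the paper's exact algorithm):

```python
import numpy as np

def jacobi_m_tensor(A, b, iters=200):
    """Jacobi-type splitting iteration for the multi-linear system
    A x^{m-1} = b with a 3rd-order strong M-tensor A and b > 0.
    Splitting A = D - B, D = diagonal part; each step solves
    D y^2 = b + B x^2 for the next iterate y."""
    n = b.size
    diag = np.array([A[i, i, i] for i in range(n)])
    x = np.ones(n)
    for _ in range(iters):
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)   # (A x^2)_i
        Bx2 = diag * x**2 - Ax2                  # B x^2 = D x^2 - A x^2
        x = np.sqrt((b + Bx2) / diag)
    return x

# Strong M-tensor: diagonal 2, off-diagonal entries -0.1 (diagonally dominant).
n = 2
A = -0.1 * np.ones((n, n, n))
for i in range(n):
    A[i, i, i] = 2.0
b = np.ones(n)
x = jacobi_m_tensor(A, b)
print(np.allclose(np.einsum('ijk,j,k->i', A, x, x), b))  # residual vanishes
```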
•Define a new tensor unfolding to unfold an N-way tensor into a three-way tensor.
•Propose a novel tensor rank for N-way tensors based on the new tensor unfolding.
•Establish a convex relaxation for efficiently minimizing the proposed tensor rank.
•Apply the proposed relaxation to tensor recovery problems with an ADMM-based solver.
The recent popular tensor tubal rank, defined based on tensor singular value decomposition (t-SVD), yields promising results. However, its framework is applicable only to three-way tensors and lacks the flexibility necessary to handle different correlations along different modes. To tackle these two issues, we define a new tensor unfolding operator, named mode-k1k2 tensor unfolding, as the process of lexicographically stacking all mode-k1k2 slices of an N-way tensor into a three-way tensor, which is a three-way extension of the well-known mode-k tensor matricization. On this basis, we define a novel tensor rank, named the tensor N-tubal rank, as a vector consisting of the tubal ranks of all mode-k1k2 unfolding tensors, to depict the correlations along different modes. To efficiently minimize the proposed N-tubal rank, we establish its convex relaxation: the weighted sum of the tensor nuclear norm (WSTNN). Then, we apply the WSTNN to low-rank tensor completion (LRTC) and tensor robust principal component analysis (TRPCA). The corresponding WSTNN-based LRTC and TRPCA models are proposed, and two efficient alternating direction method of multipliers (ADMM)-based algorithms are developed to solve the proposed models. Numerical experiments demonstrate that the proposed models significantly outperform the compared methods.
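The tubal rank that this work generalizes can be sketched concretely: the t-product multiplies frontal slices in the Fourier domain along mode 3, and the tubal rank is the largest rank among those Fourier-domain slices. An illustrative sketch under those standard definitions:

```python
import numpy as np

def t_product(A, B):
    """t-product of three-way tensors (circular convolution along mode 3),
    computed slice-wise in the Fourier domain."""
    Af, Bf = np.fft.fft(A, axis=2), np.fft.fft(B, axis=2)
    Cf = np.einsum('ijt,jkt->ikt', Af, Bf)
    return np.real(np.fft.ifft(Cf, axis=2))

def tubal_rank(T, tol=1e-10):
    """Tensor tubal rank from the t-SVD: the largest matrix rank among
    the frontal slices of fft(T, axis=2)."""
    Tf = np.fft.fft(T, axis=2)
    return max(np.linalg.matrix_rank(Tf[:, :, t], tol=tol)
               for t in range(T.shape[2]))

# A t-product of (5 x 2 x 4) and (2 x 6 x 4) tensors has tubal rank <= 2.
rng = np.random.default_rng(0)
C = t_product(rng.standard_normal((5, 2, 4)), rng.standard_normal((2, 6, 4)))
print(tubal_rank(C))  # 2
```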
In this paper, we present the definition of the generalized tensor function according to the tensor singular value decomposition (T-SVD) based on the tensor T-product. We also introduce the compact singular value decomposition (T-CSVD) of tensors, from which the projection operators and Moore-Penrose inverse of tensors are obtained. We establish the Cauchy integral formula for tensors by using partial isometry tensors and apply it to the solution of tensor equations. We then establish the generalized tensor power and the Taylor expansion of tensors, and list explicit generalized tensor functions. We define tensor bilinear and sesquilinear forms and propose theorems on structures preserved by generalized tensor functions. For complex tensors, we establish an isomorphism between complex tensors and real tensors. In the last part of the paper, we show that the block circulant operator establishes an isomorphism between tensors and matrices; this isomorphism is used to prove that the F-stochastic structure is invariant under generalized tensor functions. Finally, the concept of invariant tensor cones is introduced.
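The block circulant isomorphism mentioned at the end of this abstract can be checked numerically: bcirc maps the t-product of tensors to the ordinary product of their block circulant matrices. A sketch under the standard definitions of bcirc and the t-product:

```python
import numpy as np

def bcirc(A):
    """Block circulant matrix of an n1 x n2 x n3 tensor: block (r, c)
    is the frontal slice A[:, :, (r - c) mod n3]."""
    n1, n2, n3 = A.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for r in range(n3):
        for c in range(n3):
            M[r*n1:(r+1)*n1, c*n2:(c+1)*n2] = A[:, :, (r - c) % n3]
    return M

def t_product(A, B):
    """t-product via slice-wise products in the Fourier domain."""
    Cf = np.einsum('ijt,jkt->ikt', np.fft.fft(A, axis=2), np.fft.fft(B, axis=2))
    return np.real(np.fft.ifft(Cf, axis=2))

# Homomorphism property: bcirc(A * B) == bcirc(A) @ bcirc(B)
rng = np.random.default_rng(1)
A, B = rng.standard_normal((2, 3, 4)), rng.standard_normal((3, 2, 4))
print(np.allclose(bcirc(t_product(A, B)), bcirc(A) @ bcirc(B)))  # True
```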
P- and P0-matrix classes have wide applications in mathematical analysis, linear and nonlinear complementarity problems, etc., since they contain many important special matrices, such as positive (semi-)definite matrices, M-matrices, diagonally dominant matrices, etc. By modifying the existing definitions of P- and P0-tensors, which work only for even-order tensors, in this paper we propose a homogeneous formula for the definition of P- and P0-tensors. The proposed P- and P0-tensor classes coincide with the existing ones of even order and include many important structured tensors of odd order. We show that many checkable classes of structured tensors, such as the nonsingular M-tensors, the nonsingular H-tensors with positive diagonal entries, and the strictly diagonally dominant tensors with positive diagonal entries, are P-tensors under the new definition, regardless of whether the order is even or odd. In the odd-order case, our definition of P0-tensors can, to some extent, be regarded as an extension of positive semi-definite (PSD) tensors. The theoretical applications of P- and P0-tensors under the new definition to tensor complementarity problems and spectral hypergraph theory are also studied.
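A randomized sanity check can illustrate (but never prove) such a P-tensor property. The sketch below assumes a homogeneous condition of the form max_i x_i^{m-1}(A x^{m-1})_i > 0 for all x ≠ 0, which may differ in detail from the paper's exact formula; the test tensor is strictly diagonally dominant with positive diagonal entries, one of the checkable classes the abstract names:

```python
import numpy as np

def looks_like_p_tensor(A, trials=2000, seed=0):
    """Randomized, necessary-only check of a homogeneous P-tensor-style
    condition max_i x_i^{m-1} (A x^{m-1})_i > 0 for a 3rd-order tensor
    (m = 3, so x_i^{m-1} = x_i^2). Sampling cannot certify the property;
    it can only fail to refute it."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.standard_normal(A.shape[0])
        Ax2 = np.einsum('ijk,j,k->i', A, x, x)
        if np.max(x**2 * Ax2) <= 0:
            return False
    return True

# Strictly diagonally dominant, positive diagonal: diag 1.0 vs. 8 * 0.05 off-diag.
n = 3
A = 0.05 * np.ones((n, n, n))
for i in range(n):
    A[i, i, i] = 1.0
print(looks_like_p_tensor(A))  # no counterexample found
```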
With the emergence of various tensor data, tensor completion from one-bit measurements has received widespread attention as a fundamental inverse problem. Since tensor rank is a crucial measure of the intrinsic structure in many tensor data and its definition is not yet unique, many convex surrogates of tensor rank have been proposed to solve this problem, offering the merits of computational tractability and reliable theoretical guarantees. In this paper, a novel tensor max-norm is introduced by approximating the low-rankness of each frontal slice in a transformed 3-order tensor, and its high-order extension is also discussed. Then, for one-bit tensor completion, an estimator related to the proposed tensor max-norm and another estimator involving a hybrid of the tensor max-norm and the tensor nuclear norm are presented, where the first estimator can be considered a special case of the second. A statistical analysis of upper bounds on the recovery error of the two estimators is also established. The theoretical results indicate that the upper bound of the second estimator improves on that of the first by a gap of order \mathcal{O}\big(\sqrt{\log((n_1+n_2)n_3)}\big). In addition, a lower bound on the recovery error of the worst-case estimator is provided to show that the two estimators are nearly order-optimal. Furthermore, an algorithm based on the alternating direction method of multipliers (ADMM) and semidefinite programming (SDP) is developed to solve the estimation models. The effectiveness of the proposed approach is verified through simulated experiments and a practical application to recommender systems.
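The one-bit measurement model underlying this line of work can be sketched concretely: each observed entry reveals only the sign of the underlying value plus noise, on a random subset of indices. The sketch below is a generic illustration of that observation model (dimensions, noise level, and the CP-style low-rank ground truth are all arbitrary assumptions), not the paper's estimators:

```python
import numpy as np

def one_bit_observe(X, frac=0.3, sigma=0.5, seed=0):
    """One-bit measurement model: on a random index set Omega
    (sampling fraction `frac`), observe sign(X_ijk + noise);
    elsewhere record 0 for 'unobserved'."""
    rng = np.random.default_rng(seed)
    mask = rng.random(X.shape) < frac            # observed index set Omega
    Y = np.sign(X + sigma * rng.standard_normal(X.shape))
    return np.where(mask, Y, 0.0), mask

# A low-rank (CP rank-2) ground-truth tensor, purely for illustration.
rng = np.random.default_rng(1)
X = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((n, 2)) for n in (8, 9, 10)))
Y, mask = one_bit_observe(X)
print(Y.shape, round(mask.mean(), 3))  # observed entries carry only +/-1
```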