The widespread use of multisensor technology and the emergence of big data sets have brought the need for more versatile tools to represent higher-order data with multiple aspects and high dimensionality. Data in the form of multidimensional arrays, also referred to as tensors, arise in a variety of applications including chemometrics, hyperspectral imaging, high-resolution video, neuroimaging, biometrics, and social network analysis. Early multiway data analysis approaches reformatted such tensor data as large vectors or matrices and then resorted to dimensionality reduction methods developed for classical two-way analysis, such as principal component analysis (PCA). However, conventional PCA cannot discover the hidden components within multiway data. To this end, tensor decomposition methods have been proposed which are flexible in the choice of constraints and extract more general latent components. In this paper, we review the major tensor decomposition methods with a focus on problems targeted by classical PCA. In particular, we present tensor methods that aim to solve three important challenges typically addressed by PCA: dimensionality reduction, i.e., low-rank tensor approximation; supervised learning, i.e., learning linear subspaces for feature extraction; and robust low-rank tensor recovery. We also provide experimental results comparing different tensor models for both dimensionality reduction and supervised learning applications.
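To make the contrast with matrix methods concrete, the following is a minimal sketch of a truncated higher-order SVD (HOSVD), one of the standard Tucker-type decompositions surveyed in work of this kind. It operates on each mode of the tensor directly rather than on a flattened matrix; the shapes and ranks below are illustrative choices, not values from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move `mode` to the front, flatten the remaining modes.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    # Truncated HOSVD: one factor matrix per mode from the SVD of each
    # unfolding, then project T onto those subspaces to obtain the core.
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Contract mode `mode` with the leading left singular vectors.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((6, 7, 8))
core, factors = hosvd(T, (3, 3, 3))
print(core.shape)  # (3, 3, 3)
```

Unlike PCA on a vectorized copy of the data, the factor matrices here retain the identity of each mode (e.g., space, time, channel), which is what allows tensor methods to extract mode-specific latent components.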
In this paper, modified variants of the sparse Fourier transform algorithms from Iwen (2010) are presented which improve on the approximation error bounds of the original algorithms. In addition, simple methods for extending the improved sparse Fourier transforms to higher-dimensional settings are developed. As a consequence, approximate Fourier transforms are obtained which will identify a near-optimal $k$-term Fourier series for any given input function, $f: [0, 2\pi]^D \to \mathbb{C}$, in $O(k^2 \cdot D^4)$ time (neglecting logarithmic factors). Faster randomized Fourier algorithm variants with runtime complexities that scale linearly in the sparsity parameter $k$ are also presented.
Let $M$ be a smooth $d$-dimensional submanifold of $\mathbb{R}^N$ with boundary that is equipped with the Euclidean (chordal) metric, and choose $m \le N$. In this paper we consider the probability that a random matrix $A \in \mathbb{R}^{m \times N}$ will serve as a bi-Lipschitz function $A: M \to \mathbb{R}^m$ with bi-Lipschitz constants close to one for three different types of distributions on the $m \times N$ matrices $A$, including two whose realizations are guaranteed to have fast matrix-vector multiplies. In doing so we generalize prior randomized metric space embedding results of this type for submanifolds of $\mathbb{R}^N$ by allowing for the presence of boundary while also retaining, and in some cases improving, prior lower bounds on the achievable embedding dimensions $m$ for which one can expect small distortion with high probability. In particular, motivated by recent modewise embedding constructions for tensor data, herein we present a new class of highly structured distributions on matrices which outperform prior structured matrix distributions for embedding sufficiently low-dimensional submanifolds of $\mathbb{R}^N$ (with $d \lesssim \sqrt{N}$) with respect to both achievable embedding dimension and computationally efficient realizations. As a consequence we are able to present, for example, a general new class of Johnson–Lindenstrauss embedding matrices for $O(\log^c N)$-dimensional submanifolds of $\mathbb{R}^N$ which enjoy $O(N \log(\log N))$-time matrix-vector multiplications.
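As background for the structured constructions discussed above, the following is a minimal sketch of a classical fast Johnson–Lindenstrauss-type embedding (random sign flips followed by a unitary FFT and random row subsampling). This is the well-known subsampled randomized Fourier construction with an $O(N \log N)$ matrix-vector multiply, not the paper's new distribution; the dimensions and seed are illustrative.

```python
import numpy as np

def fast_jl(x, m, seed=0):
    # Subsampled randomized Fourier embedding: flip signs, apply a unitary
    # DFT (norm-preserving), keep m random rows, and rescale by sqrt(N/m).
    rng = np.random.default_rng(seed)
    N = x.shape[0]
    signs = rng.choice([-1.0, 1.0], size=N)
    fx = np.fft.fft(signs * x, norm="ortho")   # O(N log N) via the FFT
    rows = rng.choice(N, size=m, replace=False)
    return np.sqrt(N / m) * fx[rows]

rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
y = fast_jl(x, 64)
# Euclidean norm is roughly preserved (the embedding is complex-valued here):
print(np.linalg.norm(y) / np.linalg.norm(x))
```

The point of constructions like this, and of the new class proposed in the paper, is that the embedding never forms or stores a dense $m \times N$ matrix: each application costs a fast transform plus diagonal and subsampling operations.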
In this paper, a deterministic sparse Fourier transform algorithm is presented which breaks the quadratic-in-sparsity runtime bottleneck for a large class of periodic functions exhibiting structured frequency support. These functions include, e.g., the often considered set of block frequency sparse functions of the form
$$f(x) = \sum_{j=1}^{n} \sum_{k=0}^{B-1} c_{\omega_j + k}\, e^{i(\omega_j + k)x}, \qquad \{\omega_1, \dots, \omega_n\} \subset \left[-\tfrac{N}{2}, \tfrac{N}{2}\right) \cap \mathbb{Z}$$
as a simple subclass. Theoretical error bounds in combination with numerical experiments demonstrate that the newly proposed algorithms are both fast and robust to noise. In particular, they outperform standard sparse Fourier transforms in the rapid recovery of block frequency sparse functions of the type above.
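For illustration, the snippet below constructs a block frequency sparse function of the displayed form and recovers its support by thresholding a dense length-$N$ FFT. This dense approach is the $O(N \log N)$ baseline that structured sparse Fourier transforms aim to beat in runtime; all parameter values are toy choices.

```python
import numpy as np

N, n, B = 256, 3, 4          # bandwidth, number of blocks, block length
rng = np.random.default_rng(0)
# Block starts omega_1, ..., omega_n in [-N/2, N/2); spaced so blocks are disjoint.
starts = rng.choice(np.arange(-N // 2, N // 2 - B, 2 * B), size=n, replace=False)
freqs = np.concatenate([np.arange(w, w + B) for w in starts])
coeffs = rng.standard_normal(n * B) + 1j * rng.standard_normal(n * B)

# Sample f(x) = sum_j sum_k c_{omega_j+k} exp(i (omega_j+k) x) on a uniform grid.
x = 2 * np.pi * np.arange(N) / N
f = sum(c * np.exp(1j * w * x) for c, w in zip(coeffs, freqs))

# Dense baseline: full FFT, then keep the n*B largest-magnitude bins.
fhat = np.fft.fft(f) / N
found = np.fft.fftfreq(N, d=1 / N)[np.argsort(-np.abs(fhat))[: n * B]].astype(int)
print(sorted(found) == sorted(freqs))  # exact support recovery in the noiseless case
```

The algorithms in the paper exploit the block structure so that the recovery cost scales with the number of active frequencies rather than with $N$.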
In this paper we consider sparse Fourier transform (SFT) algorithms for approximately computing the best $s$-term approximation of the discrete Fourier transform (DFT) $\hat{f} \in \mathbb{C}^N$ of any given input vector $f \in \mathbb{C}^N$ in just $(s \log N)^{O(1)}$-time using only a similarly small number of entries of $f$. In particular, we present a deterministic SFT algorithm which is guaranteed to always recover a near best $s$-term approximation of the DFT of any given input vector $f \in \mathbb{C}^N$ in $O\left(s^2 \log^{\frac{11}{2}}(N)\right)$-time. Unlike previous deterministic results of this kind, our deterministic result holds for both arbitrary vectors $f \in \mathbb{C}^N$ and vector lengths $N$. In addition to these deterministic SFT results, we also develop several new publicly available randomized SFT implementations for approximately computing $\hat{f}$ from $f$ using the same general techniques. The best of these new implementations is shown to outperform existing discrete sparse Fourier transform methods with respect to both runtime and noise robustness for large vector lengths $N$.
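The benchmark quantity in results of this kind is the best $s$-term approximation of $\hat{f}$, which a dense FFT can compute exactly in $O(N \log N)$ time. The sketch below shows that benchmark on a noisy, approximately sparse spectrum; an SFT aims for a comparable error while reading only $(s \log N)^{O(1)}$ entries of $f$. All sizes here are illustrative.

```python
import numpy as np

def best_s_term(fhat, s):
    # Keep the s largest-magnitude DFT coefficients, zero out the rest.
    out = np.zeros_like(fhat)
    idx = np.argsort(-np.abs(fhat))[:s]
    out[idx] = fhat[idx]
    return out

rng = np.random.default_rng(0)
N, s = 1024, 10
# An approximately s-sparse spectrum: s large coefficients plus small dense noise.
fhat = 0.01 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
support = rng.choice(N, size=s, replace=False)
fhat[support] += 3.0 + rng.standard_normal(s) + 1j * rng.standard_normal(s)
f = np.fft.ifft(fhat)  # the input vector an SFT would sample from

fhat_s = best_s_term(np.fft.fft(f), s)
err = np.linalg.norm(fhat - fhat_s) / np.linalg.norm(fhat)
print(err)  # small: the s dominant terms capture nearly all of the energy
```

A "near best" guarantee means the algorithm's output error is bounded by a constant (or slowly growing) factor times this optimal truncation error.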
In this paper we develop fast and memory efficient numerical methods for learning functions of many variables that admit sparse representations in terms of general bounded orthonormal tensor product bases. Such functions appear in many applications including, e.g., various Uncertainty Quantification (UQ) problems involving the solution of parametric PDEs that are approximately sparse in Chebyshev or Legendre product bases (Chkifa et al., Polynomial approximation via compressed sensing of high-dimensional functions on lower sets, arXiv:1602.05823, 2016; Rauhut and Schwab, Math Comput 86(304):661–700, 2017). We expect that our results provide a starting point for a new line of research on sublinear-time solution techniques for UQ applications of the type above which will eventually be able to scale to significantly higher-dimensional problems than are currently computationally feasible. More concretely, let $B$ be a finite Bounded Orthonormal Product Basis (BOPB) of cardinality $|B| = N$. Herein we develop methods that rapidly approximate any function $f$ that is sparse in the BOPB, that is, $f: \mathcal{D} \subset \mathbb{R}^D \to \mathbb{C}$ of the form $f(\mathbf{x}) = \sum_{b \in S} c_b \cdot b(\mathbf{x})$ with $S \subset B$ of cardinality $|S| = s \ll N$. Our method adapts the CoSaMP algorithm (Needell and Tropp, Appl Comput Harmon Anal 26(3):301–321, 2009) to use additional function samples from $f$ along a randomly constructed grid $G \subset \mathbb{R}^D$ with universal approximation properties in order to rapidly identify the multi-indices of the most dominant basis functions in $S$ component by component during each CoSaMP iteration. It has a runtime of just $(s \log N)^{O(1)}$, uses only $(s \log N)^{O(1)}$ function evaluations on the fixed and nonadaptive grid $G$, and requires no more than $(s \log N)^{O(1)}$ bits of memory. We emphasize that nothing about $S$ or any of the coefficients $c_b \in \mathbb{C}$ is assumed in advance other than that $S \subset B$ has $|S| \le s$. Both $S$ and its related coefficients $c_b$ will be learned from the given function evaluations by the developed method. For $s \ll N$, the runtime $(s \log N)^{O(1)}$ will be less than what is required to simply enumerate the elements of the basis $B$; thus our method is the first approach applicable in a general BOPB framework that falls into the class referred to as sublinear-time. This and the similarly reduced sample and memory requirements set our algorithm apart from previous works based on standard compressive sensing algorithms such as basis pursuit, which typically store and utilize full intermediate basis representations of size $\Omega(N)$ during the solution process.
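For readers unfamiliar with the starting point being adapted, the following is a minimal sketch of the classical CoSaMP iteration of Needell and Tropp for a generic linear model $y = Ax$: correlate the residual, merge $2s$ new candidates with the current support, solve a least-squares problem, and prune back to $s$ terms. This is the dense $\Omega(N)$-memory algorithm; the paper's contribution is replacing the dense correlation step with component-by-component identification so that the cost stays $(s \log N)^{O(1)}$. The problem sizes below are toy choices.

```python
import numpy as np

def cosamp(A, y, s, iters=20):
    # Classical CoSaMP (Needell & Tropp, 2009) for y = A x with x s-sparse.
    m, N = A.shape
    x = np.zeros(N)
    for _ in range(iters):
        proxy = A.T @ (y - A @ x)                     # correlate the residual
        omega = np.argsort(-np.abs(proxy))[: 2 * s]   # 2s strongest candidates
        T = np.union1d(omega, np.flatnonzero(x))      # merge with current support
        b = np.zeros(N)
        b[T], *_ = np.linalg.lstsq(A[:, T], y, rcond=None)  # least squares on T
        keep = np.argsort(-np.abs(b))[:s]             # prune back to s terms
        x = np.zeros(N)
        x[keep] = b[keep]
    return x

rng = np.random.default_rng(0)
m, N, s = 60, 200, 5
A = rng.standard_normal((m, N)) / np.sqrt(m)
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp(A, A @ x_true, s)
print(np.linalg.norm(x_hat - x_true))  # near zero for noiseless measurements
```

Note that every step above touches all $N$ columns of $A$; avoiding exactly that is what makes the BOPB method sublinear-time.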
In this paper we present the first known deterministic algorithm for the construction of multiple rank-1 lattices for the approximation of periodic functions of many variables. The algorithm works by converting a potentially large reconstructing single rank-1 lattice for some $d$-dimensional frequency set $I \subset \{0, \dots, N-1\}^d$ into a collection of much smaller rank-1 lattices which allow for accurate and efficient reconstruction of trigonometric polynomials with coefficients in $I$ (and, therefore, for the approximation of multivariate periodic functions). The total number of sampling points in the resulting multiple rank-1 lattices is theoretically shown to be less than $O(|I| \log^2(N|I|))$ with constants independent of $d$, and by performing one-dimensional fast Fourier transforms on samples of trigonometric polynomials with Fourier support in $I$ at these points, we obtain exact reconstruction of all Fourier coefficients in fewer than $O(d\,|I| \log^4(N|I|))$ total operations. Additionally, we present a second multiple rank-1 lattice construction algorithm which constructs lattices with even fewer sampling points at the cost of only being able to reconstruct exact trigonometric polynomials rather than having additional theoretical approximation guarantees. Both algorithms are tested numerically and surpass the theoretical bounds. Notably, we observe that the oversampling factors #samples/$|I|$ appear to grow only logarithmically in $|I|$ for the first algorithm and appear near-optimally bounded by four in the second algorithm.
This paper studies the problem of recovering a signal from one-bit compressed sensing measurements under a manifold model; that is, assuming that the signal lies on or near a manifold of low intrinsic dimension. We provide a convex recovery method based on the Geometric Multi-Resolution Analysis and prove recovery guarantees with a near-optimal scaling in the intrinsic manifold dimension. Our method is the first tractable algorithm with such guarantees for this setting. The results are complemented by numerical experiments confirming the validity of our approach.
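To illustrate the measurement model only: one-bit compressed sensing observes $y = \operatorname{sign}(Ax)$, which discards all magnitude information, so only the direction of $x$ can be recovered. The snippet below uses the classical normalized back-projection estimate as a simple stand-in; it is not the paper's GMRA-based convex method, which additionally exploits the manifold model, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 50, 4000
x = rng.standard_normal(N)
x /= np.linalg.norm(x)            # scale is unrecoverable, so fix norm 1

A = rng.standard_normal((m, N))
y = np.sign(A @ x)                # one-bit measurements: only signs survive

# Naive estimator: normalized back-projection A^T y, a textbook baseline
# (the paper's convex GMRA-based recovery replaces this step).
x_hat = A.T @ y
x_hat /= np.linalg.norm(x_hat)
print(x @ x_hat)                  # correlation with the true direction
```

With enough measurements the correlation approaches one; structure-aware methods like the one in the paper achieve this with far fewer measurements by scaling with the intrinsic dimension rather than with $N$.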
This paper studies the problem of recovering a signal with a sparse representation in a given orthonormal basis using as few noisy observations as possible. Herein, observations are subject to the type of background clutter noise encountered in radar applications. Given this model, this paper proves for the first time that highly sparse signals contaminated with Gaussian background noise can be recovered by adaptive methods using fewer noisy linear measurements than required by any possible recovery method based on nonadaptive Gaussian measurement ensembles.