We have implemented a new way of computing three-point correlation functions. It is based on a factorization of the entire correlation function into two parts which are evaluated with open spin- (and, to some extent, flavor-) indices. This allows us to estimate the two contributions simultaneously for many different initial and final states and momenta, with little computational overhead. We explain this factorization as well as its efficient implementation in a new library, written to provide the necessary functionality on modern parallel architectures and CPUs, including Intel's Xeon Phi series.
Inspired by a method of La Bretèche relying on a unique factorisation, we generalise work of Blomer, Brüdern and Salberger to prove Manin's conjecture, in the strong form conjectured by Peyre, for an infinite family of varieties of higher dimension. The varieties under consideration in this paper are the singular projective varieties defined by the following equation
$$
x_1 y_2y_3\cdots y_n+x_2y_1y_3 \cdots y_n+ \cdots+x_n y_1 y_2 \cdots y_{n-1}=0
$$
in $\mathbb{P}^{2n-1}_{\mathbb{Q}}$ for all $n \geqslant 3$. This paper comes with an Appendix by Per Salberger constructing a crepant resolution of the above varieties.
A 1-factorization of the complete multigraph $\lambda K_{2n}$ is said to be indecomposable if it cannot be represented as the union of 1-factorizations of $\lambda_0 K_{2n}$ and $(\lambda - \lambda_0) K_{2n}$, where $\lambda_0 < \lambda$. It is said to be simple if no 1-factor is repeated. For every $n \geq 9$ and for every $(n-2)/3 \leq \lambda \leq 2n$, we construct an indecomposable 1-factorization of $\lambda K_{2n}$ which is not simple. These 1-factorizations provide simple and indecomposable 1-factorizations of $\lambda K_{2s}$ for every $s \geq 18$ and $2 \leq \lambda \leq 2^{s/2-1}$. We also give a generalization of a result by Colbourn et al., which provides a simple and indecomposable 1-factorization of $\lambda K_{2n}$, where $2n = p^m + 1$, $\lambda = (p^m - 1)/2$, and $p$ is prime.
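As background for the objects above, the classical round-robin (circle-method) construction below produces a simple 1-factorization of $K_{2n}$ in the case $\lambda = 1$. This is standard material, not the paper's construction, and the function names are ours:

```python
def one_factorization(two_n):
    """Round-robin (circle-method) 1-factorization of the complete graph K_{2n}.

    Vertices are 0..2n-1; vertex 2n-1 acts as the fixed 'hub'.  Returns a
    list of 2n-1 one-factors, each a perfect matching of n disjoint edges.
    """
    m = two_n - 1                       # modulus for the rotating vertices
    factors = []
    for r in range(m):
        factor = [(r, m)]               # hub edge: the hub pairs with vertex r
        for k in range(1, two_n // 2):  # remaining edges are symmetric about r
            factor.append(((r + k) % m, (r - k) % m))
        factors.append(factor)
    return factors

def is_one_factorization(factors, two_n):
    """Check that every factor is a perfect matching and each edge occurs once."""
    seen = set()
    for factor in factors:
        touched = set()
        for u, v in factor:
            touched.update((u, v))
            seen.add(frozenset((u, v)))
        if touched != set(range(two_n)):
            return False
    return len(seen) == two_n * (two_n - 1) // 2

fs = one_factorization(10)
print(is_one_factorization(fs, 10))  # True: 9 factors of 5 edges each
```

Each non-hub edge $\{a, b\}$ of round $r$ satisfies $a + b \equiv 2r \pmod{2n-1}$, and since 2 is invertible modulo the odd number $2n-1$, no edge can appear in two different rounds.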
Factorization Machines with libFM
Rendle, Steffen
ACM Transactions on Intelligent Systems and Technology, Volume 3, Issue 3, May 2012
Journal Article, peer reviewed
Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a nontrivial task that requires a lot of expert knowledge: typically, a new model is developed, a learning algorithm is derived, and the approach has to be implemented.
Factorization machines (FM) are a generic approach, since they can mimic most factorization models just by feature engineering. This way, factorization machines combine the generality of feature engineering with the superiority of factorization models in estimating interactions between categorical variables over large domains.
libFM is a software implementation of factorization machines that features stochastic gradient descent (SGD) and alternating least-squares (ALS) optimization, as well as Bayesian inference using Markov chain Monte Carlo (MCMC). This article summarizes the recent research on factorization machines, both in terms of modeling and learning, provides extensions for the ALS and MCMC algorithms, and describes the software tool libFM.
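The model underlying libFM is the second-order factorization machine. The sketch below (our variable names, not libFM's API) shows the published prediction equation, with the pairwise term computed in $O(kn)$ rather than $O(kn^2)$ via the standard reformulation:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization-machine prediction (Rendle).

    y(x) = w0 + <w, x> + sum_{i<j} <V[i], V[j]> x_i x_j,
    where the pairwise sum is evaluated in O(k*n) using
    0.5 * sum_f [ (sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2 ].
    """
    linear = w0 + w @ x
    s = V.T @ x                    # shape (k,): per-factor weighted sums
    s2 = (V.T ** 2) @ (x ** 2)     # shape (k,): per-factor sums of squares
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 6, 3
x = rng.standard_normal(n)
w0, w, V = 0.1, rng.standard_normal(n), rng.standard_normal((n, k))

# Naive O(k*n^2) evaluation of the same model, for comparison
naive = w0 + w @ x + sum(V[i] @ V[j] * x[i] * x[j]
                         for i in range(n) for j in range(i + 1, n))
print(np.isclose(fm_predict(x, w0, w, V), naive))  # True
```

Because each interaction weight is the inner product $\langle V_i, V_j \rangle$, the model can estimate interactions between pairs of categorical levels that never co-occur in the training data, which is where plain polynomial regression fails.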
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, achieving state-of-the-art performance on image and video inpainting tasks. However, it requires computing the tensor singular value decomposition (t-SVD), which is computationally expensive and therefore scales poorly to naturally large tensor data. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be done more efficiently than computing the t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm converges to a Karush-Kuhn-Tucker point. Experimental results on synthetic data recovery and on image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our method over state-of-the-art approaches, including the TNN and matricization methods.
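The product of two 3-way tensors referred to above is the tensor-tensor product (t-product) that also underlies the t-SVD and TNN. Below is a minimal sketch of that core operation, computed slice-wise in the Fourier domain; this illustrates the product being maintained, not the paper's full completion algorithm, and the function names are ours:

```python
import numpy as np

def t_product(A, B):
    """t-product of 3-way tensors A (n1, r, n3) and B (r, n2, n3).

    Tube fibers are circularly convolved, which the FFT along the third
    axis turns into an independent matrix product per frequency slice.
    """
    Af = np.fft.fft(A, axis=2)
    Bf = np.fft.fft(B, axis=2)
    Cf = np.einsum('ikt,kjt->ijt', Af, Bf)   # slice-wise matmul
    return np.real(np.fft.ifft(Cf, axis=2))

def t_product_naive(A, B):
    """Direct definition: each C[i,j,:] is a sum of circular convolutions."""
    n1, r, n3 = A.shape
    n2 = B.shape[1]
    C = np.zeros((n1, n2, n3))
    for i in range(n1):
        for j in range(n2):
            for k in range(r):
                for t in range(n3):
                    for s in range(n3):
                        C[i, j, t] += A[i, k, s] * B[k, j, (t - s) % n3]
    return C

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 2, 5))   # "tall" factor of tubal rank <= 2
B = rng.standard_normal((2, 3, 5))   # "wide" factor of tubal rank <= 2
X = t_product(A, B)                  # low-tubal-rank tensor, no t-SVD needed
print(np.allclose(X, t_product_naive(A, B)))  # True
```

Keeping X in the factored form A * B is what lets a completion method update two small tensors per iteration instead of recomputing a full t-SVD of X.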
For a suitable irreducible base polynomial of degree k, a family of polynomials depending on f(x) is constructed with the following properties: there is exactly one irreducible factor for each divisor d of m, generalizing the factorization of $x^m - 1$ into cyclotomic polynomials; when the base polynomial is specialized, this coincides with the classical case. As an application, irreducible polynomials of degree 12, 24 and 24 are constructed having Galois groups of order matching their degrees.
Substantial progress has been made recently on developing provably accurate and efficient algorithms for low-rank matrix factorization via nonconvex optimization. While conventional wisdom often takes a dim view of nonconvex optimization algorithms due to their susceptibility to spurious local minima, simple iterative methods such as gradient descent have been remarkably successful in practice. The theoretical footings, however, had been largely lacking until recently. In this tutorial-style overview, we highlight the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees. We review two contrasting approaches: (1) two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and (2) global landscape analysis and initialization-free algorithms. Several canonical matrix factorization problems are discussed, including but not limited to matrix sensing, phase retrieval, matrix completion, blind deconvolution, and robust principal component analysis. Special care is taken to illustrate the key technical insights underlying their analyses. This article serves as a testament that the integrated consideration of optimization and statistics leads to fruitful research findings.
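The two-stage recipe can be made concrete on matrix completion, one of the canonical problems listed above: a spectral initialization from the rescaled observed entries, followed by gradient descent on the factored objective. The sketch below is illustrative only; problem sizes, step size, and iteration count are our choices, not prescriptions from the literature:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r truth
mask = rng.random((n, n)) < 0.5       # Omega: half the entries observed
p = mask.mean()                       # empirical observation rate

# Stage 1: spectral initialization.  The zero-filled matrix, rescaled by
# 1/p, is an unbiased estimate of M; its top-r SVD seeds the factors.
U, s, Vt = np.linalg.svd(np.where(mask, M, 0.0) / p)
X = U[:, :r] * np.sqrt(s[:r])         # balanced factorization X Y^T
Y = Vt[:r, :].T * np.sqrt(s[:r])

# Stage 2: gradient descent on f(X, Y) = 0.5 * ||P_Omega(X Y^T - M)||_F^2
eta = 0.5 / s[0]                      # conservative step size ~ 1/sigma_1
for _ in range(500):
    R = np.where(mask, X @ Y.T - M, 0.0)        # residual on Omega only
    X, Y = X - eta * R @ Y, Y - eta * R.T @ X   # simultaneous factor updates

print(np.linalg.norm(X @ Y.T - M) / np.linalg.norm(M) < 1e-2)
```

The point of the two stages is complementary: the spectral step lands the iterates in a basin where the nonconvex objective behaves benignly, and plain gradient descent then refines them without ever forming a full-matrix variable.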
Suppose that K is a field, $S = K[x_1, \ldots, x_n]$ is a polynomial ring or $S = K[[x_1, \ldots, x_n]]$ is a power series ring, and I is a monomial ideal of S. We study factorization properties of the ring $R = S/I$. In particular, we present conditions equivalent to R being présimplifiable, a bounded factorization ring, a finite factorization ring, or a unique factorization ring.