Symmetric Tensor Decompositions On Varieties
Abstract.
This paper discusses the problem of symmetric tensor decomposition on a given variety X: decomposing a symmetric tensor into a sum of tensor powers of vectors contained in X. In this paper, we first study geometric and algebraic properties of such decomposable tensors, which are crucial for the practical computation of such decompositions. For a given tensor, we also develop a criterion for the existence of a symmetric decomposition on X. Secondly, and most importantly, we propose a method for computing symmetric tensor decompositions on an arbitrary X. As a specific application, Vandermonde decompositions for nonsymmetric tensors can be computed by the proposed algorithm.
Key words and phrases:
symmetric tensor, numerical algorithm, decomposition, generating polynomial, generating matrix
2010 Mathematics Subject Classification:
15A69, 65F99
1. Introduction
Let be integers and be the complex field. Denote by the space of -dimensional complex tensors of order . For and an integral tuple , denotes the th entry of , where . The tensor is symmetric if
whenever is a permutation of . Let be the subspace of all symmetric tensors in . For a vector , denote by the th tensor power of , i.e., is the tensor such that . As shown in [9], for each , there exist vectors such that
(1.1) |
The above is called a symmetric tensor decomposition (STD) for . The smallest such is called the symmetric rank of , which we denote as . If , is called a rank- tensor and (1.1) is called a symmetric rank decomposition, which is also called a Waring decomposition in some references. The rank of a generic symmetric tensor is given by a formula of Alexander and Hirschowitz [2]. We refer to [9] for symmetric tensors and their symmetric ranks, and to [27, 30] for general tensors and their ranks.
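To make (1.1) concrete, here is a small numerical sketch (our own illustration in Python/NumPy, not part of the original text) that assembles a symmetric tensor as a sum of tensor powers of a few arbitrarily chosen vectors and verifies its symmetry; the dimension, order, and vectors are hypothetical.

```python
import itertools
from functools import reduce
import numpy as np

def tensor_power(v, m):
    """Return the m-th tensor (outer) power v ⊗ v ⊗ ... ⊗ v."""
    return reduce(np.multiply.outer, [v] * m)

n, m = 3, 3                                   # dimension and order (illustrative)
vs = [np.array([1.0, 2.0, -1.0]),             # arbitrary vectors u_1, ..., u_r
      np.array([0.0, 1.0, 1.0]),
      np.array([2.0, -1.0, 0.5])]
F = sum(tensor_power(v, m) for v in vs)       # F = u_1^{⊗m} + ... + u_r^{⊗m}, as in (1.1)

# Symmetry check: F is invariant under any permutation of its m indices.
assert all(np.allclose(F, np.transpose(F, p))
           for p in itertools.permutations(range(m)))
print(F.shape)   # (3, 3, 3)
```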
This paper concerns symmetric tensor decompositions on a given set. Let X be a homogeneous set, i.e., a set that is closed under scalar multiplication. If each vector in (1.1) belongs to X, then (1.1) is called a symmetric tensor decomposition on X (STDX) for . The STDX problem has been studied in applications for various choices of X. Symmetric tensor decompositions have broad applications in quantum physics [50], algebraic complexity theory [7, 26, 46, 51], and numerical analysis [34, 52]. More tensor applications can be found in [24].
When X is the whole space, the STDX is just the classical symmetric tensor decomposition, which has been studied extensively in the literature. Binary (i.e., two-dimensional) tensor decomposition problems were discussed in [8]. For higher dimensional tensors, catalecticant-type methods [23] are often used when the ranks are low. For general symmetric tensors, Brachat et al. [5] proposed a method using Hankel (and truncated Hankel) operators. It is equivalent to computing a new tensor whose order is higher but whose rank is the same as that of the original one. Oeding and Ottaviani [37] proposed to compute symmetric decompositions by Koszul flattenings, tensor eigenvectors and vector bundles. Other related work on computing symmetric tensor decompositions can be found in [3, 4]. For generic tensors of certain ranks, the symmetric tensor decomposition is unique. As shown in [16], a generic has a unique Waring decomposition if and only if
When is a generic tensor of subgeneric rank (i.e., the rank is smaller than the value given by the Alexander-Hirschowitz formula; see [2, 9]) and , the Waring decomposition is unique, with only three exceptions [6].
When X is the affine cone of a rational normal curve in the projective space , the STDX becomes a Vandermonde decomposition for symmetric tensors. Such a decomposition exists only for Hankel tensors, which were introduced in [38] for studying the harmonic retrieval problem. Hankel tensors were further discussed in [39]. Relations among various ranks of Hankel tensors were studied in [35]. More applications of Hankel tensors can be found in [44, 49].
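For illustration, the following NumPy sketch (ours, with arbitrary nodes and weights) builds a symmetric tensor from a Vandermonde decomposition, i.e., as a weighted sum of tensor powers of vectors of the form (1, t, t², ...), and checks the Hankel property that entries depend only on the sum of their indices.

```python
import itertools
from functools import reduce
import numpy as np

# A Vandermonde decomposition: F = sum_i lam_i * v(t_i)^{⊗m}, where
# v(t) = (1, t, t^2, ..., t^{n-1}); every such F is a Hankel tensor.
n, m = 3, 3
ts = [0.5, -1.0, 2.0]                    # arbitrary nodes
lams = [1.0, 2.0, -0.5]                  # arbitrary weights
vdm = lambda t: np.array([t**k for k in range(n)])
F = sum(lam * reduce(np.multiply.outer, [vdm(t)] * m) for lam, t in zip(lams, ts))

# Hankel property: entries with equal index sum i_1 + ... + i_m coincide.
by_sum = {}
for idx in itertools.product(range(n), repeat=m):
    by_sum.setdefault(sum(idx), []).append(F[idx])
assert all(np.allclose(vals, vals[0]) for vals in by_sum.values())
```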
When X is a Segre variety, the STDX becomes a Vandermonde decomposition for nonsymmetric tensors. This has broad applications in signal processing [12, 29, 36, 47]. Vandermonde decompositions for nonsymmetric tensors are closely related to secant varieties of Segre-Veronese varieties, which have been studied extensively [1, 25, 40]. In Subsection 3.3, we will discuss this question in more detail. Interestingly, it can be shown that every nonsymmetric tensor has a Vandermonde decomposition, which is different from the symmetric case.
Contributions
This paper focuses on computing symmetric tensor decompositions on a given set X. We assume that X is a variety given by homogeneous polynomial equations. Generally, symmetric tensor decompositions on X can be studied theoretically via secant varieties of the Veronese embedding of X [28, 42, 43]. From this viewpoint, one may expect to get polynomials which characterize the symmetric -rank of a given symmetric tensor. In this paper, we give a method for computing symmetric -rank decompositions. It is based on the tool of generating polynomials, which was recently introduced in [33]. The work [33] only discussed the case where X is the whole space. When X is a proper variety, the method in [33] does not work, because the vectors in the decomposition (1.1) are required to belong to X. We need to modify the approach in [33] by posing additional conditions on generating polynomials. For this purpose, we give a unified framework for computing symmetric tensor decompositions on X, which sheds light on the study of both theoretical and computational aspects of tensor decompositions.
The paper is organized as follows. Section 2 gives some basics for tensor decompositions. Section 3 studies some properties of symmetric tensor decompositions on X. Section 4 defines generating polynomials and generating matrices. It gives conditions ensuring that the computed vectors belong to the given set in symmetric tensor decompositions. Section 5 presents a unified framework to do the computation. Finally, Section 6 gives various examples to show how the proposed method works.
2. Preliminaries
Notation. The symbol ℕ (resp., ℝ, ℂ) denotes the set of nonnegative integers (resp., real, complex numbers). For , define . For a degree , denote the index set
For a real number , we denote by the smallest integer such that . Let and denote the ring of polynomials in and with complex coefficients. For a degree , denotes the subset of polynomials whose degrees are less than or equal to , and denotes the subset of forms whose degrees are equal to . The cardinality of a finite set is denoted as . For a finite set and a vector , denote
(2.1) |
the vector of polynomials in evaluated at . For a complex matrix , denotes its transpose and denotes its conjugate transpose. For a complex vector , denotes the standard Euclidean norm. The denotes the standard -th unit vector in . For two square matrices of the same dimension, their commutator is .
2.1. Equivalent descriptions for symmetric tensors
There is a one-to-one correspondence between symmetric tensors in and homogeneous polynomials of degree m in n variables. When is symmetric, we can equivalently use to label in the way that
(2.2) |
The symmetry guarantees that the labelling is well-defined. For , define the homogeneous polynomial (i.e., a form) in and of degree :
(2.3) |
If is labeled as in (2.2), then
where The decomposition (1.1) is equivalent to
For a polynomial and , define the operation
(2.4) |
where is labeled as in (2.2). For fixed , is a linear functional on , while for fixed , is a linear functional on .
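As a numerical sanity check of the correspondence described above (an illustrative sketch of ours, not the paper's code; it works with the full index labelling rather than the labelling (2.2)), the form associated with a sum of tensor powers evaluates to the corresponding sum of powers of linear forms.

```python
import itertools
from functools import reduce
import numpy as np

n, m = 3, 4
us = [np.random.randn(n) for _ in range(3)]
F = sum(reduce(np.multiply.outer, [u] * m) for u in us)   # symmetric tensor

def associated_form(F, x):
    """Evaluate the homogeneous form sum_{i1,...,im} F_{i1...im} x_{i1}···x_{im} at x."""
    return sum(F[idx] * np.prod([x[i] for i in idx])
               for idx in itertools.product(range(len(x)), repeat=F.ndim))

x = np.random.randn(n)
lhs = associated_form(F, x)
rhs = sum(np.dot(u, x) ** m for u in us)   # sum of (u_i^T x)^m
assert np.isclose(lhs, rhs)
```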
2.2. Algebraic varieties
A set is an ideal if and . For polynomials , let denote the smallest ideal that contains . A subset is called an affine variety if is the set of common zeros of some polynomials in . The vanishing ideal of is the ideal consisting of all polynomials that vanish identically on .
Two nonzero vectors in are equivalent if they are parallel to each other. Denote by the set of all nonzero vectors that are equivalent to ; the set is called the equivalence class of . The set of all equivalence classes with is the projective space , or equivalently, .
The vanishing ideal is defined to be the ideal consisting of all polynomials that vanish identically on . For each nonnegative integer , we denote by the linear subspace of polynomials of degree in . A projective variety is said to be nondegenerate if is not contained in any proper linear subspace of , i.e., .
In the Zariski topology for and , the closed sets are varieties and the open sets are complements of varieties. For a projective variety , its Hilbert function is defined as As in [11, 20, 21], when is sufficiently large, is a polynomial and
where denotes terms of order at most .
For an affine variety , we denote by the projective set of equivalence classes of nonzero vectors in , i.e., If , then is called the affine cone of . Clearly, the vanishing ideal of is the same as that of , i.e., . For a degree , the set
is the space of all forms of degree vanishing on .
Veronese maps
For an affine variety , let be the image of under the Veronese map:
Note that , where is a primitive -th root of . Therefore, the dimension of is the same as that of . In particular, for
the set is a variety of tensors that is defined by the equations
In the above, the tensors in are labelled by vectors in . The projectivization is called the rational normal curve in the projective space . For a projective variety , the Veronese embedding map is defined in the same way as
Note that for every affine variety .
Segre varieties
For projective spaces , their Segre product, denoted as , is the image of the Segre map:
The dimension of is the sum . The Segre product is defined by equations of the form
Here, tensors in are labelled by integral tuples in .
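For illustration (a NumPy sketch of ours with arbitrary sizes), a point of the Segre variety is the outer product of vectors from the factor spaces, and its entries satisfy quadratic exchange relations of the kind indicated above.

```python
from functools import reduce
import numpy as np

# A point of the Segre variety: the outer product u ⊗ v ⊗ w of vectors
# from three (possibly different-dimensional) spaces.
u, v, w = np.random.randn(2), np.random.randn(3), np.random.randn(4)
T = reduce(np.multiply.outer, [u, v, w])          # shape (2, 3, 4)

# Defining quadratic equations (illustrative instance): exchanging one index
# between two entries leaves the product of the entries unchanged.
i1, j1, k1 = 0, 1, 2
i2, j2, k2 = 1, 2, 0
assert np.isclose(T[i1, j1, k1] * T[i2, j2, k2],
                  T[i2, j1, k1] * T[i1, j2, k2])
```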
Secant varieties
Let be an affine variety and let be its image under the -th Veronese map . Define the set
The Zariski closure of , which we denote as , is called the th secant variety of . The closure is an affine variety in , while is usually not. However, it holds that
because is a dense subset of in the Zariski topology. When is replaced by a general variety , the sets and can be defined in the same way. We refer to [27] for secant varieties.
3. Properties of STDX
Let be a set that is given by homogeneous polynomial equations. For a given tensor , a symmetric -decomposition on is
(3.1) |
The smallest such is called the symmetric -rank of , which we denote as , or equivalently,
(3.2) |
When is the smallest, (3.1) is called a rank-retaining symmetric -decomposition for . It is possible that the decomposition (3.1) does not exist; for such a case, we define . For instance, a symmetric tensor admits a Vandermonde decomposition if and only if is a Hankel tensor. So, if is not Hankel, then . Interested readers are referred to [35, 39] for more details. We denote by the subspace of tensors which admit a symmetric -decomposition as in (3.1). As a counterpart for symmetric border rank, the symmetric border -rank of is similarly defined as
(3.3) |
where is the secant variety of , defined in Section 2. When is an irreducible variety, is also equal to the smallest integer such that is the limit of a sequence of tensors whose symmetric -rank is (see [27, Sec. 5.1.1] or [32, Theorem 2.33]). The generic symmetric -rank of is the smallest such that . When , the symmetric -rank becomes the usual symmetric rank (or Waring rank). If is the rational normal curve, the symmetric -rank becomes the Vandermonde rank for Hankel tensors [35, 39]. How can we characterize tensors that have a symmetric -decomposition as in (3.1)? How can we tell whether or ? These questions are the focus of this section.
3.1. Existence of symmetric -decompositions
Let be the projectivization of . Its vanishing ideal is , the ideal of polynomials that are identically zero on . The section of degree- forms in is
Let . Choose a basis for . Let be linear functionals on such that
(3.4) |
They are linearly independent functions on . If we use to denote the dehomogenization of , i.e., , then for all . See (2.4) for the definition of the operation .
Proposition 3.1.
Let be as above. Then, a tensor belongs to if and only if . Consequently, the codimension of is , i.e.,
Proof.
Clearly, if , then . Next, we prove the converse is also true. Suppose is such that . To show , it is enough to show that: if is a linear function vanishing on then . Each vanishes on and are linearly independent as vectors in . (The superscript ∗ denotes the dual space.) Note that and it vanishes on . So, there is a form such that for all . Since on , also vanishes on . So, is a linear combination of , and hence is a linear combination of . This implies that and .
Since are linearly independent, the subspace is defined by linearly independent linear equations. So its codimension is . Since the dimension of is , the dimension of follows from the codimension. ∎
The first part of Proposition 3.1 is a high dimensional analogue of the apolarity lemma, which can be found in [23, Theorem 5.3], [41, Section 1.3], and [48, Section 3]. For ease of reference, we state this result here and give a straightforward proof. We would like to thank Zach Teitler for pointing out the relationship between Proposition 3.1 and the apolarity lemma.
By Proposition 3.1, we get the following algorithm for checking if or not. Suppose the vanishing ideal is generated by the forms , with degrees respectively.
Algorithm 3.2.
For a given tensor , do the following:
-
Step 1:
Find the integer such that . (Here we adopt the convention that and .)
-
Step 2:
For each and , let
-
Step 3:
Check whether or not for all and in Step 2. If it is, then ; otherwise, .
The above algorithm can be easily applied to detect tensors in . For instance, if is defined by linear equations (), then if and only if for all and ,
If is a hypersurface defined by a single homogeneous polynomial with , then if and only if
for all monomials with .
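As a concrete instance of the hypersurface test above, the following SymPy sketch (our own illustration, not the paper's code) uses the classical differential apolarity pairing, to which Proposition 3.1 was compared: a ternary cubic is tested for being a sum of cubes of linear forms whose coefficient vectors lie on the conic defined by a single quadric. The example forms and function names are hypothetical.

```python
import itertools
import sympy as sp

syms = sp.symbols('x y z')
x, y, z = syms

def apply_diff(q, f):
    """Apply q, viewed as a constant-coefficient differential operator
    q(d/dx, d/dy, d/dz), to the polynomial f."""
    out = sp.Integer(0)
    for mon, c in sp.Poly(q, *syms).terms():
        g = f
        for s, a in zip(syms, mon):
            g = sp.diff(g, s, a)
        out += c * g
    return sp.expand(out)

def decomposes_on_hypersurface(f, h, m):
    """Apolarity test: a form f of degree m is a sum of m-th powers of linear
    forms with coefficient vectors on {h = 0} iff (h * x^alpha)(d) f = 0 for
    all monomials x^alpha of degree m - deg(h)."""
    d = sp.Poly(h, *syms).total_degree()
    for alpha in itertools.product(range(m - d + 1), repeat=len(syms)):
        if sum(alpha) != m - d:
            continue
        q = h
        for s, a in zip(syms, alpha):
            q *= s**a
        if apply_diff(q, f) != 0:
            return False
    return True

h = x*z - y**2                                           # X is the conic x*z = y^2
f = sum((s**2*x + s*t*y + t**2*z)**3 for s, t in [(1, 1), (1, 2), (2, 1)])
print(decomposes_on_hypersurface(f, h, 3))               # True
print(decomposes_on_hypersurface(y**3, h, 3))            # False: (0, 1, 0) is not on X
```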
When does , i.e., when does every tensor admit a symmetric -decomposition? By Proposition 3.1, we get the following characterization.
Corollary 3.3.
Let be as above. Then, the equality holds if and only if , which is equivalent to saying that is not contained in any hypersurface of degree .
The above corollary implies that if is a hypersurface of degree bigger than , then every tensor in has a symmetric -decomposition. Moreover, if , then obviously we have for any , which implies the well known fact [9] that every symmetric tensor admits a symmetric decomposition.
3.2. The dimension and expected rank
By Proposition 3.1, where is the Hilbert function for the projective variety . For the Veronese map , we have . Therefore, the expected dimension of the secant variety is:
The expected generic symmetric -rank of is therefore
(3.5) |
Example 3.4.
(i) If , the Segre variety, then . Its Hilbert function is given in [20, Example 18.15]. So
(ii) If is a hypersurface defined by a form of degree , the Hilbert function of is
Then, and the expected generic rank can be obtained accordingly.
When is a curve, we can get the dimension of as follows.
Proposition 3.5.
If is a non-degenerate curve (i.e., and is not contained in any proper linear subspace of ), then
Therefore, the generic symmetric -rank is . Moreover, if is nonsingular of genus and degree , then there exists an integer such that for all ,
Proof.
By [20, Example 13.7], if is a space curve of degree , i.e., is a curve in which intersects a generic plane in points, then in Proposition 3.5. For an arbitrary curve , however, not much is known about . The Hilbert functions are also known for Veronese varieties, Grassmann varieties and flag varieties, see [20, 19]. Hence the expected value of the generic rank for them may be calculated by (3.5).
3.3. Vandermonde decompositions for nonsymmetric tensors
A nonsymmetric tensor is said to admit a Vandermonde decomposition if there exist vectors (, )
such that
The smallest integer in the above is called the Vandermonde rank of , which we denote as . Since we can rewrite (3.6) equivalently as
(3.6) |
The Vandermonde decomposition can be thought of as a symmetric tensor decomposition on the set such that
The variety is exactly the Zariski closure of tensors whose Vandermonde ranks are at most . The vanishing ideal of the Segre variety is generated by the 2 × 2 minors of its flattenings [18]. So for all . By Corollary 3.3, is a proper subspace of . However, every has a Vandermonde decomposition.
Theorem 3.6.
Every tensor in has a Vandermonde decomposition.
Proof.
Each admits a general tensor decomposition, say,
for vectors . Choose distinct numbers and let
for . Clearly, are linearly independent and they span . For each , there exist numbers such that
Plugging the above expression of in the decomposition for , we get
for some scalars . ∎
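The proof above is constructive, and the following NumPy sketch (our illustration, with arbitrary dimensions and grid points) carries it out for a single rank-1 term: each factor is expanded in a basis of Vandermonde vectors, and the multilinear expansion gives a Vandermonde decomposition of that term. Applying this to every term of a general decomposition yields a Vandermonde decomposition of the whole tensor, as in Theorem 3.6.

```python
import itertools
from functools import reduce
import numpy as np

def vandermonde_basis(n, ts):
    """Columns are the Vandermonde vectors (1, t, t^2, ..., t^{n-1})."""
    return np.vander(ts, N=n, increasing=True).T   # shape (n, n)

rng = np.random.default_rng(0)
dims = (2, 3, 4)                                          # n_1, n_2, n_3
factors = [rng.standard_normal(n) for n in dims]          # one rank-1 term u1 ⊗ u2 ⊗ u3
T = reduce(np.multiply.outer, factors)

# Expand each factor in the Vandermonde basis of its mode (distinct nodes 1, 2, ...).
bases = [vandermonde_basis(n, np.arange(1, n + 1, dtype=float)) for n in dims]
coeffs = [np.linalg.solve(B, u) for B, u in zip(bases, factors)]   # u = sum_j c_j w_j

# Multilinear expansion: T = sum over (j1, j2, j3) of
# (c1_{j1} c2_{j2} c3_{j3}) * w_{j1} ⊗ w_{j2} ⊗ w_{j3}, a Vandermonde decomposition.
T_vdm = np.zeros(dims)
for js in itertools.product(*[range(n) for n in dims]):
    scale = np.prod([c[j] for c, j in zip(coeffs, js)])
    T_vdm += scale * reduce(np.multiply.outer,
                            [B[:, j] for B, j in zip(bases, js)])
assert np.allclose(T, T_vdm)
```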
4. Generating polynomials
Assume that is a set defined by homogeneous polynomial equations. For a given tensor with , we discuss how to compute the symmetric -decomposition
(4.1) |
Suppose is given as
(4.2) |
with homogeneous polynomials in . Denote . The dehomogenization of is the set
(4.3) |
If each , then the decomposition (4.1) is equivalent to that
(4.4) |
for scalars and vectors . In fact, they are
The assumption that is generic. For the rare case that it is zero, we can apply a generic coordinate change for so that this assumption holds. Throughout this section, we assume the rank is given.
4.1. Generating polynomials
Denote the quotient ring , where is the vanishing ideal of . It is also called the coordinate ring of . We list monomials in with respect to the graded lexicographic order, i.e.,
We choose to be the set of first monomials, whose images in are linearly independent, say,
The border set of is The boundary of is
For a matrix , we label it as
For each , denote the polynomial in
(4.5) |
Denote the tuple of all such polynomials as
(4.6) |
Let be the ideal generated by . The set is called a border basis of with respect to . We refer to [45, 14] for border sets and border bases.
In the following, we give the definition of generating polynomials which were introduced in [33]. For , the polynomial is called a generating polynomial for if
(4.7) |
(See (2.4) for the operation .) If is a generating polynomial for all , then is called a generating matrix for . The set of all generating matrices for is denoted as . The condition (4.7) is equivalent to that
(4.8) |
We use to denote the th column of . Then (4.8) can be rewritten as
where are given as
The solutions to (4.8) can be parameterized as , where is a solution to (4.8), is a basis matrix for the nullspace, and is the vector of free parameters. So, every generating matrix can be parameterized as
(4.9) |
where is a constant matrix and . The following is an example of parameterizing .
Example 4.1.
Consider the tensor such that
For , if we choose , then
The and can be formulated accordingly. For instance,
The generating matrix is uniquely determined, i.e.,
For , if we choose then The generating matrix has parameters, i.e.,
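The concrete linear system (4.8) depends on the given tensor and is not reproduced here; the following SciPy sketch (with hypothetical data) only illustrates the parameterization pattern behind (4.9): all solutions of a consistent underdetermined linear system are a particular solution plus a nullspace combination with free parameters.

```python
import numpy as np
from scipy.linalg import lstsq, null_space

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 7))        # stand-in for the linear system (4.8)
b = A @ rng.standard_normal(7)         # consistent right-hand side

w0, *_ = lstsq(A, b)                   # one particular solution
N = null_space(A)                      # columns span the nullspace of A

# Every solution is w0 + N @ c; this is the analogue of the
# parameterization in (4.9), with c the vector of free parameters.
c = rng.standard_normal(N.shape[1])
w = w0 + N @ c
assert np.allclose(A @ w, b)
```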
4.2. Polynomial division by
From now on, suppose the vanishing ideal is generated by . For , denote by the normal form of with respect to , i.e., is the remainder of divided by , obtained by the Border Division Algorithm [22, Proposition 6.4.11]. Formally, is the polynomial such that , the ideal generated by polynomials in . Note that is a polynomial in whose coefficients are parametrized by the entries of .
Proposition 4.2.
Suppose is the set of first monomials in the graded lexicographic ordering such that their images in are linearly independent. (We remark that instead of the first monomials, one may choose any monomials which are connected to one; interested readers are referred to [31] for a detailed discussion. For simplicity, we use in this paper the set of first monomials, which is obviously connected to one. This choice of basis will be convenient in practical computations like those in Section 6.) Then, a polynomial belongs to if and only if .
Proof.
By construction, is a border basis of , so it contains a Gröbner basis (say, ) of with respect to the graded lexicographic ordering. Indeed, those elements in which are associated to the corner of form a Gröbner basis. See [45, Proposition 2.30] or [14, Proposition 4.3.9] for the definition of the corner of and more details of the proof. Therefore, if and only if . Since , the normal form of with respect to is the same as the normal form with respect to , which is . Therefore, if and only if . ∎
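For readers who wish to experiment with normal forms, here is a SymPy sketch (our illustration, with an ideal unrelated to the paper's examples) that computes the remainder of a polynomial upon division by a Gröbner basis; by Proposition 4.2, membership in the ideal corresponds to a vanishing normal form.

```python
import sympy as sp

x, y = sp.symbols('x y')
# Illustrative zero-dimensional ideal (not the paper's) in C[x, y].
gens = [x**2 - 2, y**2 - 3]
G = sp.groebner(gens, x, y, order='grlex')

# Normal form of p modulo the ideal: the remainder of division by G.
p = x**3*y + x*y**3 + x + 1
_, nf = sp.reduced(p, G.exprs, x, y, order='grlex')
print(sp.expand(nf))       # remainder supported on the standard monomials {1, x, y, x*y}

# A polynomial lies in the ideal iff its normal form is zero (cf. Proposition 4.2).
_, nf0 = sp.reduced((x**2 - 2)*y + x*(y**2 - 3), G.exprs, x, y, order='grlex')
print(nf0)                 # 0
```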
The condition is equivalent to its coefficients being identically zero. The coefficients of are polynomials in . Their degrees can be bounded as follows. For a degree , define the set recursively as
The monomials in generate a subspace, which we denote as .
Lemma 4.3.
Let and be as above. Write , where and is a polynomial whose monomials are not contained in any . If , then the coefficients of are polynomials of degree at most in .
Proof.
Note that , because is not reducible by . The coefficients of do not depend on .
For the case , we can write as
with coefficients . The reduction of by is equivalent to that
where is affine linear in the entries of and are affine linear in the coefficients of . So, the coefficients of are affine linear in .
For the case , we can write each as
with and coefficients . For each , denote by the smallest such that . Then
By induction, the coefficients of are of degree in . ∎
4.3. Commutativity conditions
For each and , define the matrix as
They are also called multiplication matrices for the ideal .
Theorem 4.5.
Let be the set of first monomials whose images in are linearly independent. Then, for , the polynomial system
(4.10) |
has solutions (counting multiplicities) and they belong to if and only if
(4.11) |
(4.12) |
where .
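The matrices in (4.10)-(4.12) are not reproduced above; the following NumPy sketch (a hand-built example of ours, with the ideal generated by x²−2 and y²−3 and the monomial basis {1, x, y, xy}, not the paper's data) illustrates the role of multiplication matrices: they commute, and the common zeros can be recovered from eigenvectors, in the spirit of the eigenvalue-based root finding of [10] referred to in Section 5.

```python
import numpy as np

# Multiplication matrices on C[x, y]/I for I = (x^2 - 2, y^2 - 3),
# in the monomial basis B = {1, x, y, xy} (built by hand for illustration).
Mx = np.array([[0, 2, 0, 0],
               [1, 0, 0, 0],
               [0, 0, 0, 2],
               [0, 0, 1, 0]], dtype=float)   # column j = coordinates of x * B[j]
My = np.array([[0, 0, 3, 0],
               [0, 0, 0, 3],
               [1, 0, 0, 0],
               [0, 1, 0, 0]], dtype=float)   # column j = coordinates of y * B[j]

# Commutativity, as required by conditions of the type (4.11)-(4.12).
assert np.allclose(Mx @ My - My @ Mx, 0)

# Common zeros from eigenvectors of a random combination (cf. [10]): the
# transposed matrices share the evaluation vectors (1, x, y, xy) at each zero.
t = np.random.randn(2)
_, V = np.linalg.eig((t[0] * Mx + t[1] * My).T)
for v in V.T:
    v = v / v[0]                 # normalize so the coordinate of the monomial 1 is 1
    print(v[1].real, v[2].real)  # the four zeros (±sqrt(2), ±sqrt(3))
```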
5. An algorithm for computing STDXs
Let be the varieties as in (4.2) and (4.3). This section discusses how to compute a symmetric tensor decomposition on for a given tensor whose symmetric -rank is . To compute (4.1), it is enough to compute
(5.1) |
for vectors and scalars .
Choose to be the set of first monomials that are linearly independent in . For a point , denote by the column vector whose th entry is . Denote the Zariski open subset of
Recall that is the tuple of generating polynomials as in (4.6) and denotes the set of generating matrices for .
Theorem 5.1.
Proof.
The conclusions mostly follow from Theorem 3.2 of [33]. The difference is that we additionally require the points . ∎
According to Theorem 5.1, to compute a symmetric -rank decomposition for , we need to find a generating matrix such that
-
(1)
has distinct zeros.
-
(2)
The zeros of are contained in , i.e., .
Conditions (1) and (2) are equivalent to (4.11) and (4.12) by Theorem 5.1. Suppose the vanishing ideal . We have the following algorithm.
Algorithm 5.2.
For a given with , do the following:
-
Step 0
Choose the set of first monomials that are linearly independent in the quotient ring , with respect to the graded lexicographic monomial order.
-
Step 1
Parameterize the generating matrix as in (4.9).
-
Step 2
For each , compute with respect to .
-
Step 3
Compute a solution of the polynomial system
(5.2) -
Step 4
Compute zeros of the polynomial system
-
Step 5
Determine scalars satisfying (5.1).
The major task of Algorithm 5.2 is in Step 3. It requires solving a system of polynomial equations, given by commutativity conditions and normal forms. The commutativity equations are quadratic in the parameter . The normal form equations are polynomial in , and their degrees are bounded in Lemma 4.3. One can apply existing symbolic or numerical methods for solving polynomial equations. In Step 4, the polynomials have special structures. One can get a Gröbner basis quickly, hence their zeros can be computed efficiently. We refer to [10] and [33, Sec. 2.4] for how to compute their common zeros.
Remark 5.3.
In Algorithm 5.2, we need to know a value of , with . Typically, such a is not known. In practice, we can choose heuristically. For instance, we can choose to be the expected generic rank given in Subsection 3.2. If the flattening matrices of have low ranks, we can choose to be the maximum of their ranks. In any case, if the system (5.2) cannot be solved, we can increase the value of and repeat the algorithm.
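As a rough illustration of this heuristic (our own sketch, with an arbitrary low-rank symmetric tensor), the flattening matrices are reshapings of the tensor into matrices, and the maximum of their ranks gives a natural initial guess for the rank.

```python
from functools import reduce
import numpy as np

def flattening_ranks(F):
    """Ranks of the (k, m-k) flattenings of an order-m, dimension-n tensor F:
    reshape F into an n^k-by-n^(m-k) matrix for k = 1, ..., m-1."""
    n, m = F.shape[0], F.ndim
    return [np.linalg.matrix_rank(F.reshape(n**k, n**(m - k)))
            for k in range(1, m)]

# A rank-3 symmetric example: its flattening ranks are all 3,
# so 3 is the initial guess for the decomposition length.
us = [np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 1.0]), np.array([1.0, -1.0, 2.0])]
F = sum(reduce(np.multiply.outer, [u] * 4) for u in us)
print(flattening_ranks(F))   # [3, 3, 3]
```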
We conclude this section with an example of applying Algorithm 5.2.
Example 5.4.
Let be the same tensor as in Example 4.1. Let be the set whose dehomogenization is the surface whose defining ideal is . The maximum rank of flattening matrices of is , so we apply Algorithm 5.2 with . Choose . From the calculation in Example 4.1, we can get
The matrices and commute. The generating polynomials are
They have common zeros. There are no radical formulae for them, but they can be numerically evaluated as
This gives the symmetric -decomposition for as
Because of numerical errors, we do not have exactly, but the round-off error .
6. Numerical experiments
In this section, we present examples of applying Algorithm 5.2 to compute symmetric -rank decompositions. The computation is implemented on a laptop with a 2.5 GHz Intel Core i7 processor. The software for carrying out the numerical experiments is Maple 2017. We solve the system (5.2) by the built-in function fsolve directly. The algorithm returns a decomposition
Because of round-off errors, the equation does not hold exactly. We use the absolute error or the relative one to verify the correctness. Here, the Hilbert-Schmidt norm of is used, i.e.,
We display the computed decompositions by showing
For neatness, only four decimal digits are shown, and denotes the unit pure imaginary number. If the real or imaginary part of a complex number is smaller than , we treat it as zero and do not display it, for cleanness of the paper. To apply Algorithm 5.2, we need a value for the rank . This issue is discussed in Remark 5.3. In our computation, we initially choose to be the maximum rank of the flattening matrices of the given tensor . If the equations (4.8) or (5.2) are inconsistent, we need to increase the value of by one, until Algorithm 5.2 successfully returns a decomposition.
For the set dehomogenized from as in (4.3), we need generators of its vanishing ideal . For some , it is easy to compute the generators of ; for others, it may be difficult to compute them. This is a classical, standard problem in symbolic computation. We refer to [13, 15, 17] for related work. So, we do not focus on how to compute generators of in this paper. In our examples, the generators of are known or can be computed easily.
First we illustrate how to apply Algorithm 3.2 to detect the existence of the STDX for a given and .
Example 6.1.
We consider that is given as
and that is defined by , . According to Algorithm 3.2, we have
We let be and hence we have . It is straightforward to verify that and hence has no STDX for . Another example is
and is defined by . By Algorithm 3.2 again, we can show easily that does not admit an STDX for such .
The remaining examples in this section are devoted to exhibiting the validity and efficiency of Algorithm 5.2. To this end, we make the following convention on the representation of tensors. Recall that a tensor is an array of numbers whose elements are indexed by , i.e., , where . As in Subsection 2.1, we can equivalently represent by ’s, where satisfies . To be more precise, for each element in , we have
where is the sequence such that and . We may list the entries with respect to the lexicographic order, i.e., precedes if and only if the leftmost nonzero entry of is positive. For instance, a binary cubic tensor can be displayed as . In the following examples, we will represent a symmetric tensor in this way.
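The following sketch (ours; the helper names are hypothetical) enumerates the labels in the lexicographic order just described and extracts the corresponding distinct entries of a symmetric tensor.

```python
import itertools
from functools import reduce
import numpy as np

def degree_m_exponents(n, m):
    """All exponent vectors of total degree m, in the lexicographic order of the
    text: one label precedes another iff the leftmost nonzero entry of their
    difference is positive (i.e., plain descending tuple order)."""
    alphas = [a for a in itertools.product(range(m + 1), repeat=n) if sum(a) == m]
    return sorted(alphas, reverse=True)

def labelled_entries(F):
    """The distinct entries of a symmetric tensor F, listed as above."""
    n, m = F.shape[0], F.ndim
    out = []
    for alpha in degree_m_exponents(n, m):
        idx = tuple(i for i, a in enumerate(alpha) for _ in range(a))
        out.append(F[idx])
    return out

u, v = np.array([1.0, 2.0]), np.array([1.0, -1.0])
F = reduce(np.multiply.outer, [u] * 3) + reduce(np.multiply.outer, [v] * 3)
# For a binary cubic this lists the four distinct entries in lexicographic order.
print(labelled_entries(F))
```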
Example 6.2.
Let be the parabola defined by . Let be the symmetric tensor such that
The vanishing ideal of is
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , it does not give a desired tensor decomposition. So, we apply Algorithm 5.2 with and the symmetric -decomposition with
We have and the error .
Example 6.3.
Let be the nodal curve defined by The vanishing ideal of is
Let be the symmetric tensor such that
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , it fails to give a desired tensor decomposition. So, we apply Algorithm 5.2 with and get the symmetric -decomposition with
We have and the error .
Example 6.4.
Let be the union of two planes defined by . Let be the symmetric tensor such that
The vanishing ideal of is
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , it fails to give a desired tensor decomposition. So, we use and apply Algorithm 5.2. It returns the symmetric -decomposition
We have and the error .
Example 6.5.
Let be the surface defined by . Then is the monkey saddle surface whose vanishing ideal is
Let be the symmetric tensor such that
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , it fails to give a desired tensor decomposition. So, we apply Algorithm 5.2 with and get the symmetric -decomposition
In the next two examples, we still display the tensor entries according to the lexicographic order, but we drop the labelling indices, for cleanness of the paper.
Example 6.6.
Let be the curve defined by
The vanishing ideal of is then
Let be the symmetric tensor whose entries are
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , it fails to give a desired tensor decomposition. So, we apply Algorithm 5.2 with and get the symmetric -decomposition
We have the norm and the error .
Example 6.7.
Let be the surface defined by
Then the vanishing ideal of the variety is
Let be the symmetric tensor whose entries are respectively
The maximum rank of flattening matrices of is . When we run Algorithm 5.2 with , we get the symmetric -decomposition
We conclude this section by considering various examples on Segre varieties.
Example 6.8.
(Vandermonde decompositions of nonsymmetric tensors) Each tensor has a Vandermonde decomposition as in (3.6), which is proved in Theorem 3.6. We can view as a tensor in with the set such that ( is repeated times). Let
A vector can be labelled as , with binary vectors . Under this labelling, the set is defined by the homogeneous equations (cf. [20, Example 2.11] or [27, Section 4.3.5])
(6.1) |
for all such that for some ,
Under the dehomogenization , the corresponding affine variety consists of vectors , labelled as with , defined by the vanishing ideal
where are as above and .
We apply Algorithm 5.2 to compute Vandermonde decompositions for random tensors whose entries are randomly generated (from the normal distribution). For all the instances, Algorithm 5.2 successfully computed Vandermonde rank decompositions. The relative errors are all of the magnitude . The computational time consumed (in seconds) is also reported. The results are displayed in Table 1.
time | time | ||||||||
---|---|---|---|---|---|---|---|---|---|
2 | 3 | 3 | 4 | 7.07 | 3 | 7 | 3 | 7 | 7.45 |
2 | 3 | 4 | 4 | 7.08 | 3 | 7 | 3 | 9 | 9.74 |
2 | 3 | 4 | 5 | 7.11 | 3 | 7 | 3 | 10 | 19.04 |
2 | 3 | 4 | 6 | 7.31 | 3 | 7 | 4 | 8 | 6.85 |
2 | 3 | 4 | 7 | 7.45 | 3 | 7 | 4 | 9 | 6.88 |
2 | 3 | 4 | 8 | 8.11 | 3 | 7 | 4 | 10 | 6.54 |
2 | 3 | 5 | 5 | 7.10 | 3 | 7 | 4 | 11 | 8.35 |
2 | 3 | 5 | 6 | 7.08 | 3 | 7 | 4 | 12 | 11.37 |
2 | 3 | 5 | 7 | 7.22 | 3 | 7 | 4 | 13 | 22.84 |
2 | 3 | 5 | 8 | 7.17 | 3 | 7 | 4 | 14 | 41.45 |
2 | 3 | 5 | 9 | 7.37 | 3 | 7 | 4 | 15 | 69.26 |
2 | 3 | 5 | 10 | 7.76 | 3 | 7 | 4 | 16 | 134.47 |
2 | 3 | 5 | 11 | 8.87 | 3 | 7 | 4 | 17 | 224.97 |
2 | 3 | 6 | 12 | 7.84 | 3 | 7 | 4 | 18 | 382.00 |
2 | 3 | 6 | 13 | 7.97 | 3 | 7 | 5 | 19 | 8.35 |
2 | 3 | 6 | 14 | 8.50 | 3 | 7 | 5 | 20 | 8.45 |
2 | 3 | 6 | 15 | 9.26 | 3 | 7 | 5 | 21 | 8.65 |
2 | 3 | 6 | 16 | 12.33 | 3 | 7 | 5 | 22 | 8.65 |
2 | 3 | 7 | 17 | 8.27 | 3 | 7 | 5 | 23 | 8.80 |
2 | 3 | 7 | 18 | 12.11 | 3 | 7 | 5 | 24 | 9.08 |
2 | 3 | 7 | 19 | 34.24 | 3 | 7 | 5 | 25 | 8.96 |
2 | 3 | 7 | 20 | 722.98 | 3 | 7 | 6 | 26 | 9.10 |
7. Conclusion
In this paper, we discuss how to compute symmetric -decompositions of symmetric tensors on a given variety X. The tool of generating polynomials is used to do the computation. Based on that, we give an algorithm for computing symmetric -decompositions. Various examples are given to demonstrate the correctness and efficiency of the proposed method.
References
- [1] H. Abo and M. Brambilla. On the dimensions of secant varieties of Segre-Veronese varieties. Annali di Matematica Pura ed Applicata, 192(1):61–92, 2013.
- [2] J. Alexander and A. Hirschowitz. Polynomial interpolation in several variables. Journal of Algebraic Geometry, 4 (1995), pp. 201–222.
- [3] E. Ballico and A. Bernardi. Decomposition of homogeneous polynomials with low rank. Math. Z. 271, 1141-1149, 2012.
- [4] A. Bernardi, A. Gimigliano and M. Idà. Computing symmetric rank for symmetric tensors. Journal of Symbolic Computation 46, (2011), 34-53.
- [5] J. Brachat, P. Comon, B. Mourrain, and E. Tsigaridas. Symmetric tensor decomposition. Linear Algebra and its Applications, 433(11):1851–1872, 2010.
- [6] L. Chiantini, G. Ottaviani, and N. Vannieuwenhoven. On generic identifiability of symmetric tensors of subgeneric rank. Trans. Amer. Math. Soc., 369 (2017), 4021-4042.
- [7] L. Chiantini, J. Hauenstein, C. Ikenmeyer, J. Landsberg, and G. Ottaviani. Polynomials and the exponent of matrix multiplication. arXiv preprint arXiv:1706.05074, 2017.
- [8] G. Comas and M. Seiguer. On the rank of a binary form. Foundations of Computational Mathematics, Vol. 11, No. 1, pp. 65-78, 2011.
- [9] P. Comon, G. Golub, L.-H. Lim, and B. Mourrain. Symmetric tensors and symmetric tensor rank. SIAM Journal on Matrix Analysis and Applications, 30(3):1254–1279, 2008.
- [10] R. Corless, P. Gianni, and B. Trager. A reordered schur factorization method for zero-dimensional polynomial systems with multiple roots. In International Symposium on Symbolic and Algebraic Computation, pp. 133–140, 1997.
- [11] D. Cox, J. Little and D. O’Shea. Ideals, varieties, and algorithms: an introduction to computational algebraic geometry and commutative algebra, Springer, 2007.
- [12] L. De Lathauwer and B. De Moor. From matrix to tensor: Multilinear algebra and signal processing. In Institute of Mathematics and Its Applications Conference Series, vol. 67, pp. 1–16, 1998.
- [13] D. Eisenbud, C. Huneke and W. Vasconcelos. Direct methods for primary decomposition. Inventiones Mathematicae, 110(1):207–235, 1992.
- [14] A. Kehrein, M. Kreuzer and L. Robbiano. An algebraist’s view on border bases. In: A. Dickenstein et al. (eds) Solving Polynomial Equations: Foundations, Algorithms, and Applications. Algorithms and Computation in Mathematics, vol. 14, pp. 169–202. Springer-Verlag, New York-Heidelberg, 2005.
- [15] E. Fortuna, P. Gianni and B. Trager. Derivations and radicals of polynomial ideals over fields of arbitrary characteristic. Journal of Symbolic Computation, 33.5 (2002): 609-625.
- [16] F. Galuppi and M. Mella. Identifiability of homogeneous polynomials and Cremona Transformations. Preprint, 2016. arXiv:1606.06895v2[math.AG]
- [17] P. Gianni, B. Trager, and G. Zacharias. Gröbner bases and primary decomposition of polynomial ideals. Journal of Symbolic Computation, 6(2-3):149–167, 1988.
- [18] R. Grone. Decomposable tensors as a quadratic variety. Proceedings of the American Mathematical Society, 64(2):227–230, 1977.
- [19] B. Gross and N. Wallach. On the Hilbert polynomials and Hilbert series of homogeneous projective varieties. Arithmetic geometry and automorphic forms 19 (2011): 253-263.
- [20] J. Harris. Algebraic geometry, Graduate Texts in Mathematics, vol. 133. Springer-Verlag, New York, 1992.
- [21] R. Hartshorne. Algebraic geometry. Graduate Texts in Mathematics, vol. 52. Springer-Verlag, New York-Heidelberg, 1977.
- [22] M. Kreuzer and L. Robbiano. Computational commutative algebra 2. Springer Science & Business Media, 2005.
- [23] A. Iarrobino and V. Kanev. Power Sums, Gorenstein algebras, and determinantal varieties. Lecture Notes in Mathematics #1721, Springer, 1999.
- [24] T. Kolda and B. Bader. Tensor decompositions and applications. SIAM Rev. vol. 51, no. 3, 455-500, 2009.
- [25] A. Laface and E. Postinghel. Secant varieties of Segre-Veronese embeddings of . Mathematische Annalen, 356(4):1455–1470, 2013.
- [26] J. Landsberg. The border rank of the multiplication of matrices is seven. Journal of the American Mathematical Society, 19(2):447–459, 2006.
- [27] J. Landsberg. Tensors: geometry and applications. Graduate Studies in Mathematics, 128, AMS, Providence, RI, 2012.
- [28] J. Landsberg and G. Ottaviani. Equations for secant varieties of Veronese and other varieties. Annali di Matematica Pura ed Applicata, 192(4):569–606, 2013.
- [29] L.-H. Lim and P. Comon. Multiarray signal processing: Tensor decomposition meets compressed sensing. Comptes Rendus Mecanique, 338(6):311–320, 2010.
- [30] L.-H. Lim. Tensors and hypermatrices, in: L. Hogben (Ed.), Handbook of linear algebra, 2nd Ed., CRC Press, Boca Raton, FL, 2013.
- [31] B. Mourrain. A new criterion for normal form algorithms. International Symposium on Applied Algebra, Algebraic Algorithms, and Error-Correcting Codes, Springer, Berlin, Heidelberg, 1999.
- [32] D. Mumford. Algebraic Geometry I: Complex Projective Varieties. Springer Verlag, Berlin, 1995.
- [33] J. Nie. Generating polynomials and symmetric tensor decompositions. Foundations of Computational Mathematics, 17(2):423–465, 2017.
- [34] J. Nie. Low rank symmetric tensor approximations, SIAM J. Matrix Anal. Appl., 38 (no. 4), pp. 1517–1540, 2017.
- [35] J. Nie and K. Ye. Hankel tensor decompositions and ranks. SIAM J. Matrix Anal. Appl., 40 (no. 2), 486–516, 2019.
- [36] D. Nion and N. Sidiropoulos. Tensor algebra and multidimensional harmonic retrieval in signal processing for MIMO radar. IEEE Transactions on Signal Processing, 58(11):5693–5705, 2010.
- [37] L. Oeding and G. Ottaviani. Eigenvectors of tensors and algorithms for Waring decomposition. Journal of Symbolic Computation, 54:9–35, 2013.
- [38] J. Papy, L. De Lathauwer, and S. Huffel. Exponential data fitting using multilinear algebra: the single-channel and multi-channel case. Numerical linear algebra with applications, 12(8):809–826, 2005.
- [39] L. Qi. Hankel tensors: Associated Hankel matrices and Vandermonde decomposition. Communications in Mathematical Sciences, 13(1):113–125, 2015.
- [40] C. Raicu. Secant varieties of Segre–Veronese varieties. Algebra & Number Theory, 6(8):1817–1868, 2012.
- [41] K. Ranestad and F.-O. Schreyer. Varieties of sums of powers. Journal für die reine und angewandte Mathematik, (2000): 147-182.
- [42] S. Sam. Ideals of bounded rank symmetric tensors are generated in bounded degree. Inventiones mathematicae, 207(1):1–21, 2017.
- [43] S. Sam. Syzygies of bounded rank symmetric tensors are generated in bounded degree. Mathematische Annalen, 368(3-4):1095–1108, 2017.
- [44] M. Signoretto, L. De Lathauwer, and J. Suykens. A kernel-based framework to tensorial data analysis. Neural networks, 24(8):861–874, 2011.
- [45] H. Stetter. Numerical Polynomial Algebra. SIAM, Philadelphia, 2004.
- [46] V. Strassen. Gaussian elimination is not optimal. Numerische mathematik, 13(4):354–356, 1969.
- [47] W. Sun and H. So. Accurate and computationally efficient tensor-based subspace approach for multidimensional harmonic retrieval. IEEE Transactions on Signal Processing, 60(10):5077–5088, 2012.
- [48] Z. Teitler. Sufficient conditions for Strassen’s additivity conjecture. Illinois Journal of Mathematics, 59(4):1071–1085, 2015.
- [49] S. Trickett, L. Burroughs, A. Milton, et al. Interpolation using Hankel tensor completion. In 2013 SEG Annual Meeting. Society of Exploration Geophysicists, 2013.
- [50] W. Uemura and O. Sugino. Symmetric tensor decomposition description of fermionic many-body wave functions. Physical Review Letters, 109(25):253001, 2012.
- [51] K. Ye and L.-H. Lim. Fast structured matrix computations: tensor rank and Cohn–Umans method. Foundations of Computational Mathematics, 18(1), pp. 45–95, 2018.
- [52] K. Ye and L.-H. Lim. Tensor network ranks. arXiv preprint arXiv:1801.02662, 2018.