From Signal Space To STP-CS
Abstract
Under the assumption that finite signals obtained from the same underlying signal with different sampling lengths or different sampling frequencies are considered equivalent, the signal space is regarded as the quotient space of the cross-dimensional Euclidean space over this equivalence. The topological structure and the properties of the signal space are investigated. Using them, some characteristics of semi-tensor product based compressed sensing (STP-CS) are revealed. Finally, a systematic analysis of the construction of sensing matrices based on the balanced incomplete block design (BIBD) is presented.
Index Terms:
Semi-tensor product (STP), compressed sensing (CS), sensing matrix, lossless (lossy) compression/decompression.

I Introduction
Compressed sensing (CS), as a new technique for signal processing, was initially proposed by Donoho, Candes, Tao, et al. in 2006 [18, 4]. The basic idea of CS is to compress a signal $x\in\mathbb{R}^n$ through a matrix $A\in\mathbb{R}^{m\times n}$, $m\ll n$, called the sensing matrix, to get the sampled data $y\in\mathbb{R}^m$. That is,

$$y = Ax. \qquad (1)$$

The sensing matrix is obtained as follows: assume the original signal is $x\in\mathbb{R}^n$ and the sampled data is $y = \Phi x$. Then CS tries to design $\Psi$ such that $x = \Psi s$, where $s$ has a limited number of nonzero entries. The sensing matrix is then obtained as $A = \Phi\Psi$. Since $A$ is constructed in such a way, it is possible to recover $x$ from the sampled data $y$ [5, 6].

A fundamental characteristic is the spark of $A$, denoted by $\operatorname{spark}(A)$, which is the smallest number of linearly dependent columns of $A$. The following is a criterion for deciding whether the original signal can be recovered from the sampled data.

Denote by $\|x\|_0$ the number of nonzero entries in a vector $x$, and by $\Sigma_s$ the set of $x$ with $\|x\|_0\leq s$. Then we have the following result.

Proposition I.1

[17] Consider (1). If $s < \operatorname{spark}(A)/2$, then for each $y$ there is at most one solution $x\in\Sigma_s$.
Hence CS breaks the Nyquist sampling limit, which says that to avoid aliasing the sampling frequency should be greater than twice the highest frequency of the signal [21].
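To fix ideas, the following is a minimal numerical sketch of model (1) and sparse recovery; the sizes and the greedy routine `omp` (orthogonal matching pursuit) are illustrative choices, not constructions from this paper.

```python
# Minimal CS sketch: sense an s-sparse x with a random A, recover it greedily.
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 64, 5                            # signal length, samples, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
y = A @ x                                       # compressed samples (1), m << n

def omp(A, y, s):
    """Orthogonal matching pursuit: greedy recovery of an s-sparse solution."""
    r, support = y.copy(), []
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ r))))   # most correlated column
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs                        # update residual
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = xs
    return x_hat

print(np.max(np.abs(omp(A, y, s) - x)))         # ~1e-15: exact recovery
```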
Constructing the sensing matrix is one of the fundamental issues in CS, and it has been investigated by many authors. For instance, [16, 15] proposed techniques to construct a structurally random matrix as the sensing matrix. Many deterministic sensing matrices have also been constructed [1, 33, 36]. To reduce the storage space of the sensing matrices, many methods have been developed, including orthogonal bases [20], low-rank matrices [26, 3], etc.
The semi-tensor product (STP) of matrices/vectors is a generalization of the conventional matrix/vector product, which removes the dimension restriction on the factor matrices/vectors [10]. STP was first applied to CS in the pioneering work [32], where the technique was named STP-CS. The basic idea of STP-CS is to replace the fundamental CS model (1) by

$$y = A\ltimes x, \qquad (2)$$

where $A\in\mathbb{R}^{m\times n}$ and $x\in\mathbb{R}^{nk}$ for some $k\in\mathbb{Z}_+$.
There are two obvious advantages of STP-CS: (i) it can reduce the storage of the sensing matrix; (ii) it can be used for signals of any dimension $q$ as long as $n\mid q$.
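A small sketch of advantage (i), assuming the standard identification $A\ltimes x = (A\otimes I_k)x$: only the $m\times n$ matrix $A$ is stored, while the effective sensing matrix has size $mk\times nk$, and the product can be computed without ever forming the Kronecker matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 32, 128, 16
A = rng.standard_normal((m, n))                 # stored: m*n entries only
x = rng.standard_normal(n * k)                  # signal of dimension n*k

y_fast = (A @ x.reshape(n, k)).reshape(m * k)   # (A ⊗ I_k) x via one reshape
y_full = np.kron(A, np.eye(k)) @ x              # explicit (m*k) x (n*k) matrix
print(np.allclose(y_fast, y_full))              # True
print(A.size, np.kron(A, np.eye(k)).size)       # 4096 vs 1048576 entries
```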
There are some fundamental characteristics which are used to evaluate the quality of sensing matrices.
Definition I.2
[19] (RIP, restricted isometry property) A matrix $A$ is said to satisfy the $(s,\delta_s)$-RIP if there exists $\delta_s\in(0,1)$ such that

$$(1-\delta_s)\|x\|_2^2 \leq \|Ax\|_2^2 \leq (1+\delta_s)\|x\|_2^2, \quad \forall x\in\Sigma_s. \qquad (3)$$

When $\delta_{2s}$ is small enough, the RIP can ensure a lossless recovery of the signal.
Unfortunately, verifying the RIP is in general difficult. An alternative criterion is the coherence $\mu(A)$, the largest absolute normalized inner product between distinct columns of $A$.
Remark I.4
It was proved in [32] that the sensing matrix $A\otimes I_k$ has exactly the same CS-related properties as $A$.
Proposition I.5
Proposition I.5 provides a theoretical foundation for STP-CS.
Since then, there has been considerable research on STP-CS. For instance, a further development of STP-CS, called the P-tensor product CS, was presented in [27], and [25] considered how to construct a random sensing matrix for STP-CS. Several applications have shown the efficiency of STP-CS: the design of one-bit compressed sensing via STP-CS was proposed in [22]; [7] used STP-CS for secure medical image transmission; the problem of reducing the storage space of the sensing matrix using STP-CS was investigated in [30]; a secure image encryption scheme using STP-CS was proposed in [31]; [23] considered storage-constrained smart meter sensing using STP-CS; [35] proposed a reversible image hiding algorithm using STP-CS; etc.
The purpose of this paper is threefold. (1) Propose the structure of the signal space: under the assumption that finite signals with different sampling lengths or different sampling frequencies may be equivalent, the signal space is considered as the quotient space of the cross-dimensional Euclidean space over this equivalence. The structure of the signal space provides a solid framework for the design of STP-CS. (2) Reveal some fundamental properties of STP-CS. (3) Propose a new technique for constructing sensing matrices based on the BIBD.
The rest of this paper is organized as follows. Section 2 is a systematic review of both left and right STPs, their quotient spaces, and their vector space structures. Section 3 investigates the topological structure of the signal space, which is the quotient space of the cross-dimensional Euclidean space, with its topology deduced from a distance. In Section 4 the signal space is embedded into various functional spaces, including a Fréchet space, Banach spaces, and a Hilbert space. These functional spaces provide different structures for the signal space, which reveal its related properties. Section 5 provides a generalized dimension-free STP-CS, which contains the STP-CS in the current literature as a special case. A precise lossless recovery condition for STP-CS is obtained. Section 6 proposes a new construction technique for the BIBD-based design of sensing matrices. Finally, Section 7 is a brief conclusion.
Before ending this section we give a list of notations.
- $\mathbb{R}$: the set of real numbers.
- $\mathbb{Q}$ ($\mathbb{Q}_+$): the set of (positive) rational numbers.
- $\mathbb{Z}$ ($\mathbb{Z}_+$): the set of (positive) integers.
- $\mathcal{V}_n$ ($\mathcal{V}$): $\mathcal{V}_n=\mathbb{R}^n$ and $\mathcal{V}=\bigcup_{n=1}^{\infty}\mathcal{V}_n$.
- $\mathcal{M}_{m\times n}$: the set of $m\times n$ real matrices.
- $\mathcal{M}_{\mu}$: $\{A\in\mathcal{M}_{m\times n}\;|\;m/n=\mu\}$, where $\mu\in\mathbb{Q}_+$.
- $[a,b]$: the set of integers $\{a,a+1,\dots,b\}$, where $a\leq b$.
- $\lceil r\rceil$: the smallest integer upper bound of $r$, i.e., the smallest integer $\geq r$.
- $\operatorname{lcm}(a,b)$: the least common multiple of $a$ and $b$.
- $\gcd(a,b)$: the greatest common divisor of $a$ and $b$.
- $\delta_n^i$: the $i$-th column of the identity matrix $I_n$.
- $\Delta_n$: $\{\delta_n^i\;|\;i\in[1,n]\}$.
- $\mathbf{1}_{\ell}$: $(\underbrace{1,1,\dots,1}_{\ell})^T$.
- $\operatorname{span}\{\cdot\}$: the subspace or dual subspace generated by $\{\cdot\}$.
- $\boxplus$ ($\boxplus_r$): left (right) semi-tensor addition.
- $\ltimes$: (left) matrix-matrix semi-tensor product, which is the default one.
- $\rtimes$: right semi-tensor product of matrices.
- $\vec{\ltimes}$ ($\vec{\rtimes}$): left (right) matrix-vector semi-tensor product.
- $\Sigma_s$: the set of $n$-dimensional vectors with at most $s$ nonzero entries.
- $\Sigma_{s/n}$: the set of vectors with at most $s$ nonzero entries per length $n$.
- $\|x\|_0$: the number of nonzero entries of a vector $x$ (called the degree of $x$).
- $\operatorname{spark}(A)$: the smallest number of linearly dependent columns of $A$.
- $\mu(A)$: the coherence of $A$.
II Left and Right STP - Preliminaries
Definition II.1
[10] Assume $A\in\mathcal{M}_{m\times n}$, $B\in\mathcal{M}_{p\times q}$, $t=\operatorname{lcm}(n,p)$; and $x\in\mathbb{R}^r$, $s=\operatorname{lcm}(n,r)$.

- (i) The left matrix-matrix (MM-) STP is defined by
$$A\ltimes B := \left(A\otimes I_{t/n}\right)\left(B\otimes I_{t/p}\right). \qquad (9)$$
- (ii) The right MM-STP is defined by
$$A\rtimes B := \left(I_{t/n}\otimes A\right)\left(I_{t/p}\otimes B\right). \qquad (10)$$
- (iii) The left matrix-vector (MV-) STP is defined by
$$A\vec{\ltimes}\, x := \left(A\otimes I_{s/n}\right)\left(x\otimes \mathbf{1}_{s/r}\right). \qquad (11)$$
- (iv) The right MV-STP is defined by
$$A\vec{\rtimes}\, x := \left(I_{s/n}\otimes A\right)\left(\mathbf{1}_{s/r}\otimes x\right). \qquad (12)$$
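A minimal numerical sketch of these products, assuming the Kronecker-product formulas reconstructed in (9)-(12); the helper names are ours.

```python
import numpy as np
from math import lcm

def left_mm_stp(A, B):
    """Left MM-STP (9): (A ⊗ I_{t/n})(B ⊗ I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

def right_mm_stp(A, B):
    """Right MM-STP (10): identity factors placed on the left."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(np.eye(t // n), A) @ np.kron(np.eye(t // p), B)

def left_mv_stp(A, x):
    """Left MV-STP (11): the vector factor is padded by the 1-vector."""
    n, r = A.shape[1], len(x)
    s = lcm(n, r)
    return np.kron(A, np.eye(s // n)) @ np.kron(x, np.ones(s // r))

A = np.arange(6.0).reshape(2, 3)   # 2 x 3
B = np.arange(4.0).reshape(2, 2)   # 2 x 2: no conventional product exists
print(left_mm_stp(A, B).shape)     # (4, 6): t = lcm(3, 2) = 6
print(left_mv_stp(A, np.array([1.0, 2.0])))   # a vector of length 4
```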
Denote the set of finite-dimensional matrices by
$$\mathcal{M} := \bigcup_{m=1}^{\infty}\bigcup_{n=1}^{\infty}\mathcal{M}_{m\times n},$$
and the set of finite-dimensional vectors by
$$\mathcal{V} := \bigcup_{n=1}^{\infty}\mathbb{R}^n.$$
The basic properties of STPs are listed as follows:
Proposition II.2
[10] Let $A, B, C\in\mathcal{M}$, $x, y\in\mathcal{V}$.

- (1) (Consistency)
  - (i) When $n = p$, the MM-STPs degenerate to the conventional MM-product. That is,
  $$A\ltimes B = A\rtimes B = AB. \qquad (13)$$
  - (ii) When $n = r$, the MV-STPs degenerate to the conventional MV-product. That is,
  $$A\vec{\ltimes}\, x = A\vec{\rtimes}\, x = Ax. \qquad (14)$$
- (2) (Associativity)
  - (i) $(A\ltimes B)\ltimes C = A\ltimes(B\ltimes C)$. (15)
  - (ii) $(A\rtimes B)\rtimes C = A\rtimes(B\rtimes C)$. (16)
  - (iii) $(A\ltimes B)\vec{\ltimes}\, x = A\vec{\ltimes}(B\vec{\ltimes}\, x)$. (17)
  - (iv) $(A\rtimes B)\vec{\rtimes}\, x = A\vec{\rtimes}(B\vec{\rtimes}\, x)$. (18)
- (3) (Distributivity)
  - (i) $(A+B)\ltimes C = A\ltimes C + B\ltimes C$, $C\ltimes(A+B) = C\ltimes A + C\ltimes B$. (21)
  - (ii) $(A+B)\rtimes C = A\rtimes C + B\rtimes C$, $C\rtimes(A+B) = C\rtimes A + C\rtimes B$. (24)
  - (iii) $(A+B)\vec{\ltimes}\, x = A\vec{\ltimes}\, x + B\vec{\ltimes}\, x$, $A\vec{\ltimes}(x+y) = A\vec{\ltimes}\, x + A\vec{\ltimes}\, y$. (27)
  - (iv) $(A+B)\vec{\rtimes}\, x = A\vec{\rtimes}\, x + B\vec{\rtimes}\, x$, $A\vec{\rtimes}(x+y) = A\vec{\rtimes}\, x + A\vec{\rtimes}\, y$. (30)

Note that when the addition of two matrices (vectors) appears in the equalities of Proposition II.2, the dimensions of the two summands must be the same.
Definition II.3
- (i) Two matrices $A$ and $B$ are said to be left equivalent, denoted by $A\sim_l B$, if there exist two identity matrices $I_\alpha$ and $I_\beta$ such that
$$A\otimes I_\alpha = B\otimes I_\beta. \qquad (31)$$
The left equivalence class of $A$ is denoted by $\langle A\rangle_l$.
- (ii) Two matrices $A$ and $B$ are said to be right equivalent, denoted by $A\sim_r B$, if there exist two identity matrices $I_\alpha$ and $I_\beta$ such that
$$I_\alpha\otimes A = I_\beta\otimes B. \qquad (32)$$
The right equivalence class of $A$ is denoted by $\langle A\rangle_r$.
- (iii) Two vectors $x$ and $y$ are said to be left equivalent, denoted by $x\sim_l y$, if there exist two 1-vectors $\mathbf{1}_\alpha$ and $\mathbf{1}_\beta$ such that
$$x\otimes \mathbf{1}_\alpha = y\otimes \mathbf{1}_\beta. \qquad (33)$$
The left equivalence class of $x$ is denoted by $\bar{x}_l$.
- (iv) Two vectors $x$ and $y$ are said to be right equivalent, denoted by $x\sim_r y$, if there exist two 1-vectors $\mathbf{1}_\alpha$ and $\mathbf{1}_\beta$ such that
$$\mathbf{1}_\alpha\otimes x = \mathbf{1}_\beta\otimes y. \qquad (34)$$
The right equivalence class of $x$ is denoted by $\bar{x}_r$.
It is easy to verify that the equivalences defined in Definition II.3 are equivalence relations.
Proposition II.4
[11]

- (i) $A\sim_l B$, if and only if there exists a $\Lambda$ such that $A = \Lambda\otimes I_\alpha$ and $B = \Lambda\otimes I_\beta$.
- (ii) $A\sim_r B$, if and only if there exists a $\Lambda$ such that $A = I_\alpha\otimes\Lambda$ and $B = I_\beta\otimes\Lambda$.
- (iii) $x\sim_l y$, if and only if there exists a $z$ such that $x = z\otimes\mathbf{1}_\alpha$ and $y = z\otimes\mathbf{1}_\beta$.
- (iv) $x\sim_r y$, if and only if there exists a $z$ such that $x = \mathbf{1}_\alpha\otimes z$ and $y = \mathbf{1}_\beta\otimes z$.
Definition II.5
- (i) $A\in\mathcal{M}$ is said to be left (right) reducible if there exist an identity matrix $I_s$, $s\geq 2$, and a $B\in\mathcal{M}$, such that
$$A = B\otimes I_s \quad (\text{correspondingly, } A = I_s\otimes B). \qquad (39)$$
Otherwise, it is called left (right) irreducible.
- (ii) $x\in\mathcal{V}$ is said to be left (right) reducible if there exist a 1-vector $\mathbf{1}_s$, $s\geq 2$, and a $y\in\mathcal{V}$, such that
$$x = y\otimes\mathbf{1}_s \quad (\text{correspondingly, } x = \mathbf{1}_s\otimes y). \qquad (40)$$
Otherwise, it is called left (right) irreducible.
Proposition II.6
[11]
- (i) Consider $A\in\mathcal{M}$. There exists a unique left irreducible $A_0$ (correspondingly, right irreducible $A_0$) such that
$$A = A_0\otimes I_s \quad (\text{correspondingly, } A = I_s\otimes A_0). \qquad (43)$$
- (ii) Consider $x\in\mathcal{V}$. There exists a unique left irreducible $x_0$ (correspondingly, right irreducible $x_0$) such that
$$x = x_0\otimes\mathbf{1}_s \quad (\text{correspondingly, } x = \mathbf{1}_s\otimes x_0). \qquad (46)$$
Definition II.7
- (i) Consider $\mathcal{M}$. The left equivalence classes form the left quotient space
$$\Sigma^l := \mathcal{M}/\!\sim_l. \qquad (47)$$
- (ii) Consider $\mathcal{M}$. The right equivalence classes form the right quotient space
$$\Sigma^r := \mathcal{M}/\!\sim_r. \qquad (48)$$
- (iii) Consider $\mathcal{V}$. The left equivalence classes form the left quotient space
$$\Omega^l := \mathcal{V}/\!\sim_l. \qquad (49)$$
- (iv) Consider $\mathcal{V}$. The right equivalence classes form the right quotient space
$$\Omega^r := \mathcal{V}/\!\sim_r. \qquad (50)$$
Remark II.8
In this section we carefully reviewed the left (right) STP, left (right) equivalence, and left (right) quotient space for both matrices and vectors. It is easy to see that the left (STP) system, consisting of the left STP, left equivalence, and left quotient space, is "mirror symmetric" to the right (STP) system, consisting of the right STP, right equivalence, and right quotient space. Up to now, research has concentrated mainly on the left system; in the rest of this paper we also focus on the left system. We claim that all the results obtained for the left system are also available for the right system; unless stated otherwise, the corresponding verifications are straightforward.

Hereafter we assume the default system is the left one. That is, unless stated otherwise,
$$\ltimes := \text{left STP}, \quad \sim \;:=\; \sim_l, \quad \bar{x} := \bar{x}_l, \quad \langle A\rangle := \langle A\rangle_l. \qquad (51)$$
III Signal Space
A signal of finite length $n$ can be considered as a vector in the Euclidean space $\mathbb{R}^n$ [19].
Definition III.1
Two signals $x, y\in\mathcal{V}$ are said to be equivalent if $x\sim y$.
Remark III.2
- (i) According to Proposition II.4, $x\sim_r y$ means there exists a $z$ such that $x = \mathbf{1}_\alpha\otimes z$ and $y = \mathbf{1}_\beta\otimes z$. Hence we have
$$x = (\underbrace{z^T,\dots,z^T}_{\alpha})^T, \quad y = (\underbrace{z^T,\dots,z^T}_{\beta})^T.$$
That is, $x$ and $y$ are obtained from the same signal with different sampling time lengths. This fact makes the physical meaning of the equivalence clear.
If we consider $x$ and $y$ in the frequency domain, they are exactly the same.
- (ii) Similarly, $x\sim_l y$ means there exists a $z$ such that $x = z\otimes\mathbf{1}_\alpha$ and $y = z\otimes\mathbf{1}_\beta$. Assume $\alpha < \beta$; then each sample of $z$ is repeated $\alpha$ times in $x$ and $\beta$ times in $y$. It can be considered that both $x$ and $y$ are obtained from the same signal with different sampling frequencies (with certain approximation).

Hence, from the finite signal point of view, the right (or left) equivalence is physically meaningful.
It is clear that under the left equivalence the signal space becomes
$$\Omega := \Omega^l = \mathcal{V}/\!\sim_l,$$
and a signal becomes an equivalence class $\bar{x}\in\Omega$. Let $x_0\in\bar{x}$ be irreducible; then $x_0$ is called the atom signal, and we define
$$\bar{x} = \{x_0\otimes\mathbf{1}_k\;|\;k\in\mathbb{Z}_+\}. \qquad (52)$$
III-A Vector Space Structure on Signal Space
Definition III.3
[11] Let $x\in\mathbb{R}^m$, $y\in\mathbb{R}^n$, and $t=\operatorname{lcm}(m,n)$. Then the left (right) semi-tensor addition (STA) of $x$ and $y$ is defined by
$$x\boxplus y := \left(x\otimes\mathbf{1}_{t/m}\right) + \left(y\otimes\mathbf{1}_{t/n}\right) \quad \left(x\boxplus_r y := \left(\mathbf{1}_{t/m}\otimes x\right) + \left(\mathbf{1}_{t/n}\otimes y\right)\right). \qquad (55)$$
The physical meaning of these additions is very clear: assume $x$ is a signal with period $t_x$ and $y$ is a signal with period $t_y$; then the addition of these two signals, $x\boxplus_r y$, is a signal with period $\operatorname{lcm}(t_x,t_y)$. Similarly, assume $x$ is a signal with sampling frequency $f_x$ and $y$ is a signal with sampling frequency $f_y$; then the addition of these two signals, $x\boxplus y$, is a signal with sampling frequency $\operatorname{lcm}(f_x,f_y)$.

Figure 1 illustrates this: the black and red signals have different periods; their addition $x\boxplus_r y$ (blue line) is a signal whose period is the least common multiple of the two periods, while their addition $x\boxplus y$ (green line) is a signal on the refined common sampling grid.
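A small sketch of the two additions, assuming formula (55); the period interpretation can be checked directly:

```python
import numpy as np
from math import lcm

def left_sta(x, y):
    """Left STA (55): x ⊗ 1_{t/m} + y ⊗ 1_{t/n}; each sample is held."""
    t = lcm(len(x), len(y))
    return np.kron(x, np.ones(t // len(x))) + np.kron(y, np.ones(t // len(y)))

def right_sta(x, y):
    """Right STA: 1 ⊗ x + 1 ⊗ y; the whole record is repeated."""
    t = lcm(len(x), len(y))
    return np.kron(np.ones(t // len(x)), x) + np.kron(np.ones(t // len(y)), y)

x = np.array([1.0, -1.0])        # one period of a length-2 signal
y = np.array([1.0, 0.0, -1.0])   # one period of a length-3 signal
print(right_sta(x, y))           # length lcm(2, 3) = 6: one common period
print(left_sta(x, y))            # common refinement of the two sample grids
```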
Proposition III.4
Using Proposition III.4, the following results are obvious.
Proposition III.5
[11]
- (i) $\boxplus$ is properly defined on $\Omega$ by
$$\bar{x}\boxplus\bar{y} := \overline{x\boxplus y}. \qquad (57)$$
- (ii) Define the scalar product on $\Omega$ by
$$a\cdot\bar{x} := \overline{ax}, \quad a\in\mathbb{R}. \qquad (58)$$
With $\boxplus$ and the scalar product, $\Omega$ is a vector space.
III-B Topological Structure on $\mathcal{V}$
Definition III.6
Let $x\in\mathbb{R}^m$, $y\in\mathbb{R}^n$, and $t=\operatorname{lcm}(m,n)$.

- (i) The inner product of $x$ and $y$ is defined by
$$\langle x, y\rangle_{\mathcal{V}} := \frac{1}{t}\left\langle x\otimes\mathbf{1}_{t/m},\; y\otimes\mathbf{1}_{t/n}\right\rangle, \qquad (59)$$
where $\langle\cdot,\cdot\rangle$ is the conventional inner product on $\mathbb{R}^t$.
- (ii) The norm of $x$ is defined by
$$\|x\|_{\mathcal{V}} := \sqrt{\langle x, x\rangle_{\mathcal{V}}}. \qquad (60)$$
- (iii) The distance between $x$ and $y$ is defined by
$$d_{\mathcal{V}}(x, y) := \|x\boxplus(-y)\|_{\mathcal{V}}. \qquad (61)$$

Using this inner product, the angle $\theta$ between two elements $x, y\in\mathcal{V}$ is defined by
$$\cos\theta = \frac{\langle x, y\rangle_{\mathcal{V}}}{\|x\|_{\mathcal{V}}\,\|y\|_{\mathcal{V}}}. \qquad (62)$$
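A numerical sketch of (59)-(62), assuming the $1/t$ normalization shown above (which makes the quantities independent of the chosen representatives):

```python
import numpy as np
from math import lcm

def v_inner(x, y):
    """Inner product (59): <x ⊗ 1_{t/m}, y ⊗ 1_{t/n}> / t, t = lcm(m, n)."""
    t = lcm(len(x), len(y))
    xt = np.kron(x, np.ones(t // len(x)))
    yt = np.kron(y, np.ones(t // len(y)))
    return xt @ yt / t

def v_norm(x):
    return np.sqrt(v_inner(x, x))            # norm (60)

x = np.array([1.0, 2.0])
y = np.array([1.0, 2.0, 3.0])
print(v_inner(x, np.kron(x, np.ones(3))))    # equals v_inner(x, x):
                                             # equivalent representatives agree
print(v_inner(x, y) / (v_norm(x) * v_norm(y)))   # cos of the angle (62)
```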
Using this distance, the distance-deduced topology, denoted by $\mathcal{T}_d$, can be obtained. Apart from $\mathcal{T}_d$, there is another topology on $\mathcal{V}$. Naturally, each $\mathbb{R}^n$ can be considered as a component (i.e., a clopen set) of $\mathcal{V}$, and within each $\mathbb{R}^n$ the conventional Euclidean topology is used. Overall, such a topology is called the natural topology, denoted by $\mathcal{T}_n$. $(\mathcal{V},\mathcal{T}_n)$ is disconnected; precisely speaking, its connected components are the subspaces $\mathbb{R}^n$.
It is easy to verify the following result.
Proposition III.7
[11] Consider $(\mathcal{V}, d_{\mathcal{V}})$. Let $x, y\in\mathcal{V}$. Then $d_{\mathcal{V}}(x, y) = 0$ if and only if $x\sim y$.
Proposition III.8
According to Proposition III.8, the inner product, norm, and distance defined by Definition III.6 for $\mathcal{V}$ can be extended to $\Omega$. Hence the following definitions for $\Omega$ are all properly defined.
Definition III.9
- (i) The inner product of $\bar{x}$ and $\bar{y}$ is defined by
$$\langle\bar{x},\bar{y}\rangle := \langle x, y\rangle_{\mathcal{V}}. \qquad (64)$$
- (ii) The norm of $\bar{x}$ is defined by
$$\|\bar{x}\| := \|x\|_{\mathcal{V}}. \qquad (65)$$
- (iii) The distance between $\bar{x}$ and $\bar{y}$ is defined by
$$d(\bar{x},\bar{y}) := d_{\mathcal{V}}(x, y). \qquad (66)$$
With the topology deduced by (66), $\Omega$ is a topological space. Because of Proposition III.7, this distance-deduced topology is homeomorphic to the quotient topology. That is,
In the following proposition we summarize some known basic properties of .
Proposition III.10
Since $\Omega$ is an inner product space, the following geometric structure is obvious.
Corollary III.11
- (i) For any two points $\bar{x},\bar{y}\in\Omega$, the angle between them (the same as the angle between $x$ and $y$), denoted by $\theta$, is determined by
$$\cos\theta = \frac{\langle\bar{x},\bar{y}\rangle}{\|\bar{x}\|\,\|\bar{y}\|}. \qquad (67)$$
- (ii) $\bar{x}$ and $\bar{y}$ (as well as $x$ and $y$) are said to be parallel (orthogonal) if $\cos\theta = \pm 1$ (correspondingly, $\cos\theta = 0$).

Let $\bar{x}\in\Omega$ with $x\in\mathbb{R}^p$, and let $q\in\mathbb{Z}_+$. The projection of $\bar{x}$ onto $\mathbb{R}^q$, denoted by $\pi_q(\bar{x})$, is defined by
$$\pi_q(\bar{x}) := \operatorname*{argmin}_{y\in\mathbb{R}^q} d(\bar{x},\bar{y}). \qquad (68)$$
Proposition III.12
[12]
- (i) The projection of $\bar{x}$ onto $\mathbb{R}^q$ is obtained by block averaging: let $t=\operatorname{lcm}(p,q)$; then the $i$-th entry of $\pi_q(\bar{x})$ is the average of the $i$-th consecutive block of length $t/q$ of $x\otimes\mathbf{1}_{t/p}$, $i\in[1,q]$. (69)
- (ii) $\bar{x}\boxplus\left(-\pi_q(\bar{x})\right)$ is perpendicular to $\mathbb{R}^q$. That is, the angle $\theta$ between them satisfies
$$\cos\theta = 0. \qquad (70)$$
- (iii) For any $y\in\mathbb{R}^q$, $d(\bar{x},\pi_q(\bar{x}))\leq d(\bar{x},\bar{y})$.

(See Figure 2 for the projection.)
Remark III.13
The projection can be extended naturally to a projection between subspaces of $\Omega$.
III-C Linear Mapping on
Consider the set of matrices $\mathcal{M}$ and the equivalence $\sim_l$. Similarly to vectors, we define reducibility.
Definition III.14
Let $A\in\mathcal{M}$. $A$ is said to be (left) reducible if there exist an identity matrix $I_s$, $s\geq 2$, and a matrix $B$ such that
$$A = B\otimes I_s. \qquad (71)$$
Otherwise, it is irreducible.

Using Proposition II.4, one easily sees that in $\langle A\rangle$ there exists a unique irreducible $A_0$ such that
$$\langle A\rangle = \{A_0\otimes I_s\;|\;s\in\mathbb{Z}_+\}.$$
Next, consider the linear mapping on . We have the following result:
Proposition III.15
[11] Assume $A\sim_l B$ and $x\sim_l y$. Then the MV-STP is consistent with both equivalences. That is,
$$A\vec{\ltimes}\, x \;\sim_l\; B\vec{\ltimes}\, y. \qquad (72)$$
Recalling the quotient space of matrices $\Sigma^l$ and using Proposition III.15, the following result is obvious.
Corollary III.16
- (i) A linear mapping on the signal space $\Omega$ is a mapping $\langle A\rangle: \Omega\rightarrow\Omega$, which can be expressed by
$$\langle A\rangle\vec{\ltimes}\,\bar{x} := \overline{A\vec{\ltimes}\, x}. \qquad (73)$$
- (ii) For two linear mappings $\langle A\rangle$ and $\langle B\rangle$,
$$\langle A\rangle\vec{\ltimes}\left(\langle B\rangle\vec{\ltimes}\,\bar{x}\right) = \left(\langle A\rangle\ltimes\langle B\rangle\right)\vec{\ltimes}\,\bar{x}. \qquad (74)$$
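The consistency of Proposition III.15 and Corollary III.16 can be checked numerically; a small sketch under the MV-STP formula (11):

```python
import numpy as np
from math import lcm

def left_mv_stp(A, x):
    n, r = A.shape[1], len(x)
    s = lcm(n, r)
    return np.kron(A, np.eye(s // n)) @ np.kron(x, np.ones(s // r))

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 3))
x = rng.standard_normal(3)

y1 = left_mv_stp(A, x)                          # representatives A, x
y2 = left_mv_stp(np.kron(A, np.eye(2)),         # equivalent representatives:
                 np.kron(x, np.ones(3)))        # A ⊗ I_2 ~ A, x ⊗ 1_3 ~ x
k = len(y2) // len(y1)
print(np.allclose(y2, np.kron(y1, np.ones(k)))) # True: outputs are equivalent
```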
IV Structure of Signal Space
IV-A Orthonormal Basis of Signal Space
First, we search for a basis of $\Omega$.

It was claimed by [37] that
$$B := \{\bar{\delta}_n^i\;|\;i\in[1,n],\;n\in\mathbb{Z}_+\} \qquad (75)$$
is a basis of $\Omega$. Unfortunately, $B$ is only a generating set of $\Omega$, which was proved by [37]; it contains linearly dependent terms. Say, since $\delta_1^1\otimes\mathbf{1}_2 = \delta_2^1 + \delta_2^2$, we have
$$\bar{\delta}_1^1 = \bar{\delta}_2^1\boxplus\bar{\delta}_2^2.$$
Searching for a basis of $\Omega$ is a fundamental task for understanding and using the signal space.

Denote
$$B_n := \{\bar{\delta}_n^i\;|\;i\in[1,n]\}, \quad n\in\mathbb{Z}_+.$$
A number $s > 1$ is said to be a multi-fold divisor of $n$ if $s^2\mid n$.

Note that since $B = \bigcup_{n=1}^{\infty}B_n$, we can gradually choose subsets $E_n\subset B_n$ and denote
$$U_n := \bigcup_{k=1}^{n}E_k,$$
such that the elements of $U_n$ are linearly independent, and
$$\operatorname{span}(U_n) = \operatorname{span}\left(\bigcup_{k=1}^{n}B_k\right). \qquad (76)$$
That is, $U_n$ is a basis of $\operatorname{span}\left(\bigcup_{k=1}^{n}B_k\right)$.

Letting $n\rightarrow\infty$, the basis of $\Omega$ is obtained.

To describe this process clearly, the following theorem shows the relationship between $U_{n-1}$ and $B_n$.
Theorem IV.1
The condition
$$(77)$$
holds if and only if $n$ has no multi-fold divisor.
Proof.

Note that (77) tells us how to choose the elements from $B_n$ which are linearly independent of the elements in $U_{n-1}$.

(Sufficiency:) Assume $n$ has no multi-fold divisor. We have to show that the elements of
$$(78)$$
are linearly independent, which implies $E_n = B_n$.

Suppose some element is linearly dependent on the others. Then there exists an expression
$$(79)$$
Since the representatives may be taken irreducible, going back to the Euclidean space we should have
$$(80)$$
It is obvious that the coefficients do not all vanish, because the elements of $U_{n-1}$ are linearly independent. Comparing the components of both sides of (80), to meet (80) we need
$$(81)$$
We claim that there exists at least one index such that
$$(82)$$
Since $n$ has no multi-fold divisor, the relevant repetition factors are coprime. (Otherwise, $n$ would have a multi-fold divisor.) Then, as the index runs over its range, it takes all the required values; that is, there exists an index violating (81). Hence (81) is not satisfied, and so (80) is not true. Therefore, (78) is a linearly independent set.

(Necessity:) Assume $n$ has a multi-fold divisor. We have to show that (78) is a linearly dependent set. We can, w.l.o.g., assume the multi-fold divisor is a prime number. Then a direct computation produces a nontrivial linear combination among the elements of (78), which implies that (78) is a linearly dependent set.
From the proof of Theorem IV.1, one easily sees how to choose the elements of $B_n$ to form $E_n$, and hence the basis of $\Omega$.
Corollary IV.2
- (i) Assume $n$ has no multi-fold divisor; then
$$(83)$$
- (ii) Assume $n$ has a multi-fold divisor. Express $n = n_0 k^2$, where $n_0$ has no multi-fold divisor. (Such an expression is unique.) Then
$$(84)$$
Example IV.3
-
(i)
Consider . Then and , and
Now we calculate . Hence,
Note that
and
Hence, removing the two dependent elements is reasonable.
-
(ii)
Consider . Then and , and
Calculate . Hence,
To see that the remaining elements have to be deleted, note the dependencies: as the index runs over its range, one easily sees which elements have to be removed.
Now we are able to construct a basis of $\Omega$. Using Corollary IV.2, the basis elements can be obtained; a few leading elements are listed in (91).

Since $\Omega$ is an inner product space, using the Gram-Schmidt orthonormalization algorithm we can get an orthonormal basis, as listed in (98).
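As noted in Remark IV.4 below, the orthonormalization can be carried out numerically. A minimal sketch, assuming the inner product of Definition III.6 and taking $\{\delta_n^i\}$ for $n\le 3$ as the generating candidates; dependent elements are simply dropped:

```python
import numpy as np
from math import lcm
from functools import reduce

def v_inner(x, y):
    t = lcm(len(x), len(y))
    return np.kron(x, np.ones(t // len(x))) @ np.kron(y, np.ones(t // len(y))) / t

def gram_schmidt(vectors):
    """Gram-Schmidt w.r.t. <.,.>_V, dropping (near-)dependent candidates."""
    t = reduce(lcm, (len(v) for v in vectors))
    basis = []
    for v in vectors:
        v = np.kron(v, np.ones(t // len(v)))    # lift to a common dimension
        for b in basis:
            v = v - v_inner(v, b) * b           # b is already V-normalized
        nrm = np.sqrt(v_inner(v, v))
        if nrm > 1e-10:
            basis.append(v / nrm)
    return basis

cands = [np.eye(n)[i] for n in (1, 2, 3) for i in range(n)]   # delta_n^i, n <= 3
print(len(gram_schmidt(cands)))   # 4: e.g. bar-delta_1^1 = bar-delta_2^1 + bar-delta_2^2
```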
Remark IV.4
- (i) The orthonormal basis can easily be obtained, up to finitely many terms, numerically via computer.
- (ii) Using the orthonormal basis, $\Omega$ can be embedded isometrically into the Hilbert space $\ell^2$ [34].
- (iii) Although the discussion of this subsection concerns the left quotient space, since the elements in the basis are neither left reducible nor right reducible, it is easy to see that the basis obtained is also a basis for the right quotient space.
IV-B Norms of Signal Space
Consider the set of sequences of real numbers, denoted by $\mathbb{R}^{\infty}$, which is a vector space. We can define a set of norms on it as follows.

- $\ell^p$ norm ($1\leq p<\infty$):
$$\|x\|_p := \left(\sum_{i=1}^{\infty}|x_i|^p\right)^{1/p}. \qquad (99)$$
Then
$$\ell^p := \{x\in\mathbb{R}^{\infty}\;|\;\|x\|_p < \infty\}. \qquad (100)$$
- $\ell^{\infty}$ norm:
$$\|x\|_{\infty} := \sup_{i}|x_i|. \qquad (101)$$
Then
$$\ell^{\infty} := \{x\in\mathbb{R}^{\infty}\;|\;\|x\|_{\infty} < \infty\}. \qquad (102)$$
Then the following facts are well known [28].

Proposition IV.5

- (i) $\mathbb{R}^{\infty}$ is a Fréchet space. (On a vector space $X$, a mapping $p: X\rightarrow\mathbb{R}$ is called a pseudo-norm if (a) $p(x)\geq 0$, and $p(x)=0$ if and only if $x=0$; (b) $p(x+y)\leq p(x)+p(y)$; (c) $p(-x)=p(x)$ and $\lim_{a\rightarrow 0}p(ax)=0$. A complete pseudo-normed space is called a Fréchet space.)
- (ii) $\ell^p$, $1\leq p\leq\infty$, are Banach spaces.
- (iii) $\ell^2$ is a Hilbert space.
Now go back to the set of signals. A signal $x = (x_1,\dots,x_n)\in\mathbb{R}^n$ is easily embedded into $\mathbb{R}^{\infty}$ as
$$x\mapsto (x_1,\dots,x_n,0,0,\dots).$$
In this way, $\mathbb{R}^n\subset\ell^p$, $\forall p$. This is commonly used in the signal processing community. But if we consider infinite-dimensional signals or the infinite union of finite-dimensional signals [19], we must be very careful.

Note that $\mathbb{R}^{\infty}$ is a vector space, but it is not yet a topological space, and all $\ell^p$ are its subspaces. Using the distance deduced by the norm (pseudo-norm), all of them become topological spaces. The topologies deduced by the distances are denoted by $\mathcal{T}_p$.
Consider $\mathcal{V}$. As aforementioned, on $\mathcal{V}$ there are two commonly used topologies: the natural topology ($\mathcal{T}_n$) and the distance-deduced topology ($\mathcal{T}_d$).

Consider the natural topology first. Under this topology, any two points $x\in\mathbb{R}^m$, $y\in\mathbb{R}^n$, $m\neq n$, are separable (under $\mathcal{T}_n$, in the Hausdorff sense). Hence,
$$\mathcal{V}\subset\ell^p, \quad \forall p, \qquad (103)$$
because each element in $\mathcal{V}$ has only finitely many nonzero terms.

The mapping defined by (103) is one-to-one, hence it can be considered as an embedding, and then we can pose the subspace topology of $\ell^p$ on $\mathcal{V}$. For instance, let $p=2$. Then $\mathcal{V}$ is an inner product space. But it is not a Hilbert space, because it is not complete.
All the spaces obtained in this way are topological spaces. Some properties follow from the definitions immediately.
Proposition IV.6
- (i) $(\mathcal{V},\mathcal{T}_n)$ is homeomorphic to neither $(\mathcal{V},\mathcal{T}_d)$ nor $(\mathcal{V},\mathcal{T}_p)$.
- (ii) If $p\neq q$, then $(\mathcal{V},\mathcal{T}_p)$ is not homeomorphic to $(\mathcal{V},\mathcal{T}_q)$.
- (iii) Consider $\mathbb{R}^n\subset\mathcal{V}$, which is considered as the space of signals with fixed length $n$. Assume it has the subspace topology of $\mathcal{V}$; then
$$\mathcal{T}_n|_{\mathbb{R}^n} = \mathcal{T}_d|_{\mathbb{R}^n} = \mathcal{T}_p|_{\mathbb{R}^n}. \qquad (104)$$
Remark IV.7
(104) shows that when the signals have a fixed dimension, all these vector space topologies are equivalent. Only when dimension-varying or infinite-dimensional signals are considered do the different topologies become meaningful.
Next, we consider the distance-deduced topology $\mathcal{T}_d$.
Assume $\{e_1, e_2, \dots\}$ is an orthonormal basis of $\Omega$. Consider $\bar{x}\in\Omega$. Then
$$\bar{x} = \sum_{i}c_i e_i,$$
where
$$c_i = \langle\bar{x}, e_i\rangle.$$
Define $\varphi: \Omega\rightarrow\mathbb{R}^{\infty}$ as
$$\varphi(\bar{x}) := (c_1, c_2, \dots). \qquad (105)$$

Now consider $\varphi(\Omega)$. It is obvious that
$$\varphi(\Omega)\subset\ell^p, \quad \forall p, \qquad (106)$$
because each element of $\Omega$ is a finite linear combination of basis elements.

Since the $\varphi$ in (106) is one-to-one, $\varphi$ can be considered as an embedding. Hence, we can pose the subspace topology of $\ell^p$ on $\Omega$. For instance, assume $p=2$; then $\Omega$ becomes an inner product space. Note that the spaces
are all topological spaces. According to the definitions, the following proposition is easily verifiable.
Proposition IV.8
- (i) $(\Omega,\mathcal{T}_n)$ is homeomorphic to neither $(\Omega,\mathcal{T}_d)$ nor $(\Omega,\mathcal{T}_p)$.
- (ii) If $p\neq q$, then $(\Omega,\mathcal{T}_p)$ is not homeomorphic to $(\Omega,\mathcal{T}_q)$.
- (iii) Consider $\mathbb{R}^n$ with the inherited (subspace) topology of $\Omega$. Then
$$\mathcal{T}_n|_{\mathbb{R}^n} = \mathcal{T}_d|_{\mathbb{R}^n} = \mathcal{T}_p|_{\mathbb{R}^n}. \qquad (107)$$
Remark IV.9
(107) shows that when signals of fixed dimension are considered, all these topologies coincide. Only when dimension-varying signals are investigated may the different topologies cause different results.
V Dimension-free STP-CS
According to Corollary III.16, if all signals are expressed as elements of the signal space, the left STP-CS can be formally expressed as
$$\bar{y} = \langle A\rangle\vec{\ltimes}\,\bar{x}, \qquad (108)$$
where $\bar{x}\in\Omega^l$ is the original signal, $\bar{y}$ is the sampled data, and $\langle A\rangle$ is the sensing matrix.

We call (108) a dimension-free STP-CS, because it can be used to treat signals of arbitrary dimensions.

Correspondingly, if we take $\Omega^r$ as the signal space, then the right STP-CS can be expressed as
$$\bar{y} = \langle A\rangle\vec{\rtimes}\,\bar{x}, \qquad (109)$$
where $\bar{x}\in\Omega^r$ is the original signal, $\bar{y}$ is the sampled data, and $\langle A\rangle$ is the sensing matrix.

Both expressions (108) and (109) are dimension-free; that is, there is no dimension restriction on the original or compressed signal. But in practical use a dimension-dependent expression is more convenient. Let us consider a particular (matrix-vector) form of STP-CS. We can, w.l.o.g., assume that the sensing matrix $A\in\mathcal{M}_{m\times n}$ is left irreducible; otherwise, we can use the irreducible $A_0$ to replace $A$.
Proposition V.1
Correspondingly, using right system, we have
Proposition V.2
In practical use, the conventional matrix expression is necessary.
- (i) Case 1: Assume $x\in\mathbb{R}^q$ with $n\mid q$, say $q = nk$.

This case is commonly assumed in the current literature.

We consider the right STP-CS first; then we have
$$y = A\vec{\rtimes}\, x = (I_k\otimes A)\,x. \qquad (113)$$
Proposition V.3
Consider (113). Then
$$\operatorname{spark}(I_k\otimes A) = \operatorname{spark}(A). \qquad (116)$$

Proof. Since
$$I_k\otimes A = \operatorname{diag}(\underbrace{A, A, \dots, A}_{k}), \qquad (117)$$
it is clear that the smallest set of linearly dependent columns can only be found within one diagonal block $A$. This fact leads to (116).
Using Proposition I.1, we have the following result.
Corollary V.4
Consider (113). If $s < \operatorname{spark}(A)/2$, then for each $y$ there is at most one solution $x\in\Sigma_s$.
In fact, we can get a better result for STP-CS.

Let $x = (x_1^T, x_2^T, \dots, x_k^T)^T$, where $x_j\in\mathbb{R}^n$, $j\in[1,k]$. Define
$$\Sigma_{s/n} := \{x\;|\;\|x_j\|_0\leq s,\; j\in[1,k]\}. \qquad (118)$$

Proposition V.5

Consider (113). If $s < \operatorname{spark}(A)/2$, then for each $y$ there is at most one solution $x\in\Sigma_{s/n}$.
Proof. Using (117), we can decompose the system into
$$y_j = A x_j, \quad j\in[1,k]. \qquad (119)$$
Applying Proposition I.1 to each sub-equation, the conclusion follows.
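A numerical sketch of the decomposition (117)-(119): the right model $y = (I_k\otimes A)x$ splits into $k$ independent sub-systems on consecutive length-$n$ blocks of $x$.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, k = 4, 8, 3
A = rng.standard_normal((m, n))
x = rng.standard_normal(n * k)

y = np.kron(np.eye(k), A) @ x                            # block-diagonal system
blocks = [A @ x[j * n:(j + 1) * n] for j in range(k)]    # y_j = A x_j, cf. (119)
print(np.allclose(y, np.concatenate(blocks)))            # True
```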
Next, we consider the left STP-CS; then we have
$$y = A\vec{\ltimes}\, x = (A\otimes I_k)\,x. \qquad (110)$$

Lemma V.6

[10] Let $A\in\mathcal{M}_{m\times n}$, and let $W_{[p,q]}$ denote the swap matrix. Then
$$A\otimes I_k = W_{[k,m]}\,(I_k\otimes A)\,W_{[n,k]}. \qquad (120)$$

Define
$$\tilde{x} := W_{[n,k]}\,x, \quad \tilde{y} := W_{[m,k]}\,y.$$
It is well known that $\tilde{x}$ is obtained from $x$ by an entry permutation. Then, similarly to Proposition V.5, we have the following result.

Proposition V.7

Consider (110). If $s < \operatorname{spark}(A)/2$, then for each $y$ there is at most one solution $x$ with $\tilde{x}\in\Sigma_{s/n}$.

Proof. Using Lemma V.6, we have
$$y = W_{[k,m]}\,(I_k\otimes A)\,W_{[n,k]}\,x. \qquad (125)$$
(125) is equivalent to
$$\tilde{y} = (I_k\otimes A)\,\tilde{x}. \qquad (126)$$
The conclusion follows from Proposition V.5 immediately.
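A sketch of this reduction: the left model $y = (A\otimes I_k)x$ becomes the block model after the swap-matrix (interleaving) permutations of $x$ and $y$, here realized by reshapes.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 4, 8, 3
A = rng.standard_normal((m, n))
x = rng.standard_normal(n * k)

y = np.kron(A, np.eye(k)) @ x                 # left STP-CS model (110)
x_t = x.reshape(n, k).T.reshape(-1)           # x~ = W_{[n,k]} x (interleaving)
y_t = np.kron(np.eye(k), A) @ x_t             # right (block) model (126)
print(np.allclose(y, y_t.reshape(k, m).T.reshape(-1)))   # undo y~ = W_{[m,k]} y
```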
- (ii) Case 2: Assume $x\in\mathbb{R}^q$ with $n\nmid q$.

We conclude that Case 2 can be converted into Case 1, so the results for Case 1 are also available for Case 2.
Definition V.8
Let $\langle A\rangle\in\Sigma^l$ (or $\langle A\rangle\in\Sigma^r$). Then
$$\operatorname{spark}(\langle A\rangle) := \operatorname{spark}(A_0), \qquad (129)$$
where $A_0\in\langle A\rangle$ is left irreducible ($A_0\in\langle A\rangle$ is right irreducible).
Next, we consider some further properties.
Definition V.9
(Coherence)

Assume $\langle A\rangle\in\Sigma^l$, where $A_0\in\langle A\rangle$ is left irreducible (or $\langle A\rangle\in\Sigma^r$, where $A_0\in\langle A\rangle$ is right irreducible). Then the coherence of $\langle A\rangle$ is defined as
$$\mu(\langle A\rangle) := \mu(A_0). \qquad (130)$$
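The coherence is representative-independent within a class (cf. Remark V.11 below); a quick numerical check, with `coherence` a direct implementation of the usual definition:

```python
import numpy as np

def coherence(A):
    """mu(A): max |<a_i, a_j>| / (||a_i|| ||a_j||) over distinct columns."""
    G = A.T @ A
    d = np.sqrt(np.diag(G))
    C = np.abs(G) / np.outer(d, d)
    np.fill_diagonal(C, 0.0)
    return C.max()

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 8))
print(np.isclose(coherence(A), coherence(np.kron(A, np.eye(3)))))   # True
```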
Finally, we consider the RIP. Denote by
$$\bar{\Sigma}_s := \{\bar{x}\;|\;x\in\Sigma_s\} \qquad (131)$$
the corresponding set of classes. Then we can define the dimension-free RIP as follows.

Definition V.10

$\langle A\rangle$ with $A\in\mathcal{M}_{m\times n}$ is said to satisfy the $(s,\delta_s)$-RIP if there exists $\delta_s\in(0,1)$ such that
$$(1-\delta_s)\|\bar{x}\|^2 \leq \|\langle A\rangle\vec{\ltimes}\,\bar{x}\|^2 \leq (1+\delta_s)\|\bar{x}\|^2, \quad \forall\bar{x}\in\bar{\Sigma}_s. \qquad (132)$$
Remark V.11
- (i) It is easy to verify that for any $A'\in\langle A\rangle$, the coherence of $A'$ is the same. Hence Definition V.9 is reasonable; in fact, $A_0$ can be replaced by any $A'\in\langle A\rangle$.
- (ii) The inner product and norm can also be replaced by $\langle\cdot,\cdot\rangle_{\mathcal{V}}$ and $\|\cdot\|_{\mathcal{V}}$, which are related to the topological structure of $\Omega$.
- (iii) Definition V.10 seems more restrictive than the original definition for the fixed-dimensional case. In fact, it is not. Note that if $x\in\mathbb{R}^{nk}$, then $\langle A\rangle\vec{\ltimes}\,\bar{x} = \overline{(A\otimes I_k)x}$. Hence if $A$ satisfies the $(s,\delta_s)$-RIP as defined in Definition I.2, then $\langle A\rangle$ satisfies the $(s,\delta_s)$-RIP as defined in Definition V.10. Hence, the fixed-dimension $(s,\delta_s)$-RIP implies the dimension-free $(s,\delta_s)$-RIP, which ensures precise reconstruction for many more signals.
VI BIBD-Based Sensing Matrix
VI-A BIBD-Based Construction
This subsection gives a brief review of the construction of sensing matrices based on the balanced incomplete block design (BIBD), proposed in [25].
Definition VI.1
Consider a matrix $H = (h_{ij})$.

- (i) $H$ is called a sign matrix if $h_{ij}\in\{1,-1\}$.
- (ii) $H$ is called a Boolean matrix if $h_{ij}\in\{0,1\}$.

The column degree of $H$ is the number of nonzero entries in each column, denoted by $d_c(H)$; the row degree of $H$ is the number of nonzero entries in each row, denoted by $d_r(H)$.
Definition VI.2
[25] Let $X = \{x_1,\dots,x_v\}$ and $\mathcal{B} = \{B_1,\dots,B_b\}$, i.e., each block $B_j\subset X$, $j\in[1,b]$.

- (i) $H = (h_{ij})\in\mathcal{M}_{v\times b}$ is defined by
$$h_{ij} = \begin{cases}1, & x_i\in B_j,\\ 0, & \text{otherwise};\end{cases}$$
$H$ is called an incidence matrix.
- (ii) $(X,\mathcal{B})$ is said to be a BIBD if its incidence matrix satisfies the following conditions:
  - Each element appears in exactly $r$ blocks.
  - Each block contains exactly $k$ elements, i.e., $|B_j| = k$, $j\in[1,b]$.
  - Each pair of elements of $X$ appears in exactly $\lambda$ blocks.

Precisely, $(X,\mathcal{B})$ is said to be a $(v,k,\lambda)$-BIBD.
Assume a $(v,k,\lambda)$-BIBD is given. [25] proposed a way to construct a deterministic sensing matrix from it as follows.

Its incidence matrix can be expressed as
$$H = [h_1, h_2, \dots, h_b]. \qquad (133)$$

The sensing matrix is constructed through two expansions.
- Vertical Expansion:

Algorithm VI.3

Assume an incidence matrix $H$ is given. (A code sketch of one reading of these steps follows the list below.)

- Step 1: Keep the first column unchanged. Start from the second column: keep the first 1 of column 2 unchanged. If the second 1 meets a 0 at the same row of the first column, keep this 1 unchanged; otherwise, move all remaining elements (including this 1) down until its same-row element in the first column is 0. Then do the same for the third 1, and keep doing so for all the other 1s, one by one, in order.
- Step 2: For the third column, allow it to have one 1 which lies in the same row as a 1 of the first or the second column; otherwise, move this 1 and all the elements below it down until the above requirement is satisfied. This keeps the inner product of the first (or second) column with the third one no greater than 1.
- Step $k-1$: For the $k$-th column, similarly to the third column, as long as the inner product of the $k$-th column with one of the first $k-1$ columns is greater than 1, move the corresponding 1 together with all the elements below it down, until the inner product requirement is satisfied.
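A sketch of one plausible reading of Algorithm VI.3 (the step descriptions leave some freedom): columns are processed left to right, and a column's 1s, together with everything below them, are pushed down until the column shares at most one 1-row with every earlier column.

```python
import numpy as np

def vertical_expand(H):
    cols = []                                   # per column: set of 1-rows
    for j in range(H.shape[1]):
        placed, shift = [], 0
        for r in np.flatnonzero(H[:, j]):
            while any(len(set(placed + [r + shift]) & prev) > 1 for prev in cols):
                shift += 1                      # push this 1 (and the rest) down
            placed.append(r + shift)
        cols.append(set(placed))
    depth = max(max(c) for c in cols) + 1
    V = np.zeros((depth, H.shape[1]), dtype=int)
    for j, c in enumerate(cols):
        V[sorted(c), j] = 1
    return V

H = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
V = vertical_expand(H)
G = V.T @ V
print((G - np.diag(np.diag(G)) <= 1).all())     # pairwise inner products <= 1
```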
We give a simple example to depict this algorithm.
Example VI.4
- Horizontal Expansion:

Definition VI.5

Let $S$ be a sign matrix, and let $D$ be a nonsingular diagonal matrix, say $D = \operatorname{diag}(d_1,\dots,d_r)$, where $d_i\neq 0$, $i\in[1,r]$. Then $E := SD$ is called an embedding matrix.

Algorithm VI.6

Assume the vertically expanded incidence matrix $V$ and an embedding matrix $E$ are given.

For each column of $V$, its first 1 is replaced by the first row of $E$, its second 1 is replaced by the second row of $E$, and so on, until the last 1 is replaced by the corresponding row of $E$; each 0 is replaced by the zero row. The resulting matrix is expressed as
$$A. \qquad (136)$$
The following result and the estimation (6) are fundamental for this design.
Theorem VI.7
Corollary VI.8
Using the BIBD-based design, the best solution is
$$(138)$$
which can be reached as long as
$$(139)$$
Example VI.9
Recall Example VI.4. It is easy to find an embedding matrix as
(140) |
. Then the sensing matrix can be constructed as
(158) |
.
It is easy to verify that
Hence,
VI-B A Modified BIBD-Based Sensing Matrix

This subsection aims at improving the BIBD-based sensing matrix.
First, we improve the vertical expansion.
Definition VI.10
Let $H$ be an incidence matrix with column degree $d$. A new vertical expansion, denoted by $V'$, is defined as follows:
$$(159)$$
Example VI.11
Recall the incidence matrix $H$ in Example VI.4. Then the new vertical expansion is obtained accordingly.
The following proposition comes from the construction immediately.
Proposition VI.12
Let $H$ be an incidence matrix with column degree $d$. Then the vertically expanded matrix $V'$ satisfies the following conditions.

- (i) (160)
- (ii) (163)
- (iii) The coherence is unchanged.

Compared with the expansion of Algorithm VI.3, it is obvious that $V'$ has the same column degree and the same coherence. Moreover, $V'$ has fewer rows. The smaller the number of rows is, the higher the compression rate obtained.
Next, we consider how to construct an embedding matrix to meet (139).

Lemma VI.13

Assume $E = SD$ is an embedding matrix, where $S$ is a sign matrix and $D$ is a nonsingular diagonal matrix with nonzero diagonal entries $d_i$, $i\in[1,r]$. Then
$$\mu(E) = \mu(S). \qquad (164)$$

Proof. Denote the columns of $E$ by $e_j = d_j s_j$. Scaling columns by nonzero factors does not change the normalized inner products between distinct columns. The conclusion follows.

It is now clear that to construct an embedding matrix, we need only construct a sign matrix satisfying (139).
Proposition VI.14
Proof.
- (i)
- (ii)
Definition VI.15
- (i) A sign matrix satisfying (165) is called an orthogonal column matrix (OCM), i.e., its columns are pairwise orthogonal.
- (ii) A sign matrix satisfying (166) is called an almost orthogonal column matrix (AOCM).
- (iii) A sign matrix is called a largest OCM (LOCM) if it is an OCM with the maximum number of columns.
- (iv) A sign matrix is called a largest AOCM (LAOCM) if it is an AOCM with the maximum number of columns.
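For small dimensions the largest OCM can be found by brute force; a sketch (columns are normalized to leading entry $+1$, which is harmless by the sign-change invariance discussed below):

```python
import numpy as np
from itertools import product

def largest_ocm(m):
    """Brute-force the largest set of pairwise-orthogonal sign columns."""
    vecs = [np.array(v) for v in product((1, -1), repeat=m) if v[0] == 1]
    best = []
    def extend(cols, start):
        nonlocal best
        if len(cols) > len(best):
            best = cols[:]
        for i in range(start, len(vecs)):
            if all(vecs[i] @ c == 0 for c in cols):
                extend(cols + [vecs[i]], i + 1)
    extend([], 0)
    return np.array(best).T

print(largest_ocm(4).shape[1])   # 4: a Hadamard matrix exists for m = 4
print(largest_ocm(6).shape[1])   # 2: m = 2q with q odd admits only 2 columns
```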
VI-C Constructing the LOCM and LAOCM

Assume the vertically expanded matrix is obtained as in (159). Then we consider how to construct an OCM (or AOCM)
$$\Theta = [\theta_1,\dots,\theta_r],$$
and we wish the number of columns $r$ to be as large as possible.
Lemma VI.16
Assume $\Theta$ is an OCM (or AOCM).

- (i) Replacing a column $\theta_i$ by $-\theta_i$, the resulting matrix is still an OCM (or AOCM).
- (ii) Replacing a row by its negation, the resulting matrix is still an OCM (or AOCM).
- (iii) Performing a row (or column) permutation, the resulting matrix is still an OCM (or AOCM).

Proof. A simple computation shows that all the aforementioned operations do not change the coherence $\mu$. The conclusion follows.

In this subsection, we write $\Theta\cong\Theta'$ if $\Theta'$ is obtained from $\Theta$ by the transformations defined in Lemma VI.16.
Assume $\{\theta_1,\dots,\theta_r\}$ is a set of $m$-dimensional pairwise orthogonal sign vectors. Using Lemma VI.16, we can, w.l.o.g., normalize the leading entries and rows.

Assume $m$ is even, say $m = 2q$, where $q$ is odd. Then, w.l.o.g., we can fix the first two columns in normalized form.

Now assume a further orthogonal column exists. Since it is orthogonal to the first column, and since it is orthogonal to the second column, we conclude that
$$(167)$$
(167) implies the following two facts:

- (i) If such a further column exists, then a certain entry count must be an even number.
- (ii) Hence, we can have exactly two more elements, which are determined explicitly.

Continuing this procedure, the following result about the LOCM can be obtained.

As for the LAOCM, since we are not able to find a sign vector whose coherence with the existing columns is 0, we therefore have the following result.
Theorem VI.17
Assume $m = 2q$, where $q$ is an odd number.

- (i) The LOCM $\Theta_m$ can be obtained as
$$(168)$$
where the columns are as constructed above. Moreover, $\Theta_m$ is unique up to column (or row) sign changes and column (or row) permutations.
- (ii)
$$(169)$$
We conclude the following.

Corollary VI.18

Consider even $m = 2q$ with $q$ odd. The largest ratio can be obtained as
$$(170)$$
Next, we consider the odd case, i.e., odd $m$. We construct the corresponding AOCM.

Motivated by the case of even $m$, we may choose the candidate columns accordingly.

Proposition VI.19

Consider odd $m$. The ratio can be obtained as
$$(171)$$
which is slightly larger than (170).

Proof. Consider the even-dimensional construction. Deleting the first row, the remainder becomes the required matrix.
Proposition VI.20
Proposition VI.21
The matrix generated from the even-dimensional LOCM is a maximum almost orthogonal matrix.

Proof. Assume $\Theta'$ is generated from $\Theta$ by deleting its first row. Then the pairwise column inner products change by at most one; hence
$$(174)$$
Note that the entries of sign vectors are $\pm 1$, so inner products of odd-dimensional sign vectors are odd integers.

We prove by contradiction that $\Theta'$ is the maximum almost orthogonal matrix. To this end, assume $\xi$ is a sign vector satisfying the almost-orthogonality requirement with all columns of $\Theta'$. According to Lemma VI.16, we can assume $\xi$ is normalized.

Then $\xi$ satisfies (174). First, equality throughout is impossible; otherwise, this would contradict the structure of the maximum $\Theta$. Hence, there is at least one column index such that
$$(175)$$
Since $\xi$ satisfies (174), the modified vector also satisfies (174); hence
$$(176)$$
According to (175), we have
$$(177)$$
This is a contradiction, because the quantities involved are integers.
Example VI.22
- (i) Consider the even case. Using the construction of Theorem VI.17, we obtain the LOCM.
- (ii) Consider the odd case. Using the row-deletion construction, we obtain the LAOCM.
Remark VI.23
- (i) For other odd $m$, we may add one arbitrary row to the existing construction to get an AOCM. But this one might not be the one with the maximum ratio; counterexamples can be computed directly.
- (ii) Unlike the even case, so far we do not know how to construct the LAOCM for general odd $m$.
- (iii) Our conjecture is that (171) is the best ratio for odd $m$.
VII Conclusion
The purpose of this paper is to reveal some mathematical foundations of STP-CS. First, the signal space is presented: the STP-based equivalence leads to a quotient space, which is the signal space. Second, a coordinate-free STP-CS is presented, and it is revealed that STP-CS has an advantage in reducing the storage of the sensing matrix. Third, a systematic construction of the BIBD-based sensing matrix is obtained.
References
- [1] A. Amini, F. Marvasti, Deterministic construction of binary, bipolar, and ternary compressed sensing matrices. IEEE Trans. Inform. Theory, Vol. 57, No. 4, 2360– 2370, 2011.
- [2] A. Amini, V. Montazerhodjat, F. Marvasti, Matrices with small coherence using p-ary block codes, IEEE Sign. Proc., 60, 1 , 172-181, 2012.
- [3] T.T. Cai, A. Zhang, Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inform. Theory, Vol. 60, No. 1, 122–132, 2014.
- [4] E.J. Candes, J. Romberg, T. Tao, Robust uncertainty principles: signal reconstruction from highly incomplete frequency information, IEEE Trans. Inf. Theory, Vol. 52, No. 2, 489-509, 2006.
- [5] E. Candes, J.K. Romberg, T. Tao, Stable signal recovery from incomplete and inaccurate measurements, Commun. Pure Appl. Math., Vol. 59, No. 8, 1207-1223, 2006.
- [6] E. Candes, J.K. Romberg, Sparsity and incoherence in compressive sampling, Inverse Prob., Vol. 23, No. 3, 969-985, 2007.
- [7] X. Chai, J. Fu, Z. Gan, et al., Exploiting semi-tensor product compressed sensing and hybrid cloud for secure medical image transmission, IEEE Internet Things J., Vol. 10, No. 8, 7380-7392, 2023.
- [8] D. Cheng, Matrix and Polynomial Approach to Dynamic Control Systems,
- [9] D. Cheng, H. Qi, Z. Li, Analysis and Control of Boolean Networks - A Semi-tensor Product Approach, Springer, London, 2011.
- [10] D. Cheng, H. Qi, Y. Zhao, An Introduction to Semi-tensor Product of Matrices and Its Applications, World Scientific, Singapore, 2012.
- [11] D. Cheng, On equivalence of Matrices, Asian J. Math., Vol. 23, No. 2, 257-348, 2019.
- [12] D. Cheng, From Dimension-Free Matrix Theory to Cross-Dimensional Dynamic Systems, Elsevier, London, 2019.
- [13] D. Cheng, Z. Ji, From dimension-free manifolds to dimension-varying control systems, Commun. Inform. Sys., Vol. 23, No. 1, 85-150, 2023.
- [14] D. Cheng, Z. Ji, Finite and Dimension-free dynamic systems, Science Press, Beijing, 2023 (in Chinese).
- [15] N. Cleju, Optimized projections for compressed sensing via rank-constrained nearest correlation matrix, Appl. Comput. Harmon. Anal., Vol. 36, No. 3, 495–507, 2014.
- [16] T.T. Do, L. Gan, N.H. Nguyen, et al., Fast and efficient compressive sensing using structurally random matrices, IEEE Trans. Signal Proc., Vol. 60, No. 1, 139–154, 2012.
- [17] D.L. Donoho, M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via $\ell^1$ minimization, Proc. Natl. Acad. Sci., Vol. 100, No. 5, 2197-2202, 2003.
- [18] D.L. Donoho, Compressed sensing, IEEE Trans. Inf. Theory, Vol. 52, No. 4, 1289-1306, 2006.
- [19] M.F. Duarte, Y.C. Eldar, Structured compressed sensing: From theory to applications, IEEE Trans. Sign. Proc., Vol. 59, No. 9, 4053-4085, 2011.
- [20] M.F. Duarte, R.G. Baraniuk, Kronecker compressive sensing. IEEE Trans. Image Proc., Vol. 21, No. 2, 494–504, 2012.
- [21] R.W. Hamming, Digital Filters, Prentice-Hall, New Jersey, 1977.
- [22] J. Hou, X. Liu, Semi-tensor product-based one-bit compressed sensing. Eurasip J. Adv. Sign. Proc., 2023(1): 112.
- [23] A. Joshi, A. Yerudkar, C. D. Vecchio, L. Glielmo, Storage Constrained Smart Meter Sensing using Semi-Tensor Product, Proc. 2019 IEEE Int. Conf. Sys. Man Cyb., Bari, Italy, 51-56, 2019.
- [24] J.L. Kelley, I. Namioka, Linear Topological Spaces, Springer-Verlag, New York, 1963.
- [25] J. Liang, H. Peng, L. Li, F. Tong, Construction of structured random measurement matrices in semi-tensor product compressed sensing based on combinatorial designs. Sensors, Vol. 22, No. 21, 8260, 2022.
- [26] K. Lee, Y. Wu, Y. Bresler, Near optimal compressed sensing of sparse rank-one matrices via sparse power factorization, Comput. Sci., Vol. 92, No. 4, 621-624, 2013.
- [27] H. Peng, Y. Mi, L. Li, H.E. Stanley, Y. Yang, P-tensor product in compressed sensing, IEEE Internet Things J., Vol. 6, No. 3, 3492-3510, 2019.
- [28] A.E. Taylor, D.C. Lay, Introduction to Functional Analysis, 2nd Ed., John Wiley & Sons, Inc., New York, 1980.
- [29] F. Tong, L. Li, J. Peng, Y. Yang, Flexible construction of compressed sensing matrices with low storage space and low coherence, Signal Processing, Vol. 182, 107951, 2021.
- [30] J. Wang, S. Ye, R. Yue, C. Chen, Lower storage space for compressive sensing: Semi-tensor product approach, EURASIP J. Image Video Proc., Vol. 2017, No. 51, DOI: 10.1186/s13640-017-0199-9, 2017.
- [31] W. Wen, Y. Hong, Y. Fang, Me. Li, Mi. Li, A visually secure image encryption scheme based on semi-tensor product compressed sensing, Signal Processing, Vol. 173, 107580, 2020.
- [32] D. Xie, H. Peng, L. Li, Y. Yang, Semi-tensor compressed sensing, Dig. Sign. Proc., Vol. 58, 85-92, 2016.
- [33] Y. Xu, W. Yin, S. Osher, Learning circulant sensing kernels, Inv. Prob. Imag., Vol. 8, No. 3, 901-923, 2014.
- [34] S. Xue, L. Zhang, Z. Xie, W. Yan, K. Zhang, Embedding cross-dimensional vector space into $\ell^2$, J. Syst. Sci. Compl., Vol. 36, 2309-2324, 2023.
- [35] G. Ye, M. Liu, W. Yap, B. Goi, Reversible image hiding algorithm based on compressive sensing and deep learning, Nonlinear Dynamics, Vol. 111, No. 14, 13535-13560, 2023.
- [36] H. Yuan, H. Song, X. Sun, et al., Compressive sensing measurement matrix construction based on improved size compatible array LDPC code. IET Image Proc., Vol. 9, No. 11, 993–1001, 2015.
- [37] K. Zhang, K.H. Johansson, Long-term behavior of cross-dimensional linear dynamical systems, Proc. CCC 2018, 158-163, 2018.
- [38] Q. Zhang, B. Wang, J. Feng, Solution and stability of continuous-time cross-dimensional linear systems, Fron. Inform. Tech., Vol. 22, No. 2, 210-221, 2021.