
I.J. Image, Graphics and Signal Processing, 2013, 11, 35-45
Published Online September 2013 in MECS (http://www.mecs-press.org/)
DOI: 10.5815/ijigsp.2013.11.04
A Survey on PARATREE and SUSVD
Decomposition Techniques and Their Use in
Array Signal Processing
Vineet Bhatt
Department of Mathematics, HNB Garhwal University, Campus Badshahi Thaul, Tehri Garhwal-249199, Uttarakhand,
India
vineet.
Sandeep Kumar*
Department of Mathematics, Govt. P.G. College, New Tehri, Tehri Garhwal, Pin: 249 001, Uttarakhand, India
Abstract — The present manuscript is intended to review a few applications of tensor decomposition models in array signal processing. Tensor decomposition models like HOSVD, SVD and PARAFAC are useful in signal processing. In this paper we shall use higher order tensor decomposition in signal processing. Also, a novel orthogonal non-iterative tensor decomposition technique (SUSVD), which is scalable to arbitrarily high dimensional tensors, has been applied in MIMO channel estimation. The SUSVD provides a tensor model with a hierarchical tree structure between the factors in different dimensions. We shall use a new model known as PARATREE, which is related to the PARAFAC tensor models. PARAFAC and PARATREE both describe a tensor as a sum of rank-1 tensors, but PARATREE has several advantages over PARAFAC when applied as a lower rank approximation technique: it is orthogonal, fast and reliable to compute, and the order of the decomposition can be adaptively adjusted. The low rank PARATREE approximation has been applied to measurement noise suppression for tensor-valued MIMO channel sounding measurements.
Index Terms — MIMO, SVD, PARATREE, SUSVD, tensor decompositions, signal processing
I. INTRODUCTION
A higher order tensor is any N-dimensional collection of data. It is generally known as a tensor or a multidimensional array. Tensor decompositions and factorizations were initiated by Hitchcock in 1927 [1], [2] and later developed by Cattell in 1944 [3] and by Tucker in 1966 [4]. Tensor factorizations or decompositions play a fundamental role in enhancing data and extracting latent components. At present, tensors are used in a wide variety of applications such as signal processing [5], data mining [6], neuroscience [7], and many more. In various signal processing applications, instrumental data contains information in more than two dimensions. Recently, researchers have contributed a large amount of research extending several well-established matrix operations to their tensor equivalents. Unfortunately, the extensions from their matrix counterparts are not trivial. For example, while the SVD has proven to be a powerful tool for analyzing matrices, or 2nd-order tensors, its generalization to higher order tensors is not straightforward. There are several approaches for doing this, and none of them is superior in all aspects. Basically there are three fundamental approaches to decompose a higher order tensor: the first is the Tucker model (multi-linear SVD or HOSVD) [4], the second is CANDECOMP/PARAFAC [8], and the third is non-negative tensor factorization. The CP and Tucker tensor decompositions can be considered higher-order generalizations of the matrix SVD and PCA, respectively. If a tensor is decomposed into a sum of rank-1 tensors, this type of decomposition is often called the "Canonical Decomposition" (CANDECOMP) or "Parallel Factors" (PARAFAC) model [8]. It has been applied in many signal processing applications, such as image recognition, acoustics, wireless channel estimation [9] and array signal processing [10], [11]. Recently, a Tucker-model based HOSVD [12] tensor decomposition subspace technique has also been formulated to improve multidimensional harmonic retrieval [13]. In this paper, we attempt to survey the contribution of higher order tensor decomposition to signal processing. In signal processing, data obtained from MIMO channel sounding measurements is a good example of tensor-valued data. It is well known that the PARATREE model is an enhanced version of the PARAFAC model. The PARATREE tensor model is also useful in signal processing; it is applied to suppress measurement noise in multidimensional MIMO radio channel measurements. This is performed by identifying the PARATREE components spanning the noise subspace and removing their contribution from the channel observation. Therefore, in subsection 3 of Section II, a novel PARATREE tensor model is introduced, accompanied by the SUSVD algorithm. As the rank-1
tensor decomposition technique is suitable for several tensor decomposition models, by additionally imposing this technique the PARATREE model can be efficiently applied to approximate higher-order tensors. One example of such an application involves interpreting the vector of eigenvalues of a large covariance matrix as a tensor, which is then used in a linear algebraic expression for finding the FIM. Approximating this tensor using the PARATREE decomposition allows a significant reduction in computational complexity over a straightforward matrix multiplication or any other exact solution. PARATREE also achieves a significant complexity reduction against HOSVD and PARAFAC. Moreover, the use of PARATREE in practice is far more convenient than PARAFAC, since PARATREE decomposes an N-way array into a sequence of rank-1 tensors and hence the SUSVD works even where PARAFAC suffers from convergence problems. Also, the order of the PARATREE decomposition can be easily controlled, and the corresponding approximation error is well defined.
This manuscript is organized as follows. In Section II, the useful tensor models and tensor operations are described which form the general framework for the present study. We also describe three important tensor models, viz. Tucker, PARAFAC and PARATREE, and briefly discuss the methods for computing tensor decompositions, namely ALS, HOSVD and SUSVD. Reduced rank approximation, deflating the full SUSVD and deflating the full HOSVD are also described herein. Further, in Section III, we delineate MIMO propagation channel modelling, the MIMO system channel and signal model, dense multipath components, the FIM as a key quantity of parameter estimation, and noise suppression of multidimensional radio channel measurements, and pursue some of the relevant applications. At the end, conclusions of the proposed study are outlined.

II. USEFUL TENSOR MODELS AND TENSOR OPERATIONS
Some tensor models and tensor operations are useful for signal processing: non-orthogonal models such as the PARAFAC model, which is commonly used for signal modelling and estimation purposes, and orthogonal models such as HOSVD, which better suit tensor approximation, data compression, and filtering applications. In this manuscript we are going to apply tensor models to MIMO channel modelling. As a preliminary, we first describe general tensor operations and models below.

A. General tensor operations

Here we introduce operations for N-mode tensors; the term N-mode or N-way tensor can be used to describe any N-dimensional data structure, which is known as a higher order tensor. Some basic operations for an N-dimensional tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_n \times \cdots \times I_N}$ are defined as below:
1. The n-mode matrix unfolding of a higher order tensor

The n-mode matrix unfolding $\mathcal{A}_n$ of a tensor $\mathcal{A}$ comprises:

1.) permutation of the tensor dimensions into the order $\{n, n+1, \ldots, N, 1, \ldots, n-1\}$, and
2.) reshaping the permuted tensor into a matrix $\mathbf{A}_n$ such that $\mathbf{A}_n \in \mathbb{C}^{I_n \times \prod_{i \neq n} I_i}$, i.e.,

$$\mathcal{A}_n = \mathrm{reshape}\big[\mathrm{permute}\{\mathcal{A}, (n, n+1, \ldots, N, 1, \ldots, n-1)\}, \{I_n, \textstyle\prod_{i \neq n} I_i\}\big] \qquad (1)$$

The order in which the columns of the matrix are chosen in the latter step is not important, as long as the order is known and remains constant throughout the calculations. A more general treatment of the unfolding (or matricization), including nesting of several modes in the matrix rows, is given in [14].
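As a concrete illustration, the sketch below implements (1) with numpy; the helper name `unfold` is ours (not from the paper), and the column ordering follows numpy's row-major reshape, which is one admissible ordering in the sense noted above.

```python
import numpy as np

def unfold(A, n):
    """n-mode matrix unfolding of a tensor A (mode n is 0-indexed here).

    Moves mode n to the front and reshapes to I_n x prod(I_i, i != n),
    matching (1) up to a fixed, consistent column ordering.
    """
    return np.moveaxis(A, n, 0).reshape(A.shape[n], -1)

A = np.arange(24).reshape(2, 3, 4)   # a 2 x 3 x 4 tensor
print(unfold(A, 1).shape)            # (3, 8)
```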
2. The n-mode multiplication

The n-mode multiplication $\mathcal{A} \times_n \mathbf{U} \in \mathbb{C}^{I_1 \times \cdots \times R_n \times \cdots \times I_N}$ of a tensor $\mathcal{A}$ and a matrix $\mathbf{U} \in \mathbb{C}^{R_n \times I_n}$ is defined as

$$\mathcal{A} \times_n \mathbf{U} = \mathrm{permute}\{\mathcal{B}, (n, n+1, \ldots, N, 1, \ldots, n-1)\} \qquad (2)$$

where

$$\mathcal{B} = \mathrm{reshape}\{\mathbf{U}\mathcal{A}_n, (R_n, I_{n+1}, \ldots, I_N, I_1, \ldots, I_{n-1})\} \qquad (3)$$
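The following sketch (hypothetical helper name, same numpy conventions as above) carries out (2)-(3): unfold, multiply, and fold the result back with mode n resized to R_n.

```python
import numpy as np

def mode_n_product(A, U, n):
    """n-mode product A x_n U of tensor A with matrix U (R_n x I_n)."""
    An = np.moveaxis(A, n, 0).reshape(A.shape[n], -1)    # unfold, as in (1)
    B = U @ An                                           # R_n x prod(I_i, i != n)
    new_shape = (U.shape[0],) + tuple(s for i, s in enumerate(A.shape) if i != n)
    return np.moveaxis(B.reshape(new_shape), 0, n)       # fold back, cf. (2)-(3)

A = np.random.randn(2, 3, 4)
U = np.random.randn(5, 3)                # acts on the mode of size 3
print(mode_n_product(A, U, 1).shape)     # (2, 5, 4)
```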
3. Rank-1 tensors
A tensor $\mathcal{A}$ is of rank 1 if it can be expressed as an outer product of N vectors [15]. For instance, the tensor $\mathcal{A}$ in terms of the outer product of N vectors can be expressed as

$$\mathcal{A} = \mathbf{a}^{(1)} \circ \mathbf{a}^{(2)} \circ \cdots \circ \mathbf{a}^{(N)}. \qquad (4)$$

The elements of $\mathcal{A}$ are defined as

$$\alpha_{i_1 i_2 \ldots i_N} = \prod_{n=1}^{N} \big(\mathbf{a}^{(n)}\big)_{i_n} = a^{(1)}_{i_1} a^{(2)}_{i_2} \cdots a^{(N)}_{i_N} \qquad (5)$$
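The element-wise form (5) is easy to check numerically; `rank1_tensor` below is an illustrative helper of ours, not code from the paper.

```python
import numpy as np
from functools import reduce

def rank1_tensor(vectors):
    """Outer product a(1) o a(2) o ... o a(N) of N vectors, as in (4)."""
    return reduce(np.multiply.outer, vectors)

a, b, c = np.random.randn(2), np.random.randn(3), np.random.randn(4)
T = rank1_tensor([a, b, c])                        # 2 x 3 x 4 rank-1 tensor
assert np.isclose(T[1, 2, 3], a[1] * b[2] * c[3])  # element-wise check of (5)
```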
B. Tensor decomposition models

As far as the MIMO study is concerned, there are two important tensor decomposition data models that often appear in multi-antenna (MIMO) communications: one is the Tucker model and the other is PARAFAC.
1. Tucker Model

Tucker models (HOSVD) decompose higher order tensors [4], [16]. The key idea in this model is to form a limited set of basis vectors for each mode, and to express the tensor as a linear combination of the outer products of the basis vectors of the different modes. A higher order tensor $\mathcal{A}$ can be decomposed by the Tucker model as

$$\mathcal{A} = \mathcal{C} \times_1 \mathbf{U}_1 \times_2 \mathbf{U}_2 \times_3 \cdots \times_N \mathbf{U}_N \qquad (6)$$

where $\mathcal{C} \in \mathbb{C}^{R_1 \times \cdots \times R_N}$ is called the core tensor and the matrices $\mathbf{U}_n \in \mathbb{C}^{I_n \times R_n}$ contain the basis vectors. The Tucker model for a third order tensor is obtained by putting N = 3 in (6) and is shown in Fig. 1.
Figure 1 Tucker Model for N = 3
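As a hedged illustration of (6), the following sketch reconstructs a tensor from a core tensor and basis matrices using numpy's `tensordot`; the shapes and names are invented for the example.

```python
import numpy as np

def tucker_reconstruct(C, Us):
    """Evaluate A = C x_1 U_1 x_2 U_2 ... x_N U_N as in (6)."""
    A = C
    for n, U in enumerate(Us):
        # n-mode product: contract mode n of A with the columns of U,
        # then move the new axis back into position n
        A = np.moveaxis(np.tensordot(U, A, axes=(1, n)), 0, n)
    return A

C = np.random.randn(2, 2, 2)                      # core tensor (R_n = 2)
Us = [np.random.randn(I, 2) for I in (4, 5, 6)]   # basis matrices U_n (I_n x R_n)
print(tucker_reconstruct(C, Us).shape)            # (4, 5, 6)
```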
2. PARAFAC Model
The PARAFAC model or decomposition of a tensor is a sum of rank-1 tensors. There are a number of ways to express the PARAFAC decomposition [16]. Let us consider an N-mode tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ and N matrices $\mathbf{A}_n \in \mathbb{C}^{I_n \times R}$, where R is the number of factors, equal to the rank of the tensor. Then the matrices $\mathbf{A}_n$, $n \in [1, 2, \ldots, N]$, with columns $\mathbf{a}^{(n)}_r$, $r \in [1, 2, \ldots, R]$, can be formed such that the tensor $\mathcal{A}$ is the sum of outer products

$$\mathcal{A} = \sum_{r=1}^{R} \mathbf{a}^{(1)}_r \circ \mathbf{a}^{(2)}_r \circ \cdots \circ \mathbf{a}^{(N)}_r, \qquad (7)$$

where each outer product of the vectors $\mathbf{a}^{(n)}_r$ is a rank-1 tensor. Equivalently, the PARAFAC model can be expressed element-wise as

$$\alpha_{i_1, i_2, \ldots, i_N} = \sum_{r=1}^{R} a^{(1)}_{r, i_1} a^{(2)}_{r, i_2} \cdots a^{(N)}_{r, i_N} \qquad (8)$$

where $i_n$ denotes the index in the $n$th mode. A vectorized definition is given by

$$\mathrm{vec}(\mathcal{A}) = (\mathbf{A}_N \odot \mathbf{A}_{N-1} \odot \cdots \odot \mathbf{A}_1)\,\mathbf{1}_R = \sum_{r=1}^{R} \mathbf{a}^{(N)}_r \otimes \mathbf{a}^{(N-1)}_r \otimes \cdots \otimes \mathbf{a}^{(1)}_r, \qquad (9)$$

where $\mathbf{1}_R$ is a column vector of R ones. For a third order tensor (N = 3), the PARAFAC model is visualized in Fig. 2, where the relations to (7)-(9) are given by $\mathbf{a}^{(1)}_r = \mathbf{a}_r$, $\mathbf{a}^{(2)}_r = \mathbf{b}_r$ and $\mathbf{a}^{(3)}_r = \mathbf{c}_r$.
Figure 2 PARAFAC decomposition (a sum of R rank-1 tensors)
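A minimal sketch of (7): accumulate R outer products from factor matrices (the names and shapes are ours, for illustration only).

```python
import numpy as np

def parafac_reconstruct(factors):
    """Sum of R rank-1 tensors from factor matrices A_n (I_n x R), as in (7)."""
    shape = tuple(F.shape[0] for F in factors)
    R = factors[0].shape[1]
    A = np.zeros(shape)
    for r in range(R):
        term = factors[0][:, r]
        for F in factors[1:]:
            term = np.multiply.outer(term, F[:, r])   # build a_r(1) o ... o a_r(N)
        A += term
    return A

factors = [np.random.randn(I, 3) for I in (4, 5, 6)]  # R = 3 factors
print(parafac_reconstruct(factors).shape)             # (4, 5, 6)
```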
3. PARATREE Model
In the present context a novel tensor model (PARATREE), which belongs to the PARAFAC family of models, is introduced. This new model has a distinct hierarchical tree structure. The PARATREE model can be efficiently applied to approximate higher order tensors [28]. One example of such an application involves interpreting the vector of eigenvalues of a large covariance matrix as a tensor, which is then used in a linear algebraic expression for finding the FIM. Approximating this tensor using the PARATREE decomposition allows a significant reduction in computational complexity over a straightforward matrix multiplication or any other exact solution. PARATREE also achieves a significant complexity reduction against HOSVD and PARAFAC. Moreover, the use of PARATREE in practice is far more convenient than PARAFAC, since the SUSVD does not suffer from convergence problems. Also, the order of the PARATREE decomposition can be easily controlled, and the corresponding approximation error is well defined [28]. In a second novel application, the PARATREE model is applied to suppress measurement noise in multidimensional MIMO radio channel measurements [28]. This is performed by identifying the PARATREE components spanning the noise subspace and removing their contribution from the channel observation. Summarizing the above mentioned properties, the PARATREE model offers the following benefits:

1. Reduced computational complexity in high dimensional inverse problems.
2. Measurement noise suppression or subspace filtering.
3. Compression of data, similar to low rank matrix approximation.
4. Fast and reliable computation and adaptive order (rank) selection.
5. Revealing of hidden structures and dependencies in the data.
We can also think of the PARATREE model as a novel hierarchical formulation for a PARAFAC-type model, having not only a different number of factors in different modes, as in block-PARAFAC [18] or PARALIND [17], but additionally allowing the number of factors in each mode to vary for each branch of the hierarchical tree structure. The PARATREE model for a higher order tensor can be expressed as a sum of outer products as follows:

$$\mathcal{A} = \sum_{r_1=1}^{R_1} \mathbf{a}^{(1)}_{r_1} \circ \Bigg( \sum_{r_2=1}^{R_2} \mathbf{a}^{(2)}_{r_1, r_2} \circ \cdots \circ \sum_{r_{N-1}=1}^{R_{N-1}} \Big( \mathbf{a}^{(N-1)}_{r_1, \ldots, r_{N-1}} \circ \mathbf{a}^{(N)}_{r_1, \ldots, r_{N-1}} \Big) \Bigg). \qquad (10)$$

The vector $\mathbf{a}^{(n)}_{r_1, \ldots, r_n}$ in (10) denotes the $r_n$th column of the $n$th mode matrix of basis vectors $\mathbf{A}^{(n)}_{r_1, \ldots, r_{n-1}}$. The subscript $r_1, r_2, \ldots, r_{n-1}$ indicates the dependency of these matrices on the indices of the previous factors of that branch in the tree. Also, the number of factors $R_n$ within each mode n can vary over different branches, i.e., $R_n$ in (10) is actually a shorthand notation for $R^{(n)}_{r_1, \ldots, r_{n-1}}$. The PARATREE model for a three-way tensor is visualized in Fig. 3.
Figure 3 Three-way PARATREE decomposition, a hierarchical sum of R rank-1 tensors
Observe from the PARAFAC model in Fig. 2 that the $r_a$th basis vector $\mathbf{a}_{r_a}$ in the first mode may be common to several factors in the remaining modes. To clarify the illustration in Fig. 3, we simplify (10) for N = 3 as follows (a code sketch of this form is given after this paragraph):

$$\mathcal{A} = \sum_{r_a=1}^{R_a} \mathbf{a}_{r_a} \circ \sum_{r_b=1}^{R_b} \big( \mathbf{b}_{r_a, r_b} \circ \mathbf{c}_{r_a, r_b} \big) \qquad (11)$$

where the relation to (10) is obtained by setting $\{\mathbf{a}_{r_a}, \mathbf{b}_{r_a,r_b}, \mathbf{c}_{r_a,r_b}\} \equiv \{\mathbf{a}^{(1)}_{r_1}, \mathbf{a}^{(2)}_{r_1,r_2}, \mathbf{a}^{(3)}_{r_1,r_2}\}$. It is remarkable that the number of factors $R_b = R^{(b)}_{r_a}$ for the second and third modes (vectors $\mathbf{b}_{r_a,r_b}$ and $\mathbf{c}_{r_a,r_b}$) may depend on the factor index $r_a$ of the first mode. In addition, the numbers of factors in the last two modes are equal. For N = 2, PARATREE reverts to the regular matrix SVD model. It is noteworthy that one can connect the various tensor decomposition models using well-established relations [16]. For instance, a PARAFAC model can be written as a Tucker model with a superdiagonal core tensor [16]. On the other hand, a Tucker model can be written as a PARAFAC model (with R equal to the number of elements in the core tensor) [16]. Hence, it is straightforward to write the PARATREE model in terms of PARAFAC or Tucker models as well. A general framework unifying the different decompositions has been recently introduced in [19].
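The defining feature, branch-dependent factor counts, can be made concrete with the following sketch of (11) (our own illustrative code; the nested-list layout of the factors is an assumption, not the paper's data structure).

```python
import numpy as np

def paratree3_reconstruct(a_list, bc_lists):
    """3-way PARATREE reconstruction as in (11).

    a_list[ra] is the first-mode vector a_{ra}; bc_lists[ra] is the list of
    (b, c) vector pairs of that branch, whose length R_b may vary per branch.
    """
    I1 = a_list[0].shape[0]
    I2, I3 = bc_lists[0][0][0].shape[0], bc_lists[0][0][1].shape[0]
    A = np.zeros((I1, I2, I3))
    for a, branch in zip(a_list, bc_lists):
        inner = sum(np.multiply.outer(b, c) for b, c in branch)  # sum over r_b
        A += np.multiply.outer(a, inner)                         # outer with a_{ra}
    return A

# two branches with different numbers of second-level factors (R_b = 1 and 2)
a_list = [np.random.randn(4) for _ in range(2)]
bc_lists = [[(np.random.randn(5), np.random.randn(6))],
            [(np.random.randn(5), np.random.randn(6)) for _ in range(2)]]
print(paratree3_reconstruct(a_list, bc_lists).shape)   # (4, 5, 6)
```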
C. Methods for computing the tensor decompositions

1. ALS Method

The ALS method is the most common algorithm for fitting a PARAFAC model [21]. The basic idea is to keep the number of factors R fixed and obtain an update of the $n$th mode basis vectors $\mathbf{A}_n$ as

$$\hat{\mathbf{A}}^{\mathrm{ALS}}_n = \mathbf{X}_n \cdot \Big[ \big( \mathbf{A}_N \odot \cdots \odot \mathbf{A}_{n+1} \odot \mathbf{A}_{n-1} \odot \cdots \odot \mathbf{A}_1 \big)^{\dagger} \Big]^T, \qquad (12)$$

while keeping the basis vectors of the other modes fixed. An iterative update of the matrices $\mathbf{A}^{(n)}$ is obtained by alternating over $n \in [1, 2, \ldots, N]$ until convergence is reached. The improvement of fit is monotonic. However, depending on the initial values of the matrices $\mathbf{A}^{(n)}$, a local optimum may be reached instead of the global one, or the convergence may be very slow. Therefore, the $\mathbf{A}^{(n)}$ are typically initialized by either using multiple random initial values, a so-called rational start (based on either generalized rank annihilation or DTLD), or a semi-rational start (based on SVD/EVD) [20].
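A sketch of the update (12); `khatri_rao` and `als_update` are our names, not from the paper. With the row-major unfolding used in the earlier sketches, the Khatri-Rao product runs over the remaining modes in increasing order, which matches (12) up to the (consistent) column ordering permitted after (1).

```python
import numpy as np

def khatri_rao(matrices):
    """Column-wise Kronecker (Khatri-Rao) product of matrices with R columns."""
    R = matrices[0].shape[1]
    out = matrices[0]
    for M in matrices[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, R)
    return out

def als_update(X, factors, n):
    """One ALS update of the n-th factor matrix (0-indexed), cf. (12)."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)       # mode-n unfolding
    others = [F for i, F in enumerate(factors) if i != n]   # all modes but n
    return Xn @ np.linalg.pinv(khatri_rao(others)).T        # X_n ((KR)^dagger)^T

# with the true factors fixed in the other modes, one update recovers the
# remaining factor of a noiseless rank-3 tensor exactly
A0, B0, C0 = (np.random.randn(d, 3) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
assert np.allclose(als_update(X, [np.zeros((4, 3)), B0, C0], 0), A0)
```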
2. Higher order singular value decomposition (HOSVD)

The HOSVD is obtained by computing the matrix SVD of each $n$-mode unfolding of the tensor $\mathcal{A}$ and selecting the left singular vectors as the orthonormal basis of each mode, respectively. For the complete HOSVD, the basis matrices $\mathbf{U}^{(n)} \in \mathbb{C}^{I_n \times R_n}$ are hence given by the first $R_n = \mathrm{rank}(\mathbf{X}_n)$ left-hand singular vectors of the SVD of $\mathbf{X}_n$, defined as

$$\mathbf{X}_n = \mathbf{U}^{(n)} \mathbf{\Sigma}^{(n)} \mathbf{V}^{(n)H}. \qquad (13)$$

Having computed the matrices $\mathbf{U}^{(n)}$, $n \in [1, 2, \ldots, N]$, the core tensor $\mathcal{C} \in \mathbb{C}^{R_1 \times R_2 \times \cdots \times R_N}$ is given in closed form as

$$\mathcal{C} = \mathcal{A} \times_1 \mathbf{U}^{(1)H} \times_2 \mathbf{U}^{(2)H} \times_3 \cdots \times_N \mathbf{U}^{(N)H} \qquad (14)$$
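Equations (13)-(14) translate to a few lines of numpy; this is a sketch (rank truncation is left out, `full_matrices=False` is used instead), and the sanity check reverses (14) back into (6).

```python
import numpy as np

def hosvd(A):
    """HOSVD sketch: U^(n) from the SVD of each unfolding (13), core via (14)."""
    Us = []
    for n in range(A.ndim):
        An = np.moveaxis(A, n, 0).reshape(A.shape[n], -1)   # n-mode unfolding
        U, _, _ = np.linalg.svd(An, full_matrices=False)
        Us.append(U)
    C = A
    for n, U in enumerate(Us):
        # C = C x_n U^(n)H, written as a tensordot contraction over mode n
        C = np.moveaxis(np.tensordot(U.conj().T, C, axes=(1, n)), 0, n)
    return C, Us

A = np.random.randn(3, 4, 5)
C, Us = hosvd(A)
B = C
for n, U in enumerate(Us):                    # reconstruct A as in (6)
    B = np.moveaxis(np.tensordot(U, B, axes=(1, n)), 0, n)
assert np.allclose(A, B)
```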
3. Sequential Unfolding Singular Value Decomposition (SUSVD)

Let us describe the SUSVD algorithm for estimating the PARATREE tensor model [28]. It is a computational method for obtaining an orthogonal PARATREE model, based on the idea of sequentially applying the matrix SVD [20] on an unfolded tensor formed from each of the right singular vectors of the SVD in the previous mode. The SUSVD can be applied to any N-dimensional real or complex tensor, and for N = 2 it is equal to the matrix SVD. The SUSVD decomposition of an N-way tensor $\mathcal{A} \in \mathbb{C}^{I_1 \times I_2 \times \cdots \times I_N}$ ($I_1 \geq I_2 \geq \cdots \geq I_N$) is described by the following algorithm:

Algorithm 1 [28]: $[\{S\}, \{U\}] = \mathrm{SUSVD}\{\mathcal{A}\}$

Set $\mathcal{T}^{(1)}_0 = \mathcal{A}$ and $R_0 = 1$.
For each $n = \{1, 2, \ldots, N-1\}$:
  For each $r_{n-1} = \{1, 2, \ldots, R_{n-1}\}$:
    1. Unfold the tensor: $\mathbf{T}^{(n)}_{r_1, r_2, \ldots, r_{n-1}} = \big[\mathcal{T}^{(n)}_{r_1, r_2, \ldots, r_{n-1}}\big]_{(1)}$,
    2. Compute the SVD: $\mathbf{T}^{(n)}_{r_1, r_2, \ldots, r_{n-1}} = \mathbf{U}^{(n)} \mathbf{\Sigma}^{(n)} \mathbf{V}^{(n)H}$,
    3. For each $r_n \in \{1, 2, \ldots, R_n\}$, with $R_n = \mathrm{rank}\big(\mathbf{T}^{(n)}_{r_1, r_2, \ldots, r_{n-1}}\big)$:
       (a) store $\sigma^{(n)}_{r_1, r_2, \ldots, r_n} = \big(\mathbf{\Sigma}^{(n)}\big)_{r_n r_n}$ in $\{S\}$ and $\mathbf{u}^{(n)}_{r_1, r_2, \ldots, r_n} = \big(\mathbf{U}^{(n)}\big)_{r_n}$ in $\{U\}$;
       (b) then, if $n < N-1$, reshape $\big(\mathbf{V}^{(n)*}\big)_{r_n}$ into a tensor $\mathcal{T}^{(n+1)}_{r_1, r_2, \ldots, r_n} \in \mathbb{C}^{I_{n+1} \times \cdots \times I_N}$; otherwise, store the vector $\mathbf{u}^{(N)}_{r_1, r_2, \ldots, r_{N-1}} = \big(\mathbf{V}^{(N-1)*}\big)_{r_{N-1}}$.
The SUSVD method and its reconstruction are visualized for a (2×2×2) third order tensor in Fig. 4.

Figure 4 The SUSVD decomposition for an arbitrary (2×2×2)-tensor
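For N = 3, Algorithm 1 can be sketched directly in numpy as below; the function and variable names are ours, and numpy's `svd` returning $V^H$ conveniently supplies the conjugated right singular vectors required in step 3(b).

```python
import numpy as np

def susvd_3way(A):
    """Sketch of Algorithm 1 for a 3-way tensor.

    Returns the first-mode singular values/vectors and, per branch r1, the
    second- and third-mode quantities. The rows of numpy's V^H are already
    the conjugated right singular vectors (V*)_{r} used in step 3(b).
    """
    I1, I2, I3 = A.shape
    T0 = A.reshape(I1, I2 * I3)                       # 1-mode unfolding of A
    U1, S1, V1h = np.linalg.svd(T0, full_matrices=False)
    sig1, u1, sig2, u2, u3 = [], [], [], [], []
    for r1 in range(np.linalg.matrix_rank(T0)):
        sig1.append(S1[r1])
        u1.append(U1[:, r1])
        T1 = V1h[r1, :].reshape(I2, I3)               # branch tensor T^(2)_{r1}
        U2, S2, V2h = np.linalg.svd(T1, full_matrices=False)
        R2 = np.linalg.matrix_rank(T1)
        sig2.append(list(S2[:R2]))
        u2.append([U2[:, r2] for r2 in range(R2)])
        u3.append([V2h[r2, :] for r2 in range(R2)])   # last-mode vectors u^(3)
    return sig1, u1, sig2, u2, u3

# the full SUSVD of a (2 x 2 x 2) tensor reconstructs it exactly, cf. (17)
A = np.random.randn(2, 2, 2)
s1, u1, s2, u2, u3 = susvd_3way(A)
B = sum(s1[i] * np.multiply.outer(u1[i],
        sum(s2[i][j] * np.multiply.outer(u2[i][j], u3[i][j])
            for j in range(len(s2[i]))))
        for i in range(len(s1)))
assert np.allclose(A, B)
```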
The core idea of the algorithm is to apply the matrix SVD on the 1-mode matrix unfolding of the tensor (see Definition 1) to form the basis vectors of the first mode. Then each of the conjugated right-hand singular vectors $\mathbf{v}^{(1)*}_{r_1}$ is reshaped into a tensor, and the matrix SVD is applied on the 1-mode unfolding of these tensors. This is repeated to construct the PARATREE model, until only the elements of the last mode are contained in the right-hand singular vectors. Note that for a full SUSVD (all possible factors included), as described in Algorithm 1, the number of basis vectors within each mode is the same for all branches and is given by

$$R_n = \min\Big\{ I_n, \prod_{j=n+1}^{N} I_j \Big\}. \qquad (15)$$

Hence, the total number of orthogonal components in the decomposition is given by

$$R = \prod_{n=1}^{N-1} R_n. \qquad (16)$$

For the (2×2×2) example, $R_1 = \min\{2, 4\} = 2$ and $R_2 = \min\{2, 2\} = 2$, so $R = 4$.
The (2×2×2) third order tensor in Fig. 4 can be reconstructed with the PARATREE model as

$$\mathcal{A} = \sum_{r_1=1}^{R_1} \sigma^{(1)}_{r_1} \mathbf{u}^{(1)}_{r_1} \circ \sum_{r_2=1}^{R_2} \sigma^{(2)}_{r_1, r_2} \mathbf{u}^{(2)}_{r_1, r_2} \circ \mathbf{u}^{(3)}_{r_1, r_2} \qquad (17)$$

To avoid confusion in Fig. 4, which shows the SUSVD decomposition of a (2×2×2) tensor: different colors correspond to different dimensions of the tensor, a square σ denotes a singular value, dashed blocks are elements of the tensors, and solid lines are used to separate the column vectors. The tensor is first unfolded to a matrix $\mathbf{T}^{(1)}_0$. After applying the SVD on this matrix, each of the right-hand singular vectors is reshaped and another SVD is applied on them. The procedure is repeated for each "branch" and "sub-branch", until no additional dimensions remain in the right-hand basis vectors, i.e., the matrix $\mathbf{V}^{(N-1)}$ has only $I_N$ rows. The full ($R_1 = 2$, $R_2 = 2$) reconstruction is given in Fig. 5, where the PARATREE tensor is reconstructed as a sum of outer products of the basis vectors $\mathbf{u}^{(1)}_{r_1}$, $\mathbf{u}^{(2)}_{r_1,r_2}$ and $\mathbf{u}^{(3)}_{r_1,r_2}$, weighted by $\sigma^{(1)}_{r_1} \sigma^{(2)}_{r_1,r_2}$. The tree structure allows common basis vectors in the previous dimensions.
Figure 5 A PARATREE tensor is reconstructed as a sum of outer products of weighted unitary basis vectors $\mathbf{u}^{(1)}_{r_1}$, $\mathbf{u}^{(2)}_{r_1,r_2}$ and $\mathbf{u}^{(3)}_{r_1,r_2}$
The relation of the values in (17) to those in the 3D PARATREE formulation (11), or the general form (10), is given by

$$\mathbf{a}_{r_a} = \mathbf{a}^{(1)}_{r_1} = \sigma^{(1)}_{r_1} \mathbf{u}^{(1)}_{r_1}, \qquad \mathbf{b}_{r_a, r_b} = \mathbf{a}^{(2)}_{r_1, r_2} = \sigma^{(2)}_{r_1, r_2} \mathbf{u}^{(2)}_{r_1, r_2}, \qquad \mathbf{c}_{r_a, r_b} = \mathbf{a}^{(3)}_{r_1, r_2} = \mathbf{u}^{(3)}_{r_1, r_2}.$$
Note that the basis vectors of the SUSVD for the first mode are exactly the same as those of the HOSVD. However, the number of basis vectors of the latter modes is limited to $R_n = \mathrm{rank}(\mathbf{X}_n)$ for the HOSVD, whereas in the SUSVD the basis is formed independently for each branch. The result is that the total number R of individual rank-1 contributions (e.g., as if the decomposition were written as a PARAFAC model (7)) is typically much smaller for the SUSVD than for the HOSVD. Another difference between the two decompositions is the fact that the HOSVD is unique, whereas for the SUSVD the solution depends on the order of the modes.
4. Reduced Rank Approximations

The individual rank-1 contributions of the HOSVD and SUSVD are orthogonal to each other. The practical implication of this property is that for a reduced rank approximation $\mathcal{A}_A$ of a tensor $\mathcal{A}$, the squared magnitudes of the individual terms contribute directly to the squared magnitude of the approximated tensor. Hence, the squared Frobenius norm of the tensor approximation is given for the SUSVD by

$$\|\mathcal{A}_{A,\mathrm{SU}}\|_F^2 = \sum_{r_A} \big\| \mathbf{a}^{(1)}_{r_A} \circ \mathbf{a}^{(2)}_{r_A} \circ \cdots \circ \mathbf{a}^{(N)}_{r_A} \big\|_F^2 = \sum_{r_A} \big( \sigma^{(1)}_{r_A} \cdots \sigma^{(N-1)}_{r_A} \big)^2, \qquad (18)$$

where $\sigma^{(n)}_{r_A}$ denotes the $n$th mode singular value and $r_A$ denotes an index of a rank-1 component included in the reduced rank decomposition. Equivalently, the squared Frobenius norm of the HOSVD approximation is given by

$$\|\mathcal{A}_{A,\mathrm{HO}}\|_F^2 = \sum_{r_A} \big| (\mathcal{C})_{r_A} \big|^2, \qquad (19)$$

where $(\mathcal{C})_{r_A}$ denotes an element of the HOSVD core tensor, and the index $r_A$ runs over the indices contributing to the approximation.
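Since for N = 2 the SUSVD reduces to the matrix SVD, property (18) can be verified in isolation with a plain SVD (a self-contained check of ours, not the paper's code); for N = 3 the weights are the branch products $\sigma^{(1)}\sigma^{(2)}$, as in the `susvd_3way` sketch above.

```python
import numpy as np

# rank-k truncation of a matrix: its squared Frobenius norm is the sum of
# the squared singular values retained, the N = 2 special case of (18)
A = np.random.randn(6, 4)
U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = 2                                                # reduced rank
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]
assert np.isclose(np.linalg.norm(A_k) ** 2, np.sum(s[:k] ** 2))
# the full decomposition recovers the norm of A itself
assert np.isclose(np.linalg.norm(A) ** 2, np.sum(s ** 2))
```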
5. Deflating the full SUSVD
