In general, the computational cost of evaluating ${\otimes}_{i}{A}_{i}$ using the order of operations described by a permutation $g(\cdot)$ is
$$\mathcal{C} = \sum_{i=1}^{n} \left( \prod_{k=1}^{i-1} r_{g(k)} \right) \mathcal{Q}_{g(i)} \left( \prod_{k=i+1}^{n} c_{g(k)} \right).$$
Therefore, the most efficient factorization of ${\otimes}_{i}{A}_{i}$ is described by the permutation $g(\cdot)$ that minimizes $\mathcal{C}$ .
It turns out that for the Kronecker product of more than two matrices, the ordering of operations that describes the most efficient factorization of ${\otimes}_{i}{A}_{i}$ also depends only on the ratios $({r}_{i}-{c}_{i})/{\mathcal{Q}}_{i}$ . To show that this is the case, suppose $u(\cdot)$ is the permutation that minimizes $\mathcal{C}$ ; then $u(\cdot)$ has the property that
$$\frac{r_{u(k)}-c_{u(k)}}{\mathcal{Q}_{u(k)}} \le \frac{r_{u(k+1)}-c_{u(k+1)}}{\mathcal{Q}_{u(k+1)}}$$
for $k=1,\cdots ,n-1$ . To support this, note that since $u(\cdot)$ is the permutation that minimizes $\mathcal{C}$ , we have in particular
$$\mathcal{C}(u) \le \mathcal{C}(v)$$
where $v(\cdot)$ is the permutation defined by
$$v(i) = \begin{cases} u(i) & i \ne k,\, k+1 \\ u(k+1) & i = k \\ u(k) & i = k+1. \end{cases}$$
Because only two terms in the sums $\mathcal{C}(u)$ and $\mathcal{C}(v)$ are different, we have from $\mathcal{C}(u) \le \mathcal{C}(v)$
$$\prod_{i=1}^{k-1} r_{u(i)} \left( \mathcal{Q}_{u(k)}\, c_{u(k+1)} + r_{u(k)}\, \mathcal{Q}_{u(k+1)} \right) \prod_{i=k+2}^{n} c_{u(i)} \;\le\; \prod_{i=1}^{k-1} r_{v(i)} \left( \mathcal{Q}_{v(k)}\, c_{v(k+1)} + r_{v(k)}\, \mathcal{Q}_{v(k+1)} \right) \prod_{i=k+2}^{n} c_{v(i)}$$
which, after canceling common terms from each side, gives
$$\mathcal{Q}_{u(k)}\, c_{u(k+1)} + r_{u(k)}\, \mathcal{Q}_{u(k+1)} \le \mathcal{Q}_{v(k)}\, c_{v(k+1)} + r_{v(k)}\, \mathcal{Q}_{v(k+1)}.$$
Since $v\left(k\right)=u(k+1)$ and $v(k+1)=u\left(k\right)$ this becomes
$$\mathcal{Q}_{u(k)}\, c_{u(k+1)} + r_{u(k)}\, \mathcal{Q}_{u(k+1)} \le \mathcal{Q}_{u(k+1)}\, c_{u(k)} + r_{u(k+1)}\, \mathcal{Q}_{u(k)},$$
which, upon dividing both sides by $\mathcal{Q}_{u(k)}\mathcal{Q}_{u(k+1)}$ , is equivalent to the ordering property above. Therefore, to find the best factorization of ${\otimes}_{i}{A}_{i}$ it is necessary only to compute the ratios $({r}_{i}-{c}_{i})/{\mathcal{Q}}_{i}$ and to sort them in non-decreasing order. The operation ${A}_{i}$ whose index appears first in this sorted list is applied to the data vector $x$ first, and so on.
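As a sanity check, the cost formula and this ordering rule can be verified by brute force. The following Python sketch (an illustration, not part of the paper's Matlab code; the sizes and costs are made up for the example) computes $\mathcal{C}$ for every permutation of three factors and confirms that sorting by $({r}_{i}-{c}_{i})/{\mathcal{Q}}_{i}$ attains the minimum.

```python
from itertools import permutations
from math import prod

def kron_cost(order, r, c, Q):
    # C = sum over stages i of:
    #   (product of rows of factors already applied)
    #   * Q of the factor applied at stage i
    #   * (product of columns of factors not yet applied)
    n = len(order)
    return sum(prod(r[order[j]] for j in range(i))
               * Q[order[i]]
               * prod(c[order[j]] for j in range(i + 1, n))
               for i in range(n))

# example sizes (rows, columns) and per-factor costs, chosen arbitrarily
r = [2, 3, 5]; c = [4, 3, 2]; Q = [8, 9, 10]

best = min(permutations(range(3)), key=lambda p: kron_cost(p, r, c, Q))
by_ratio = tuple(sorted(range(3), key=lambda i: (r[i] - c[i]) / Q[i]))
# the ratio-sorted order achieves the brute-force minimum cost
assert kron_cost(by_ratio, r, c, Q) == kron_cost(best, r, c, Q)
```

Here the ratios are $-0.25$, $0$, and $0.3$, so the factor with fewer rows than columns is applied first, in agreement with the discussion above.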
As above, if ${r}_{u(k+1)}>{c}_{u(k+1)}$ and ${r}_{u\left(k\right)}<{c}_{u\left(k\right)}$ then the ordering property above always holds. Therefore, in the most computationally efficient factorization of ${\otimes}_{i}{A}_{i}$ , all matrices with fewer rows than columns are always applied to the data vector $x$ before any matrices with more rows than columns. If some matrices are square, then their ordering does not affect the computational efficiency as long as they are applied after all matrices with fewer rows than columns and before all matrices with more rows than columns.
Once the permutation $g(\cdot)$ that minimizes $\mathcal{C}$ is determined by ordering the ratios $({r}_{i}-{c}_{i})/{\mathcal{Q}}_{i}$ , ${\otimes}_{i}{A}_{i}$ can be written as
$$\bigotimes_{i=1}^{n} A_i = \prod_{i=n}^{1} \left( I_{a(i)} \otimes A_{g(i)} \otimes I_{b(i)} \right)$$
where
$$a(i) = \prod_{k=1}^{g(i)-1} \gamma(k,i), \qquad b(i) = \prod_{k=g(i)+1}^{n} \gamma(k,i)$$
and where $\gamma(\cdot)$ is defined by
$$\gamma(k,i) = \begin{cases} r_k & \text{if } g^{-1}(k) < i \\ c_k & \text{otherwise}, \end{cases}$$
that is, $\gamma(k,i)$ equals ${r}_{k}$ when ${A}_{k}$ has already been applied to the data vector by stage $i$ , and equals ${c}_{k}$ otherwise.
A Matlab program that computes the permutation that describes the computationally most efficient factorization of ${\otimes}_{i=1}^{n}{A}_{i}$ is cgc() . It also gives the resulting computational cost. It requires the computational cost of each of the matrices ${A}_{i}$ and the number of rows and columns of each.
function [g,C] = cgc(Q,r,c,n)
% [g,C] = cgc(Q,r,c,n);
% Compute g and C
% g : permutation that minimizes C
% C : computational cost of Kronecker product of A(1),...,A(n)
% Q : computational cost of A(i)
% r : rows of A(i)
% c : columns of A(i)
% n : number of terms
f = find(Q==0);
Q(f) = eps * ones(size(Q(f)));
Q = Q(:);
r = r(:);
c = c(:);
[s,g] = sort((r-c)./Q);
C = 0;
for i = 1:n
   C = C + prod(r(g(1:i-1)))*Q(g(i))*prod(c(g(i+1:n)));
end
C = round(C);
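For readers working outside Matlab, here is one possible Python translation of cgc() (a sketch using NumPy, mirroring the Matlab above including the eps substitution for zero-cost factors; it returns a 0-based permutation):

```python
import numpy as np

def cgc(Q, r, c):
    """Return (g, C): the cost-minimizing permutation (0-based) and the
    computational cost of the Kronecker product of A(1),...,A(n)."""
    Q = np.asarray(Q, dtype=float).copy()
    r = np.asarray(r, dtype=float)
    c = np.asarray(c, dtype=float)
    Q[Q == 0] = np.finfo(float).eps   # avoid division by zero, as in the Matlab code
    g = np.argsort((r - c) / Q, kind="stable")
    n = len(Q)
    C = sum(np.prod(r[g[:i]]) * Q[g[i]] * np.prod(c[g[i + 1:]])
            for i in range(n))
    return g, round(C)
```

For example, with costs Q = [8, 9, 10], rows r = [2, 3, 5], and columns c = [4, 3, 2], cgc returns the permutation [0, 1, 2] with cost 144.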
The Matlab program kpi() implements the Kronecker product ${\otimes}_{i=1}^{n}{A}_{i}$ .
function y = kpi(d,g,r,c,n,x)
% y = kpi(d,g,r,c,n,x);
% Kronecker Product : A(d(1)) kron ... kron A(d(n))
% g : permutation of 1,...,n
% r : [r(1),...,r(n)]
% c : [c(1),...,c(n)]
% r(i) : rows of A(d(i))
% c(i) : columns of A(d(i))
% n : number of terms
for i = 1:n
   a = 1;
   for k = 1:(g(i)-1)
      if i > find(g==k)
         a = a * r(k);
      else
         a = a * c(k);
      end
   end
   b = 1;
   for k = (g(i)+1):n
      if i > find(g==k)
         b = b * r(k);
      else
         b = b * c(k);
      end
   end
   % y = (I(a) kron A(d(g(i))) kron I(b)) * x;
   y = IAI(d(g(i)),a,b,x);
   x = y;   % feed this stage's output into the next stage
end
where the function IAI called in the loop implements $({I}_{a}\otimes {A}_{d\left(g\right(i\left)\right)}\otimes {I}_{b})x$ . That is, the call IAI(i,a,b,x) implements $({I}_{a}\otimes A\left(i\right)\otimes {I}_{b})x$ .
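The same staged evaluation is easy to express in Python. In the following sketch (an illustrative translation, not the paper's code; iai is a helper implementing $({I}_{a}\otimes A\otimes {I}_{b})x$ via reshapes rather than loops), the factors are applied in the order given by a permutation g, and the result is checked against an explicit np.kron:

```python
import numpy as np

def iai(A, a, b, x):
    # y = (I(a) kron A kron I(b)) x, without forming the big matrix:
    # reshape x to (a, cols(A), b) and apply A along the middle axis
    r, c = A.shape
    X = x.reshape(a, c, b)
    return np.einsum('rc,acb->arb', A, X).reshape(a * r * b)

def kpi(mats, g, x):
    """Apply kron(mats[0], ..., mats[n-1]) to x, factor g[i] at stage i."""
    n = len(mats)
    pos = np.argsort(g)        # pos[k] = stage at which factor k is applied
    for i in range(n):
        a = b = 1
        for k in range(n):
            # rows if A(k) has already been applied, columns otherwise
            dim = mats[k].shape[0] if pos[k] < i else mats[k].shape[1]
            if k < g[i]:
                a *= dim
            elif k > g[i]:
                b *= dim
        x = iai(mats[g[i]], a, b, x)
    return x

# check against the explicit Kronecker product on a small example
rng = np.random.default_rng(1)
mats = [rng.standard_normal((2, 3)),
        rng.standard_normal((3, 2)),
        rng.standard_normal((2, 2))]
x = rng.standard_normal(3 * 2 * 2)
full = np.kron(np.kron(mats[0], mats[1]), mats[2]) @ x
assert np.allclose(kpi(mats, np.array([2, 0, 1]), x), full)
```

Whatever permutation g is used, the result is the same; only the operation count changes.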
The Matlab program IAI implements $y=({I}_{m}\otimes A\otimes {I}_{n})x$ :
function y = IAI(A,r,c,m,n,x)
% y = (I(m) kron A kron I(n)) x
% r : number of rows of A
% c : number of columns of A
v = 0:n:n*(r-1);
u = 0:n:n*(c-1);
for i = 0:m-1
   for j = 0:n-1
      y(v+i*r*n+j+1) = A * x(u+i*c*n+j+1);
   end
end
It simply uses two loops to implement the $mn$ copies of $A$ . Each copy of $A$ is applied to a different subset of the elements of $x$ .
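The double-loop structure carries over directly to other languages. As an illustration (Python with 0-based indexing, checked against forming the full matrix ${I}_{m}\otimes A\otimes {I}_{n}$ explicitly):

```python
import numpy as np

def iai_loops(A, m, n, x):
    # y = (I(m) kron A kron I(n)) x via m*n small matrix-vector products
    r, c = A.shape
    y = np.empty(m * r * n)
    v = n * np.arange(r)          # output stride pattern (rows of A)
    u = n * np.arange(c)          # input stride pattern (columns of A)
    for i in range(m):
        for j in range(n):
            # one copy of A applied to one strided subset of x
            y[v + i * r * n + j] = A @ x[u + i * c * n + j]
    return y

A = np.array([[1., 2., 0.], [0., 1., 3.]])   # a 2x3 example matrix
m, n = 2, 2
x = np.arange(m * A.shape[1] * n, dtype=float)
full = np.kron(np.kron(np.eye(m), A), np.eye(n)) @ x
assert np.allclose(iai_loops(A, m, n, x), full)
```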
The command $I\otimes A\otimes I$ , where $\otimes$ is the Kronecker (or tensor) product, can be interpreted as a vector/parallel command [link] , [link] . In these references, the implementation of these commands is discussed in detail, and the authors found the tensor product to be “an extremely useful tool for matching algorithms to computer architectures [link] .”
The expression $I\otimes A$ can easily be seen to represent a parallel command:
$$I \otimes A = \begin{bmatrix} A & & & \\ & A & & \\ & & \ddots & \\ & & & A \end{bmatrix}.$$
Each block along the diagonal acts on non-overlapping sections of the data vector, so that each section can be processed in parallel. Since each section represents exactly the same operation, this form is amenable to implementation on a computer with a parallel architecture. The expression $A\otimes I$ can be similarly seen to represent a vector command; see [link] .
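A small numerical illustration in Python (not from the paper) makes the two interpretations concrete: $(I\otimes A)x$ applies $A$ independently to contiguous blocks of $x$ (the parallel form), while $(A\otimes I)x$ forms linear combinations of whole strided sections of $x$ (the vector form).

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
m = 3                                 # number of copies / section length
x = np.arange(2 * m, dtype=float)

# (I kron A) x : A acts on each contiguous block of x independently
blocks = x.reshape(m, 2)              # one row per block
y_par = (blocks @ A.T).reshape(-1)    # each block could be done in parallel
assert np.allclose(y_par, np.kron(np.eye(m), A) @ x)

# (A kron I) x : each output section is a combination of whole sections
sections = x.reshape(2, m)            # one row per section
y_vec = (A @ sections).reshape(-1)    # section-wide (vector) operations
assert np.allclose(y_vec, np.kron(A, np.eye(m)) @ x)
```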
It should also be noted that by employing 'stride' permutations, the command $(I\otimes A\otimes I)x$ can be replaced by either $(I\otimes A)x$ or $(A\otimes I)x$ [link] , [link] . It is only necessary to permute the input and output. It is also the case that these stride permutations are natural loading and storing commands for some architectures.
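As a Python illustration of this point (with reshape/transpose playing the role of the stride permutations; the sizes are arbitrary), $(I\otimes A\otimes I)x$ can be computed as a pure $(A\otimes I)x$ command on permuted data:

```python
import numpy as np

A = np.array([[1., 2., 0.], [0., 1., 3.]])   # r x c = 2 x 3
m, n = 2, 2
r, c = A.shape
x = np.arange(m * c * n, dtype=float)

# direct evaluation of (I(m) kron A kron I(n)) x
y = np.kron(np.kron(np.eye(m), A), np.eye(n)) @ x

# same result via stride permutations: permute the input so the 'A' axis
# comes first, apply the pure (A kron I) form, then permute the output back
xp = x.reshape(m, c, n).transpose(1, 0, 2).reshape(-1)    # input permutation
yp = np.kron(A, np.eye(m * n)) @ xp                       # (A kron I) command
y2 = yp.reshape(r, m, n).transpose(1, 0, 2).reshape(-1)   # output permutation
assert np.allclose(y, y2)
```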
In the programs we have written in conjunction with this paper we implement the commands $y=(I\otimes A\otimes I)x$ with loops in a set of subroutines. The circular convolution and prime length FFT programs we present, however, explicitly use the form $I\otimes A\otimes I$ to make clear the structure of the algorithm, to make them more modular and simpler, and to make them amenable to implementation on special architectures. In fact, in [link] it is suggested that it might be practical to develop tensor product compilers. The FFT programs we have generated will be well suited for such compilers.