void sfft_fcf16_hc(sfft_plan_t *p, const void *vin, void *vout) {
  const SFFT_D *in = vin;
  SFFT_D *out = vout;
  SFFT_R r0_1, r2_3, r4_5, r6_7, r8_9, r10_11, r12_13, r14_15;
  /* Leaf primitives: load strided input and compute small sub-transforms in registers */
  L_4_4(in+0, in+16, in+8, in+24, &r0_1, &r2_3, &r8_9, &r10_11);
  L_2_4(in+4, in+20, in+28, in+12, &r4_5, &r6_7, &r14_15, &r12_13);
  /* Body sub-transforms: K_N applies twiddle factors
     (0.7071 = cos(pi/4), 0.9239 = cos(pi/8), 0.3827 = sin(pi/8)) */
  K_N(VLIT4(0.7071, 0.7071, 1, 1),
      VLIT4(0.7071, -0.7071, 0, -0),
      &r0_1, &r2_3, &r4_5, &r6_7);
  K_N(VLIT4(0.9239, 0.9239, 1, 1),
      VLIT4(0.3827, -0.3827, 0, -0),
      &r0_1, &r4_5, &r8_9, &r12_13);
  S_4(r0_1, r4_5, r8_9, r12_13, out+0, out+8, out+16, out+24);
  K_N(VLIT4(0.3827, 0.3827, 0.7071, 0.7071),
      VLIT4(0.9239, -0.9239, 0.7071, -0.7071),
      &r2_3, &r6_7, &r10_11, &r14_15);
  /* Store the results of the final butterflies to the output array */
  S_4(r2_3, r6_7, r10_11, r14_15, out+4, out+12, out+20, out+28);
}
Hard-coded VL-2 size-16 FFT

Scalability

So far, hard-coded transforms of vector lengths 1 and 2 have been presented. On Intel machines, VL-1 can be used to compute double-precision transforms with SSE2, while VL-2 can be used to compute double-precision transforms with AVX and single-precision transforms with SSE. The method of vectorization presented in this chapter scales above VL-2, and has been successfully used to compute VL-4 single-precision transforms with AVX.
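To make the mapping between vector length and machine registers concrete, the sketch below shows one plausible set of definitions for the vector types and literal macro used in the listing above, for the VL-2 single-precision (SSE) case. The names SFFT_D, SFFT_R and VLIT4 are taken from the listing, but the actual definitions and element ordering used by the library may differ.

/* Hedged sketch: possible VL-2 single-precision definitions (SSE).
   One 128-bit register holds two complex singles, i.e. four floats. */
#include <xmmintrin.h>

typedef float  SFFT_D;   /* scalar data element                     */
typedef __m128 SFFT_R;   /* one register = 2 complex singles (VL-2) */

/* Vector literal of four floats, used for twiddle-factor constants.
   _mm_setr_ps places 'a' in the lowest element; the real library's
   ordering convention may be different.                            */
#define VLIT4(a, b, c, d) _mm_setr_ps((a), (b), (c), (d))

Under this view, VL-1 corresponds to one complex double per SSE2 register, and VL-4 to four complex singles per 256-bit AVX register.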

The leaf primitives were coded by hand in all cases; VL-1 required L_2 and L_4, while VL-2 required L_2_2, L_2_4, L_4_2 and L_4_4. In the case of VL-4, not all of the possible leaf primitives were required: only 11 of the 16 were needed for the transforms that were generated.
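As an illustration of what a leaf primitive computes, the following is a minimal scalar (VL-1) sketch in the spirit of L_4: it gathers four complex elements from (possibly strided) input addresses and computes a size-4 sub-transform in registers. The function name, calling convention and use of C99 complex types are illustrative assumptions, not the library's actual code.

#include <complex.h>

typedef double complex cplx;

/* Hypothetical scalar size-4 leaf: computes a size-4 DFT in registers. */
static inline void leaf_4(const cplx *i0, const cplx *i1,
                          const cplx *i2, const cplx *i3,
                          cplx *r0, cplx *r1, cplx *r2, cplx *r3)
{
    cplx t0 = *i0 + *i2, t1 = *i0 - *i2;
    cplx t2 = *i1 + *i3;
    cplx t3 = (*i1 - *i3) * (-I);   /* multiply by -i                */
    *r0 = t0 + t2;                  /* X[0]                          */
    *r1 = t1 + t3;                  /* X[1]                          */
    *r2 = t0 - t2;                  /* X[2]                          */
    *r3 = t1 - t3;                  /* X[3]                          */
}

A vectorized leaf such as L_4_4 performs the equivalent computation on VL interleaved transforms at once, one per SIMD lane.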

It is an easy exercise to code the leaf primitives for VL-4 by hand, but for future machines that might feature vector lengths larger than 4, the leaf primitives could be automatically generated (in fact, "Other vector lengths" is concerned with automatic generation of leaf sub-transforms at another level of scale).

Constraints

For a transform of size N and a leaf node size of S (S = 4 in the examples in this chapter), the following constraint must be satisfied:

N / VL ≥ S

If this constraint is not satisfied, the size of either VL or S must be reduced. In practice, VL and S are small relative to the size of most transforms, and thus these corner cases typically only arise for very small transforms. One such example is a size-2 transform when VL = 2 and S = 4; in this case the transform is too small to be computed with SIMD operations and should be computed with scalar arithmetic instead.
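The sketch below shows the planner-side check implied by this constraint: when N / VL falls below S, the hard-coded SIMD codelet cannot be used and a scalar path is taken instead. The function name is hypothetical.

#include <stddef.h>

enum { LEAF_S = 4 };   /* leaf node size S used in this chapter */

/* Returns non-zero when a hard-coded SIMD codelet is applicable.
   For example, N = 16, VL = 2 gives 8 >= 4 (OK), while N = 2,
   VL = 2 gives 1 < 4, so scalar arithmetic must be used instead. */
int can_use_simd_codelet(size_t N, size_t VL)
{
    return N / VL >= LEAF_S;
}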

Performance

[Figure: Performance of hard-coded FFTs on a Macbook Air 4,2. Series shown: single-precision SSE (VL-2), double-precision SSE (VL-1), single-precision AVX (VL-4), double-precision AVX (VL-2).]

The figure above shows the results of a benchmark for transforms of size 4 through 1024 running on a Macbook Air 4,2. The speed of FFTW 3.3 running in estimate and patient modes is also shown for comparison.

FFTW running in patient mode evaluates a huge configuration space of parameters, while the hard-coded FFTs require no calibration.
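For readers reproducing the comparison, the sketch below shows how FFTW 3.3's estimate and patient planning modes are selected through the standard single-precision API; the timing harness and the sweep over sizes 4 through 1024 are omitted.

/* Minimal sketch of planning a single-precision FFTW transform in
   estimate or patient mode. Link with -lfftw3f.                     */
#include <fftw3.h>

fftwf_plan make_plan(int n, fftwf_complex *in, fftwf_complex *out, int patient)
{
    /* FFTW_PATIENT searches a large space of algorithms and may take a
       long time to plan; FFTW_ESTIMATE chooses a plan heuristically.  */
    unsigned flags = patient ? FFTW_PATIENT : FFTW_ESTIMATE;
    return fftwf_plan_dft_1d(n, in, out, FFTW_FORWARD, flags);
}

/* Usage: allocate buffers with fftwf_malloc, call fftwf_execute(plan)
   repeatedly while timing, then fftwf_destroy_plan(plan).            */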

A variety of vector lengths are represented, and the hard-coded FFTs have good performance while N / VL ≤ 128. After this point, performance drops off and other techniques should be used. The following sections use the hard-coded FFT as a foundation for scaling to larger sizes of transforms.

Source: OpenStax, Computing the Fast Fourier Transform on SIMD Microprocessors. OpenStax CNX. Jul 15, 2012. Download for free at http://cnx.org/content/col11438/1.2