
Owing to the importance of efficiently computing FFTs in signal processing and other areas, there have been many implementations for microprocessors; FFTW's benchmark software, for example, includes a collection of 25 different FFT implementations. Of these many implementations, however, only a few have competed with the state of the art over the last fifteen years. Since its first release in 1997, FFTW has risen to become one of the best-known fast Fourier transform libraries. The other libraries reviewed in this chapter are SPIRAL, UHFFT, djbfft, Apple vDSP, MatrixFFT, and Intel IPP.

The “Fastest Fourier Transform in the West” (FFTW)

FFTW [link], [link], [link] is an implementation of the DFT that attempts to adapt automatically to the hardware in order to maximize performance. Its development in 1997 was predicated on the idea that it had become too complicated to optimize the performance of the fast Fourier transform for modern microprocessors by hand.

The latest release of FFTW, version 3.3, generates a library of over 150 “codelets” at compile time. The codelets are fragments of machine-independent straight-line code derived from DFT algorithms, including the Cooley-Tukey [link] algorithm and its derivatives: the split-radix [link], [link], conjugate-pair [link], [link] and mixed-radix algorithms. The Rader [link] and Bluestein [link], [link], [link] algorithms are used for sizes that are prime, and the prime-factor algorithm [link], [link] for sizes that factor into co-primes. At runtime, a plan for a specific problem, e.g., a 1024-point 1D forward double-precision out-of-place DFT, is generated by searching the huge space of possible codelet configurations for the best solution.
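The example below is a minimal sketch of how a caller requests such a plan at runtime, written against FFTW 3's documented C interface (fftw_plan_dft_1d, fftw_execute and friends); it is illustrative rather than taken from the text. The FFTW_MEASURE flag asks the planner to time candidate codelet configurations on the actual hardware before committing to one.

/* Sketch: plan and execute a 1024-point 1D forward double-precision
 * out-of-place DFT with FFTW 3.  Compile with e.g. gcc plan.c -lfftw3 -lm */
#include <fftw3.h>

int main(void)
{
    const int n = 1024;

    /* fftw_malloc returns memory suitably aligned for FFTW's SIMD codelets. */
    fftw_complex *in  = fftw_malloc(sizeof(fftw_complex) * n);
    fftw_complex *out = fftw_malloc(sizeof(fftw_complex) * n);

    /* The planner searches its space of codelet configurations here. */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_MEASURE);

    /* ... fill `in` with data (after planning, since FFTW_MEASURE
     *     overwrites the arrays while timing candidate plans) ...   */

    fftw_execute(p);   /* run the chosen plan */

    fftw_destroy_plan(p);
    fftw_free(in);
    fftw_free(out);
    return 0;
}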

The codelet generator operates in four phases: creation, simplification, scheduling, and unparsing (code generation). During creation, the codelet generator produces a representation of the computation in the form of a DAG. The DAG is expressed in terms of complex numbers [link], and can be viewed as a linear network [link]. In the simplification stage, algebraic transformations and common-subexpression-elimination rewriting rules [link] are applied to each node of the DAG, which is then topologically sorted to produce a schedule. In a 2008 paper [link], Johnson and Frigo contend that “the compiler needs help with such long blocks of code”, and cite an earlier paper from 1999 [link] to support the hypothesis that compilers cannot efficiently allocate registers and schedule code for hard-coded blocks of around size 64; that paper compares an earlier version of FFTW, compiled with an older compiler (Sun WorkShop Compilers 4.2, 30 Oct 1996), against an FFT from Sun's Performance Library. There is no mention of re-testing this hypothesis with more advanced compilers.
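To make the notion of straight-line codelet code concrete, the fragment below is a hand-written sketch of a size-4 DFT in that style; it is not actual FFTW-generated code. The temporaries t0..t3 are the kind of shared subexpressions that the simplification stage keeps only one copy of.

/* Illustrative only: a hand-written, straight-line size-4 DFT in the
 * style of a codelet (real codelets are machine-generated and larger).
 * Inputs and outputs are split into real (..r) and imaginary (..i) parts. */
static void dft4(const double *xr, const double *xi, double *Xr, double *Xi)
{
    /* Shared subexpressions that common subexpression elimination
     * would compute only once. */
    double t0r = xr[0] + xr[2], t0i = xi[0] + xi[2];
    double t1r = xr[0] - xr[2], t1i = xi[0] - xi[2];
    double t2r = xr[1] + xr[3], t2i = xi[1] + xi[3];
    double t3r = xr[1] - xr[3], t3i = xi[1] - xi[3];

    Xr[0] = t0r + t2r;  Xi[0] = t0i + t2i;  /* X0 = t0 + t2   */
    Xr[2] = t0r - t2r;  Xi[2] = t0i - t2i;  /* X2 = t0 - t2   */
    Xr[1] = t1r + t3i;  Xi[1] = t1i - t3r;  /* X1 = t1 - j*t3 */
    Xr[3] = t1r - t3i;  Xi[3] = t1i + t3r;  /* X3 = t1 + j*t3 */
}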

FFTW has several modes available for searching the configuration space of codelets. In “patient” mode, FFTW uses dynamic programming to evaluate the runtime of almost all combinations of possible plans. Because the runtime of many sub-problems is repeatedly evaluated while searching the configuration space, the results of locally optimized sub-problems are cached, reducing the runtime of the planner while producing results very close to those of an exhaustive search.
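As a rough sketch of how this trade-off is exposed to the user (assuming FFTW 3.3's documented planner flags and wisdom facility), a plan can be requested in patient mode and the planner's accumulated results saved to disk so that later runs skip the expensive search:

/* Sketch: plan in "patient" mode and cache the planner's results
 * ("wisdom") for reuse across runs. */
#include <fftw3.h>

fftw_plan plan_patient(int n, fftw_complex *in, fftw_complex *out)
{
    /* Load any previously saved planner results. */
    fftw_import_wisdom_from_filename("fftw.wisdom");

    /* FFTW_PATIENT enables the dynamic-programming search described
     * above: slower planning, but usually a faster resulting plan.  */
    fftw_plan p = fftw_plan_dft_1d(n, in, out, FFTW_FORWARD, FFTW_PATIENT);

    /* Save the accumulated wisdom for future runs. */
    fftw_export_wisdom_to_filename("fftw.wisdom");
    return p;
}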

Source: OpenStax, Computing the Fast Fourier Transform on SIMD Microprocessors. OpenStax CNX. Jul 15, 2012. Download for free at http://cnx.org/content/col11438/1.2