
To facilitate on-the-fly sound analysis with demanding DSP algorithms, we implemented a parallel three-stage pipelined sound engine that allows sound events to be time-correlated to in-game events. This pipelined implementation allows chunks of the song to be analyzed while new chunks are simultaneously loaded from the input device and already-analyzed chunks are played. This gives the analyzer ample time to perform its work, as long as the analysis of a chunk takes less time than that chunk takes to play. Because each stage of the pipeline (the Loader, the Analyzer, and the Player) runs in its own thread, the system can take advantage of multiprocessor systems such as the multi-core processors found in most of today's computers.

The pipelined sound engine

The full sound engine pipeline system.
  • Stage 1 - Loader: This stage translates the input song file from its source format (e.g. MP3 or WAV) into an internal uncompressed format and buffers the result into chunks that are sent on to the other pipeline stages.
  • Stage 2 - Analyzer: This stage takes an uncompressed buffer chunk and performs time-domain and frequency-domain analysis on it, generating a chunk of time-correlated events along with the buffer of song data those events belong to.
  • Stage 3 - Player: This stage takes the sound and event buffers from the Analyzer and plays them, correlating any events to what is currently being played. (A thread-per-stage sketch of the whole pipeline follows this list.)
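
As a concrete illustration, the three stages might be wired together as below. This is a minimal sketch in Java, not the project's actual code: the class name, the single-slot queues, and the placeholder chunk handling are all assumptions; real stages would decode from an MP3/WAV source, run the DSP, and feed an audio output.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // Thread-per-stage sketch of the pipeline (illustrative names throughout).
    // Each hand-off point holds at most one chunk, so an upstream stage
    // stalls until the stage below it has taken the previous chunk.
    public class SoundPipelineSketch {
        static final BlockingQueue<float[]> loaderOut = new ArrayBlockingQueue<>(1);
        static final BlockingQueue<float[]> analyzerOut = new ArrayBlockingQueue<>(1);

        public static void main(String[] args) {
            new Thread(SoundPipelineSketch::runLoader, "loader").start();
            new Thread(SoundPipelineSketch::runAnalyzer, "analyzer").start();
            new Thread(SoundPipelineSketch::runPlayer, "player").start();
        }

        // Stage 1: decode the source file into uncompressed chunks.
        static void runLoader() {
            try {
                for (int i = 0; i < 100; i++) {
                    float[] chunk = new float[44100];  // stand-in for one decoded chunk
                    loaderOut.put(chunk);              // blocks until the Analyzer takes it
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        // Stage 2: run the DSP over each chunk as it arrives.
        static void runAnalyzer() {
            try {
                while (true) {
                    float[] chunk = loaderOut.take(); // blocks until the Loader delivers
                    // ... time-domain and frequency-domain analysis would go here ...
                    analyzerOut.put(chunk);           // blocks until the Player takes it
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }

        // Stage 3: play analyzed chunks; real-time playback paces the pipeline.
        static void runPlayer() {
            try {
                while (true) {
                    float[] chunk = analyzerOut.take(); // blocks until analysis is done
                    // ... write chunk to the audio device and fire its events ...
                }
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }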

Each stage communicates with the next via an adapter with a loadBuffer method. This method blocks the current stage of the pipeline until the next stage is ready to receive the buffer. In this way, the pipeline fills up as much as possible while the Player stage is processing its buffer, and the Player indirectly controls the speed of the whole pipeline through this loading method. Great care was taken to minimize the time each buffer transfer takes. Rather than copying the buffer from each stage to the next, each stage simply passes ownership of its buffer to the next stage and allocates an entirely new buffer for itself. This allocation is much faster than copying the data, though with a poor allocator it can use more memory than necessary. Finally, each time a stage blocks, either waiting for the next stage to finish processing or waiting for new data, it relinquishes the processor, allowing the other stages to run at full speed.
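
A minimal sketch of such an adapter, assuming Java: loadBuffer is the method named above, while the StageAdapter class name and the takeBuffer counterpart are illustrative. Holding only a reference in the slot means no sample data is copied during the hand-off, and a thread parked in wait() yields the processor to the other stages, matching the behavior described above.

    // Single-slot adapter between two adjacent pipeline stages.
    class StageAdapter<T> {
        private T slot;           // at most one buffer in flight
        private boolean full;

        // Called by the upstream stage; blocks while the slot is occupied.
        synchronized void loadBuffer(T buffer) throws InterruptedException {
            while (full) wait();  // upstream stalls until downstream catches up
            slot = buffer;        // ownership passes by reference, no copy
            full = true;
            notifyAll();          // wake a downstream thread blocked in takeBuffer
        }

        // Called by the downstream stage; blocks while the slot is empty.
        synchronized T takeBuffer() throws InterruptedException {
            while (!full) wait(); // downstream stalls until a buffer arrives
            T buffer = slot;
            slot = null;          // drop our reference; the caller now owns it
            full = false;
            notifyAll();          // wake an upstream thread blocked in loadBuffer
            return buffer;
        }
    }

After handing off, the upstream stage simply allocates a fresh buffer for its next chunk, which is what keeps the transfer constant-time regardless of chunk size.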

Source: OpenStax, Elec 301 projects fall 2013. OpenStax CNX, Sep 14, 2014. Download for free at http://legacy.cnx.org/content/col11709/1.1