The results of several trials in varying cases.

Results

Note: all supporting code needed for the main MATLAB scripts to work is attached below.

Ideal case: MATLAB

Like last year, fastICA works well in MATLAB. For example, mixing and then separating a siren and a voice entirely within MATLAB works quite well, as can be seen in the figure below. In this very ideal environment, fastICA was able to separate the two mixed signals into the independent sources: a voice (the lower left spectrogram) and a siren (the lower right spectrogram).

For a better grasp of our results, here are the sound files of the mixed signal, the isolated siren, and the isolated voice, respectively. The MATLAB code used for this trial is attached after the sound files.
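For reference, a minimal sketch of this ideal-case trial is shown below. It is not the attached code: the file names and the mixing matrix are illustrative assumptions. It mixes two mono recordings with a known 2x2 matrix entirely inside MATLAB and then separates them with the fastica function from the FastICA toolbox.

% Minimal sketch of the ideal case: mix two recordings in MATLAB, then
% separate them with the FastICA toolbox. File names and the mixing
% matrix are illustrative assumptions, not the values we actually used.
[voice, fs] = audioread('voice.wav');       % hypothetical file names
[siren, ~]  = audioread('siren.wav');

N = min(length(voice), length(siren));      % use a common length
s = [voice(1:N,1)'; siren(1:N,1)'];         % sources as rows

A = [0.7 0.3; 0.4 0.6];                     % arbitrary instantaneous mixing matrix
x = A * s;                                  % the two "mixed" signals

icasig = fastica(x);                        % rows of icasig are the estimated sources

% Compare mixtures and estimates as spectrograms (ICA recovers the
% sources only up to ordering and scaling).
subplot(2,2,1); spectrogram(x(1,:), 256, [], [], fs, 'yaxis'); title('Mixed signal 1');
subplot(2,2,2); spectrogram(x(2,:), 256, [], [], fs, 'yaxis'); title('Mixed signal 2');
subplot(2,2,3); spectrogram(icasig(1,:), 256, [], [], fs, 'yaxis'); title('Estimated source 1');
subplot(2,2,4); spectrogram(icasig(2,:), 256, [], [], fs, 'yaxis'); title('Estimated source 2');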

Real-time acoustic mixing case: fastICA used

Performing this same feat using actual microphones, however, fails. For example, we recorded two sources, a voice and a tone, simultaneously with two microphones, which produced the spectrograms of the two mixed signals shown below. Once these mixed signals were passed through the fastICA algorithm, the source isolation was very poor. As can be seen in the lower two spectrograms of the figure below, the independent components look almost the same as the mixed signals we started with.

Here are the sound files for the two mixed signals and the two "separated sources". The code used to carry out this trial is attached last.
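The trial itself amounts to feeding the two microphone recordings to fastICA as the rows of one mixture matrix, roughly as in the sketch below. This is not the attached code, and the file names are assumptions.

% Sketch of the two-microphone trial: each recording becomes one row of
% the mixture matrix handed to fastICA. File names are assumptions.
[m1, fs] = audioread('mic1.wav');
[m2, ~]  = audioread('mic2.wav');

N = min(length(m1), length(m2));            % trim to a common length
x = [m1(1:N,1)'; m2(1:N,1)'];

icasig = fastica(x);                        % with real acoustic mixing, these
                                            % rows look almost identical to x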

Real-time acoustic mixing: STFICA or fastICA used

We conjecture two reasons why fastICA is unsuccessful in this scenario.

First, atmospheric and room conditions alter the signals through convolutive operations rather than the instantaneous scaling operations that fastICA assumes. Second, the characteristic response of the microphones both changes the signals and varies from microphone to microphone, introducing both inaccuracy and imprecision. The original ICA technique, fastICA, does not automatically account for these deviations. In addition, although fastICA does implement a single stage of prewhitening, one stage may not be enough to transform the input mixed signals so that they look independent of one another in time and space, which is required to satisfy fastICA's assumption of independent inputs. We therefore decided to use the STFICA model in order to account for the convolutive mixing matrix involved and to allow a user-specifiable number of prewhitening stages.
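The distinction between the two mixing models can be illustrated with a short sketch (the sources and impulse responses below are made up purely for illustration): fastICA assumes each microphone hears an instantaneous weighted sum of the sources, while a real room delivers each source to each microphone through its own impulse response, i.e., a convolutive mixture.

% Instantaneous (scaling) mixing assumed by fastICA: x = A * s
s1 = randn(1, 1000);                        % stand-in sources
s2 = randn(1, 1000);
A  = [0.8 0.5; 0.3 0.9];
x_instant = A * [s1; s2];

% Convolutive mixing closer to a real room: each source reaches each
% microphone through its own impulse response (delays and echoes).
% These impulse responses are invented purely for illustration.
h11 = [1 0 0 0.3];   h12 = [0 0.6 0 0.2];
h21 = [0 0.4 0.1 0]; h22 = [1 0 0.5 0];
x1 = filter(h11, 1, s1) + filter(h12, 1, s2);
x2 = filter(h21, 1, s1) + filter(h22, 1, s2);
x_convolutive = [x1; x2];                   % this is the model STFICA targets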

At this point we experimented with the number of prewhitening stages by setting an iteration count and then inspecting the output spectrograms for each iteration. Our group could not find a pattern or relation between the number of prewhitening iterations and the effectiveness of the source isolation, but we did observe that more than one stage helps the source-isolation process. Sometimes one iteration would result in some separation, while the next few iterations resulted in no source separation at all.
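Our sweep over the number of prewhitening stages looked roughly like the sketch below. The call stfica(x, numStages) is only a placeholder for the interface of the attached STFICA code; it is not a standard toolbox function.

% Sweep the number of prewhitening stages and inspect the outputs.
% stfica(x, numStages) is a placeholder interface, not a real toolbox call.
for numStages = 1:5
    est = stfica(x, numStages);             % hypothetical STFICA interface
    figure('Name', sprintf('%d prewhitening stages', numStages));
    subplot(2,1,1); spectrogram(est(1,:), 256, [], [], fs, 'yaxis'); title('Output 1');
    subplot(2,1,2); spectrogram(est(2,:), 256, [], [], fs, 'yaxis'); title('Output 2');
end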

Using the STFICA algorithm in some real-world cases worked out better than the original fastICA procedure. In one experiment, we produced a pure tone and recorded the source with two microphones. The expected sources to be isolated were the tone and any ambient noise. The mixed signal from each of the two microphones was passed through the fastICA code and also, separately, through the STFICA code for comparison. Even in this very simple case, fastICA produced poor results, as can be seen in the middle two spectrograms of the output independent components: they look almost identical to the original mixed signals that were the inputs. STFICA, on the other hand, separated the pure tone from the white noise exceptionally well. As can be seen in the last row of the figure, the tone (bottom left spectrogram) was well isolated from the ambient white noise (bottom right spectrogram).

Here are the sound files for the two mixed signals, the two "separated signals" produced by fastICA, and the two separated components produced by STFICA.
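The three-row figure described above can be reproduced with a short plotting sketch like the one below, again treating stfica(x, numStages) as a placeholder for the interface of the attached code.

% Plot the mixtures (top), fastICA outputs (middle), and STFICA outputs
% (bottom) as a 3x2 grid of spectrograms. stfica is a placeholder name.
outputs = {x; fastica(x); stfica(x, 3)};
labels  = {'Mixed', 'fastICA', 'STFICA'};
for r = 1:3
    sig = outputs{r};
    for c = 1:2
        subplot(3, 2, (r-1)*2 + c);
        spectrogram(sig(c,:), 256, [], [], fs, 'yaxis');
        title(sprintf('%s %d', labels{r}, c));
    end
end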

In more complicated situations, where the sources were multiple human speakers, a human speaker and a tone, or other combinations, we did not achieve the same success. The modified algorithm sometimes made one voice more prominent than the other, but it appeared to be filtering in a way that did not achieve the desired separation. The success here was not as great as in the simple tone-with-noise case.





Source:  OpenStax, Elec 301 projects fall 2008. OpenStax CNX. Jan 22, 2009 Download for free at http://cnx.org/content/col10633/1.1
