
The thought process

Initial considerations

The program we initially developed could accept a single vowel sound lasting one or two seconds. Naturally, we wished to extend it to handle sequences of vowels with variable separation in time. With the increased complexity of the project, however, came several new problems to consider. The first and most important was whether to parse the data into sections and "guess" the location of each vowel. The second, closely connected to the first, was finding a method for differentiating noise from actual sound content. The last was implementing that method efficiently in MATLAB.

For reasons that will become clear shortly, we chose to identify vowels by creating a continuous, nonoverlapping partition of the signal instead of simply separating out the vowel content.

The math

Consider the sample signal below. The actual vowel content is contained within samples 2000-3500. White Gaussian noise has been added between samples ~1000-4500. This superposition represents a combination of external noise infiltrating the signal and unwanted content in the speech itself. To clarify the latter, consider the following example: in determining the vowel in "hat," we must find a way to deal with 1) external noise, 2) the consonants 'h' and 't', and 3) the transitions between consonant and vowel and vice versa.
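A signal like the one described can be sketched as follows. This is not the authors' MATLAB code; it is an illustrative Python construction, and the formant-like frequencies, the sampling rate, and the noise level are all assumptions chosen only to mimic the figure's layout (vowel content in samples 2000-3500, noise in ~1000-4500).

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 5000
fs = 8000  # assumed sampling rate in Hz; the source does not specify one
signal = np.zeros(n_samples)

# Vowel-like content between samples 2000 and 3500: a sum of two sinusoids
# standing in for formant structure (the frequencies are illustrative).
t = np.arange(2000, 3500) / fs
signal[2000:3500] = np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

# White Gaussian noise between samples ~1000 and 4500.
signal[1000:4500] += 0.5 * rng.standard_normal(3500)
```

The regions before sample 1000 and after sample 4500 stay silent, so the test signal has noise-only stretches, a noisy vowel stretch, and clean silence, matching the three cases the method must distinguish.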

Sample Signal of the Word 'Hat' with Problematic Noise

If we take the chunk between samples 1000 and 4000 and guess that it is the vowel (the best we can do in the "parsing" case), we include a great deal of unwanted noise that will distort our formant estimates. Since roughly half the power in this signal is noise, our results may be corrupted. How might we get around this?

We looked to a partitioning method instead, because it promised to be more robust to noise. Consider taking the entire signal and dividing it into chunks of 500 samples apiece.
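The partitioning step itself is simple. A minimal Python sketch (the source's implementation is in MATLAB; the function name and the choice to drop any trailing partial chunk are our assumptions):

```python
import numpy as np

def partition(signal, chunk_size=500):
    """Split a signal into contiguous, nonoverlapping chunks of
    chunk_size samples. Leftover samples that do not fill a whole
    chunk are dropped."""
    n_chunks = len(signal) // chunk_size
    return signal[:n_chunks * chunk_size].reshape(n_chunks, chunk_size)

# A 5000-sample signal at 500 samples apiece yields 10 chunks.
chunks = partition(np.arange(5000))
```

Each row of the result is one chunk, ready to be handed to a per-chunk formant estimator.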

Let's say that we iterate through the sequence of chunks and put the following restriction on our method: "Whenever four chunks of our signal match the same vowel's formants, we say that vowel is definitively in the signal." Proceeding this way, we notice that samples 1000 to 2000 yield two formant pairs that *might* be similar, while samples 2000 to 4000 yield four very similar formant pairs because of the vowel content. Since the formants identified in the initial noise don't satisfy the restriction, we throw them out. Since the formants identified in the vowel itself do satisfy it, we keep them as a means of identifying the vowel. While time-"parsing" can't really deal with the complication of noise, it is easy to see that this new "partition" method effectively filters out the effects of inconsistent noise (both external and internal) by throwing out any unreliable data.
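The "four matching chunks" restriction can be sketched as a run-length check over per-chunk vowel guesses. The function name, the label representation (one vowel guess or None per chunk), and the example labels below are all hypothetical; the per-chunk formant matcher that would produce such labels is not shown.

```python
def detect_vowel(chunk_labels, run_length=4):
    """Return the vowels that appear in at least run_length consecutive
    chunk guesses; shorter runs are treated as noise and discarded.

    chunk_labels holds one vowel guess (or None) per chunk, e.g. the
    output of a per-chunk formant matcher.
    """
    detected = []
    run = 0
    prev = None
    for label in chunk_labels:
        # Extend the run only when this chunk matches the previous one.
        run = run + 1 if (label is not None and label == prev) else 1
        if label is not None and run == run_length and label not in detected:
            detected.append(label)
        prev = label
    return detected

# Two noise chunks happen to match 'a', but only the four-chunk run counts:
labels = [None, None, 'a', 'a', 'ae', 'ae', 'ae', 'ae', None, None]
result = detect_vowel(labels)
```

Here the two-chunk 'a' run (the "might be similar" noise formants) is discarded, while the four-chunk 'ae' run is kept, mirroring the filtering behavior described above.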





Source:  OpenStax, Vowel recognition using formant analysis. OpenStax CNX. Dec 17, 2014 Download for free at http://legacy.cnx.org/content/col11729/1.5
