
Beginning of the getMelody method

This method returns an array containing three seconds of a pure sinusoidal tone. When this array is processed, a 1000 Hz tone is emitted with equal amplitude from the left and right speakers. It is interesting to compare this sound with the sound of a square wave having the same fundamental frequency.

Listing 2 shows the beginning of the overridden getMelody method. (Recall that an abstract version of this method is inherited from the class named AudioSignalGenerator02 -- see Listing 7.)

With one exception, the code in Listing 2 is essentially the same as the corresponding WhiteNoise code from the earlier module. The exception is the statement that declares a variable named freq and sets its value to 1000.0. As you will see later, the value stored in this variable establishes the frequency of the sinusoidal tone. Beyond that, I won't discuss the code in Listing 2 any further.

Listing 2 . Beginning of the getMelody method.
  byte[] getMelody(){
    //Recall that the default is channels=1 for monaural.
    System.out.println("audioParams.channels = " + audioParams.channels);

    //Each channel requires two 8-bit bytes per 16-bit sample.
    int bytesPerSampPerChan = 2;

    //Override the default sample rate. Allowable sample rates are 8000,
    // 11025, 16000, 22050, 44100 samples per second.
    audioParams.sampleRate = 8000.0F;

    //Set the length of the melody in seconds.
    double lengthInSeconds = 3.0;

    //Set the frequency of the tone.
    double freq = 1000.0;

    //Create an output data array sufficient to contain the tone at
    // "sampleRate" samples per second, "bytesPerSampPerChan" bytes per
    // sample per channel, and "channels" channels.
    melody = new byte[(int)(lengthInSeconds * audioParams.sampleRate
                            * bytesPerSampPerChan * audioParams.channels)];
    System.out.println("melody.length = " + melody.length);
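Listing 2 stops before the samples are actually computed. As a rough sketch of what such a sample-generation loop could look like, consider the following self-contained class. The class name SineToneSketch, the amplitude of 16000, and the loop itself are my assumptions for illustration; they are not taken from the module's actual code, which appears in later listings.

```java
// Hypothetical sketch: fill a byte array with a pure sine tone.
// Variable names (sampleRate, freq, lengthInSeconds) follow Listing 2.
public class SineToneSketch {

    public static byte[] getTone(float sampleRate, double freq,
                                 double lengthInSeconds) {
        int sampleCount = (int)(lengthInSeconds * sampleRate);
        // Two bytes per 16-bit sample, one channel (monaural).
        byte[] melody = new byte[sampleCount * 2];
        for (int i = 0; i < sampleCount; i++) {
            double time = i / (double)sampleRate;
            // Amplitude 16000 is an arbitrary choice below the
            // 16-bit limit of 32767.
            short sample =
                (short)(16000 * Math.sin(2 * Math.PI * freq * time));
            melody[2 * i]     = (byte)(sample >> 8);   // high byte first
            melody[2 * i + 1] = (byte)(sample & 0xFF); // then low byte
        }
        return melody;
    }

    public static void main(String[] args) {
        byte[] melody = getTone(8000.0F, 1000.0, 3.0);
        System.out.println("melody.length = " + melody.length); // 48000
    }
}
```

Note that three seconds at 8000 samples per second and two bytes per sample yields the 48000-byte array computed by the expression in Listing 2.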

Required audio data format

I explained the required format of the audio data in the melody array in an earlier module. I will repeat that explanation here for convenience.

Given the values that we are using in the AudioFormatParameters01 object, the format requirements for monaural and stereo are shown below. (Note that in both cases, each audio value must be a signed 16-bit value decomposed into a pair of 8-bit bytes.)

Monaural, channels = 1

For mono, each successive pair of bytes in the array must contain one audio value. The element with the lower index must contain the most significant eight bits of the 16-bit audio value.
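To make that byte ordering concrete, here is a small sketch. The class and method names are hypothetical; the ordering itself (most significant byte at the lower index, i.e. big-endian) is as stated above.

```java
public class MonoFormatDemo {

    // Decompose one signed 16-bit audio value into a pair of bytes,
    // most significant byte at the lower index.
    public static byte[] toBytes(short audioValue) {
        return new byte[] {
            (byte)(audioValue >> 8),   // index 0: high-order eight bits
            (byte)(audioValue & 0xFF)  // index 1: low-order eight bits
        };
    }

    public static void main(String[] args) {
        byte[] pair = toBytes((short)0x1234);
        System.out.printf("%02X %02X%n", pair[0], pair[1]); // prints 12 34
    }
}
```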

Stereo, channels = 2

For stereo, alternating pairs of bytes must each contain one audio value in the same byte order as for mono. One pair of bytes is routed to the left speaker and the other pair of bytes is routed to the right speaker (almost) simultaneously.

Within the four bytes, the pair with the lowest index is routed to the left speaker and the other pair is routed to the right speaker.
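The four-byte stereo frame described above can be sketched as follows. The class name StereoFrameDemo is hypothetical; the layout (left pair first, each pair in big-endian order) follows the description in the text.

```java
public class StereoFrameDemo {

    // Pack one stereo frame: the left-channel sample occupies the pair
    // with the lowest index, followed by the right-channel sample, each
    // with its most significant byte first. Four bytes per frame total.
    public static byte[] frame(short left, short right) {
        return new byte[] {
            (byte)(left >> 8),  (byte)(left & 0xFF),  // left speaker pair
            (byte)(right >> 8), (byte)(right & 0xFF)  // right speaker pair
        };
    }

    public static void main(String[] args) {
        byte[] f = frame((short)0x0102, (short)0x0304);
        System.out.printf("%02X %02X %02X %02X%n",
                          f[0], f[1], f[2], f[3]); // prints 01 02 03 04
    }
}
```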

I will also remind you that the code in the SquareWave class used bit shifting and casting to decompose the short value into a pair of byte values. We will accomplish that in a different and somewhat simpler way in this module.
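This excerpt does not show what that simpler way is. One plausible candidate, sketched here purely as an assumption, is the java.nio.ByteBuffer class: a ShortBuffer view of the melody array lets you store whole short values and have the byte decomposition done for you, and ByteBuffer's default big-endian order matches the required format (most significant byte at the lower index).

```java
import java.nio.ByteBuffer;
import java.nio.ShortBuffer;

public class ByteBufferSketch {

    public static void main(String[] args) {
        byte[] melody = new byte[4];
        // A ShortBuffer view backed by the melody array. ByteBuffer is
        // big-endian by default, so each put() stores the most
        // significant byte at the lower index, as required.
        ShortBuffer shortBuffer = ByteBuffer.wrap(melody).asShortBuffer();
        shortBuffer.put((short)0x1234);
        shortBuffer.put((short)0x5678);
        System.out.printf("%02X %02X %02X %02X%n",
            melody[0], melody[1], melody[2], melody[3]); // prints 12 34 56 78
    }
}
```

With this approach, no explicit shifting or casting to byte is needed in the sample-generation loop.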





Source:  OpenStax, Accessible objected-oriented programming concepts for blind students using java. OpenStax CNX. Sep 01, 2014 Download for free at https://legacy.cnx.org/content/col11349/1.17
