The pan value is passed as a second parameter when creating a SoundTransform object:
var soundTransform:SoundTransform = new SoundTransform(1, -1);
The object can be applied to a single sound via its SoundChannel:
var channel:SoundChannel = sound.play(0, 1, soundTransform);
You can also pan all the sounds via SoundMixer:
import flash.media.SoundMixer;
SoundMixer.soundTransform = new SoundTransform(1, -1);
A common technique for panning back and forth between channels is to use the Math.sin function, which returns a value between -1 and 1:
var panPhase:Number = 0;
var transformObject:SoundTransform = new SoundTransform();
var channel:SoundChannel = sound.play(); // assumes the Sound object from above

// update the pan on every frame
addEventListener(Event.ENTER_FRAME, onEnterFrame);

function onEnterFrame(event:Event):void {
    transformObject.pan = Math.sin(panPhase);
    channel.soundTransform = transformObject;
    panPhase += 0.05;
}
Raw Data and the Sound Spectrum
With the arrival of digital sound, a new art form quickly followed: the visualization of sound.
NOTE
A sound waveform is the shape of the graph representing the amplitude of a sound over time; this amplitude-over-time representation is also called the time domain. The amplitude is the distance of a point on the waveform from the equilibrium line. The peak is the highest point in a waveform.
You can read a digital signal in real time and use its amplitude values to represent the sound visually.
NOTE
Making Pictures of Music is a project run by mathematics and music academics that analyzes and visualizes musical pieces. It uses Unsquare Dance, a complex multi-instrumental piece created by Dave Brubeck. For more information, go to http://www.uwec.edu/walkerjs/PicturesOfMusic/MultiInstrumental%20Complex%20Rhythm.htm.
In AIR, you can draw a sound waveform using the computeSpectrum method of the SoundMixer class. This method takes a snapshot of the current sound wave and stores the data in a ByteArray:
SoundMixer.computeSpectrum(bytes, false, 0);
The method takes three parameters. The first is the container ByteArray. The second optional parameter is FFTMode (the fast Fourier transform); false, the default, returns a waveform, and true returns a frequency spectrum. The third optional parameter is the stretch factor; 0 is the default and represents 44.1 kHz. Resampling at a lower rate results in a smoother waveform and less frequency detail. Figure 11-1 shows the drawing generated from this data.
Figure 11-1. A waveform (top) and a frequency spectrum (bottom), both generated from the same piece of audio by setting the fast Fourier transform value first to false and then to true
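For example, capturing both views of the same moment of playback is just a matter of flipping the second parameter (here bytes is a previously created ByteArray):

// time-domain snapshot: amplitude values between -1 and 1
SoundMixer.computeSpectrum(bytes, false, 0);

// frequency-domain snapshot of the same playback, computed with the FFT
SoundMixer.computeSpectrum(bytes, true, 0);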
A waveform snapshot contains 512 floating-point values: 256 for the left channel followed by 256 for the right channel. Each value is between -1 and 1 and represents the amplitude of a point in the sound waveform.
If you trace the length of the ByteArray, it returns a value of 2,048. This is because a floating-point value is made of four bytes: 512 * 4 = 2,048.
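You can verify this layout by reading the floats back out of the ByteArray (a minimal sketch):

SoundMixer.computeSpectrum(bytes, false, 0);
trace(bytes.length); // 2048: 512 floating-point values * 4 bytes each

for (var i:int = 0; i < 512; i++) {
    // the first 256 reads are the left channel, the next 256 the right
    var amplitude:Number = bytes.readFloat(); // between -1 and 1
}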
Our first approach is to use the drawing API. Drawing a vector is appropriate for a relatively simple sound like a microphone audio recording. For a longer, more complex track, we will look at a different approach after this example.
We are using two loops to read the bytes, one at a time. The loop for the left channel goes from 0 to 256. The loop for the right channel starts at 256 and goes back down to 0. The value of each byte, between ‒1 and 1, is multiplied by a constant to obtain a value large enough to see. Finally, we draw a line using the loop counter for the x coordinate and we subtract the byte value from the vertical position of the equilibrium line for the y coordinate.
The same process is repeated on every ENTER_FRAME event until the music stops. Don't forget to remove the listener to stop calling the drawMusic function:
import flash.display.Sprite;
import flash.events.Event;
import flash.media.Sound;
import flash.media.SoundChannel;
import flash.media.SoundMixer;
import flash.net.URLRequest;
import flash.utils.ByteArray;

const CHANNEL_LENGTH:int = 256; // channel division
// equilibrium line y position and byte value multiplier
const PEAK:int = 100;

var bytes:ByteArray = new ByteArray();
var sprite:Sprite = new Sprite();
var soundChannel:SoundChannel;
var sound:Sound = new Sound();
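The listing continues past these declarations; a minimal sketch of the remaining steps described above might look like this (the MP3 URL and line style are illustrative assumptions, not from the original):

sound.load(new URLRequest("mySound.mp3")); // illustrative URL
soundChannel = sound.play();
addChild(sprite);
addEventListener(Event.ENTER_FRAME, drawMusic);

function drawMusic(event:Event):void {
    var value:Number;
    var i:int;
    SoundMixer.computeSpectrum(bytes, false, 0);
    // erase the previous frame's drawing
    sprite.graphics.clear();
    sprite.graphics.lineStyle(0, 0x000000);
    sprite.graphics.moveTo(0, PEAK);
    // left channel: x moves from left to right
    for (i = 0; i < CHANNEL_LENGTH; i++) {
        value = bytes.readFloat() * PEAK;
        sprite.graphics.lineTo(i * 2, PEAK - value);
    }
    // right channel: x moves back from right to left
    for (i = CHANNEL_LENGTH; i > 0; i--) {
        value = bytes.readFloat() * PEAK;
        sprite.graphics.lineTo(i * 2, PEAK - value);
    }
}

// stop drawing when the music ends
soundChannel.addEventListener(Event.SOUND_COMPLETE, onComplete);

function onComplete(event:Event):void {
    removeEventListener(Event.ENTER_FRAME, drawMusic);
}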