About Rainbow Notes

How does the applet work?

The principle behind it is simple: the applet functions as an electronic sheet of music and can, when requested, convert this symbolic information into an array of byte values that will sound like the corresponding music when sent to the loudspeaker of your machine through your WWW browser. (See the sun.audio.* class package info for details.) Performing the actual byte-by-byte conversion is the tricky part, but that's nothing end users will need to concern themselves with.

For each melody, only the symbolic note data needs to be loaded from the server. That's a very compact way of representing sound. So compact, in fact, that I decided to write a Java class for making use of these data files in general applets.

What are the advantages of the Melody class?

Storage space (and consequently download time), and to some extent sound quality.

A half-minute tune that takes up more than 230 kilobytes as a normal audio file (at 8000 one-byte samples per second, 30 seconds comes to 240,000 bytes; call it 115 K at 2:1 compression) normally takes no more than about 2 K as note data. That's a significant gain if you've got a slow connection or a crowded network. However, processing the notes into audible data once they're in your computer takes a bit of time too (though not as much time as the "making" in Rainbow Notes, where I haven't optimized the algorithm), so the gain won't be quite that dramatic. Still, even on a run-of-the-mill machine it should beat downloading a normal audio file through a 28.8 modem hands down.

What about the sound quality? Well, since this audio information hasn't been recorded through a physical microphone, it doesn't have any static or any other real-world distortion. It sounds clearer, but then again the things you can create with the melody editor are rather limited and primitive. They might not live up to the coolness you'd want for your own applet sounds. So go write your own melody editor! It's not that difficult.

Here is the source code, by the way.

Oh yeah, and people who -- like myself -- know very little about music and can't play any instrument may for that reason alone find Rainbow Notes an occasionally useful tool for creating applet melodies.

What does the Melody class do?

It just loads the note data and converts it into an audio byte array. You can then play that by sending it as an AudioDataStream to the AudioPlayer (both classes live in the sun.audio package, so remember to import sun.audio.*). Each Melody object has an InputStream variable called soundStream, which is best used for that purpose:
mel.soundStream=new AudioDataStream(new AudioData(mel.rawAudio));
AudioPlayer.player.start(mel.soundStream);
If you'd rather "loop" the melody, create a ContinuousAudioDataStream instead:
mel.soundStream=new ContinuousAudioDataStream(new AudioData(mel.rawAudio));
AudioPlayer.player.start(mel.soundStream);
You can stop it by calling the AudioPlayer.player.stop() method:
AudioPlayer.player.stop(mel.soundStream);
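Put together, a typical use inside an applet could look like the sketch below. This is just an illustration of the calls above: the Melody constructor shown here is hypothetical, since how the note data gets loaded depends on the class itself.

import sun.audio.*;

public class MelodyDemo extends java.applet.Applet {
    Melody mel;

    public void init() {
        // Hypothetical constructor -- load the note data however
        // the Melody class actually does it.
        mel = new Melody(getDocumentBase(), "daisy.txt");
    }

    public void start() {
        // Loop the tune while the page is visible.
        mel.soundStream = new ContinuousAudioDataStream(new AudioData(mel.rawAudio));
        AudioPlayer.player.start(mel.soundStream);
    }

    public void stop() {
        // Silence it when the user leaves the page.
        AudioPlayer.player.stop(mel.soundStream);
    }
}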

Ingredients

Rainbow Notes needs two class files to work: rainbow.class and ActiveCanvas.class. The former is the applet itself; the latter is a helper class that handles user input to the curve drawing areas.

It also needs some image data for the notes and stuff: notes.gif.

The ready-made melody files are daisy.txt, mcdonald.txt, bosn.txt, and willow.txt. To let the applet load them properly, insert an applet parameter for each one like this:

<param name="file1" value="daisy.txt">
<param name="file2" value="mcdonald.txt">
<param name="file3" value="bosn.txt">
<param name="file4" value="willow.txt">
The first, "file1", will be the default melody when the applet starts. To make the applet use proper titles for the files, insert corresponding applet parameters like this:
<param name="title1" value="Daisy, Daisy">
<param name="title2" value="Old Macdonald">
<param name="title3" value="The Bos'n Rag">
<param name="title4" value="Weeping Willow">
The same principle applies to whatever melodies you create yourself.

Technical stuff

An AudioDataStream (or ContinuousAudioDataStream) object is based on an array of byte values (0-255) representing sound amplitudes in the range -8192 to 8191. The applet first computes the latter value by adding together sound contributions from all voices (which in turn are products of waveform and envelope values), and then picks the final byte value from a lookup table prepared during initialization. This has to be done 8000 times for each second of resulting music, which is why it takes a bit of time before you get to hear anything.
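In code, the conversion loop could look something like this sketch (the Voice type and all names here are my own invention, not the applet's actual source):

// Sketch of the conversion loop. Each Voice is assumed to expose
// waveformAt() and envelopeAt() for its two curves.
byte[] makeAudio(Voice[] voices, int numSamples, byte[] lookup) {
    byte[] audio = new byte[numSamples];   // one byte per sample, 8000 per second
    for (int i = 0; i < numSamples; i++) {
        int amp = 0;
        for (int v = 0; v < voices.length; v++) {
            // Each voice contributes its waveform value times its envelope value.
            amp += voices[v].waveformAt(i) * voices[v].envelopeAt(i);
        }
        // amp is kept within -8192..8191 by the scaling described below,
        // so the lookup table prepared at initialization does the rest.
        audio[i] = lookup[amp + 8192];
    }
    return audio;
}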

If we have two voices, red and blue, defined by the waveform and envelope curves

[waveform and envelope curves for the red voice]

and

[waveform and envelope curves for the blue voice]

and a bar with these notes

[one bar of notes for the two voices]

the note data is converted into an amplitude curve like this:

[schematic amplitude curve for the bar]

(In reality the curve oscillates much more quickly, but this is just a schematic illustration.)

The maximum volume I've given any voice is 4032. This creates a problem: what happens if the sound data contributions from three or more voices add up to more than 8191 (or less than -8192)? I solve this by first searching through the note data for such occurrences, finding the theoretical maximum volume, and using that to compute a scale factor that I then apply throughout the making of the melody. The inevitable result is that if you use many voices at the same time, they will have to "share" the max volume and each one will sound slightly weaker than if they had been played individually.
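As a sketch, with made-up names again, the scaling amounts to something like:

// Worst-case sum of simultaneous voice volumes, found by scanning
// the note data before any audio is generated (hypothetical helper).
int maxSum = theoreticalMaxVolume(noteData);
// Only scale down when the worst case would actually clip.
double scale = (maxSum > 8191) ? 8191.0 / maxSum : 1.0;
// Later, applied to every sample:
int amp = (int) (sumOfVoiceContributions * scale);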

As previously mentioned, this data in turn needs to be converted to a form the AudioPlayer can understand. Assuming I've got the rules behind the byte value conversion right, this is how it's intended to work:

In range        Out range    |  In range          Out range
0 to 31         255 to 240   |  -32 to -1         112 to 127
32 to 95        239 to 224   |  -96 to -33        96 to 111
96 to 223       223 to 208   |  -224 to -97       80 to 95
224 to 479      207 to 192   |  -480 to -225      64 to 79
480 to 991      191 to 176   |  -992 to -481      48 to 63
992 to 2015     175 to 160   |  -2016 to -993     32 to 47
2016 to 4063    159 to 144   |  -4064 to -2017    16 to 31
4064 to 8191    143 to 128   |  -8192 to -4065    0 to 15
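
If I've read my own table right, the lookup table could be built directly from those ranges, like this sketch (this is essentially 8-bit mu-law encoding, which is the format sun.audio expects; all names are mine):

// Builds a 16384-entry table so the inner loop only needs
// lookup[amp + 8192] for amplitudes in -8192..8191.
static byte[] buildLookup() {
    int[] top = {31, 95, 223, 479, 991, 2015, 4063, 8191}; // row upper bounds
    byte[] lookup = new byte[16384];
    for (int amp = -8192; amp <= 8191; amp++) {
        boolean neg = amp < 0;
        int mag = neg ? -amp - 1 : amp;        // fold -8192..-1 onto 8191..0
        int seg = 0;
        while (mag > top[seg]) seg++;          // find the table row
        int lo = (seg == 0) ? 0 : top[seg - 1] + 1;
        int step = (top[seg] - lo + 1) / 16;   // 16 output codes per row
        int out = 255 - 16 * seg - (mag - lo) / step;
        lookup[amp + 8192] = (byte) (neg ? out - 128 : out);
    }
    return lookup;
}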

You will notice that when you flip between voices, the envelope or waveform changes you make usually won't look exactly the same when you get back. That's because different browsers will assign differently sized drawing areas to these graphs, so it wasn't possible to find a curve data representation with a one-to-one correspondence to what you see on your screen. The applet consequently stores an approximation of what you've drawn, which will turn out slightly different when it shows up again.

Back to the applet