For each melody, only the symbolic note data needs to be loaded from the server. That's a very compact way of representing sound. So compact, in fact, that I decided to write a Java class for making use of these data files in general applets.
A half-minute tune that takes up more than 230 kilobytes as a normal audio file (or 115 K at 2:1 compression) normally doesn't take more than about 2 K as note data. That's a significant gain if you've got a slow connection or a crowded network. However, processing the notes into audible data once they're in your computer takes a bit of time too (though not as much as the "making" in Rainbow Notes, where I haven't optimized the algorithm), so the gain won't be quite that dramatic. Even so, on a run-of-the-mill machine it should beat downloading a normal audio file through a 28.8 modem hands down.
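(As a rough sanity check of those figures, and assuming the 8000 samples/s, 8-bit format the Java AudioPlayer handles: 30 s × 8000 bytes/s = 240,000 bytes, which is indeed a bit more than 230 K.)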
What about the sound quality? Well, since this audio information hasn't been recorded through a physical microphone, it doesn't have any static or any other real-world distortion. It sounds clearer, but then again the things you can create with the melody editor are rather limited and primitive. They might not live up to the coolness you'd want for your own applet sounds. So go write your own melody editor! It's not that difficult.
Here is the source code, by the way.
Oh yeah, and people who, like myself, know very little about music and can't play any instrument may for that reason alone find Rainbow Notes an occasionally useful tool for creating applet melodies.
mel.soundStream=new AudioDataStream(new AudioData(mel.rawAudio));
AudioPlayer.player.start(mel.soundStream);

If you'd rather "loop" the melody, create a ContinuousAudioDataStream instead:

mel.soundStream=new ContinuousAudioDataStream(new AudioData(mel.rawAudio));
AudioPlayer.player.start(mel.soundStream);

You can stop it by calling the AudioPlayer.player.stop() method:

AudioPlayer.player.stop(mel.soundStream);
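Putting the three snippets together, a small helper class might look like the sketch below. This is my own illustration rather than code from the applet; "Melody" and its fields are assumed names, and sun.audio is an undocumented package, so it may not be available in every Java environment:

```java
import java.io.InputStream;
import sun.audio.AudioData;
import sun.audio.AudioDataStream;
import sun.audio.AudioPlayer;
import sun.audio.ContinuousAudioDataStream;

class Melody {
    byte[] rawAudio;          // the converted sample bytes described below
    InputStream soundStream;  // the stream currently playing, if any

    // Play the melody once, or loop it until stop() is called.
    void play(boolean loop) {
        stop();  // avoid layering a second copy over a running one
        AudioData data = new AudioData(rawAudio);
        soundStream = loop ? new ContinuousAudioDataStream(data)
                           : new AudioDataStream(data);
        AudioPlayer.player.start(soundStream);
    }

    void stop() {
        if (soundStream != null) {
            AudioPlayer.player.stop(soundStream);
            soundStream = null;
        }
    }
}
```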
It also needs some image data for the notes and stuff: notes.gif.
The ready-made melody files are daisy.txt, mcdonald.txt, bosn.txt, and willow.txt. To make the applet load them properly, insert an applet parameter for each one like this:
<param name="file1" value="daisy.txt"> <param name="file2" value="mcdonald.txt"> <param name="file3" value="bosn.txt"> <param name="file4" value="willow.txt">The first, "file1", will be the default melody when the applet starts. To make the applet use proper titles for the files, insert corresponding applet parameters like this:
<param name="title1" value="Daisy, Daisy"> <param name="title2" value="Old Macdonald"> <param name="title3" value="The Bos'n Rag"> <param name="title4" value="Weeping Willow">The same principle applies to whatever melodies you create yourself.
If we have two voices, red and blue, each defined by its own waveform and envelope curves, and a bar containing notes for both, the note data is converted into an amplitude curve by adding up the two voices' contributions.

[Illustrations: the red and blue voices' waveform and envelope curves, the bar of notes, and the resulting amplitude curve. In reality the curve oscillates much more quickly; the drawing is only schematic.]
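In code, the summing idea might look something like this. It's a minimal sketch assuming a sine waveform, a linear fade-out envelope, and an 8000 Hz sample rate; the applet of course uses the hand-drawn curves instead:

```java
class Mixer {
    static final int SAMPLE_RATE = 8000;  // samples per second (assumed)

    // One period of the voice's waveform, phase 0..1, amplitude -1..1.
    static double waveform(double phase) {
        return Math.sin(2 * Math.PI * phase);
    }

    // Volume factor over the note's duration, position 0..1.
    static double envelope(double pos) {
        return 1.0 - pos;  // simple fade-out
    }

    // Add one note's contribution to the shared amplitude curve.
    static void addNote(int[] curve, int start, int length,
                        double frequency, int volume) {
        for (int t = 0; t < length && start + t < curve.length; t++) {
            double phase = (t * frequency / SAMPLE_RATE) % 1.0;
            curve[start + t] +=
                (int) (volume * envelope((double) t / length) * waveform(phase));
        }
    }
}
```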
The maximum volume I've given any voice is 4032. This creates a problem: what happens if three or more voices' contributions add up to more than 8191 (or less than -8192)? I solve it by first searching through the note data for such occurrences, finding the theoretical maximum volume, and using that to compute a scale factor which I then apply throughout the making of the melody. The inevitable result is that if you use many voices at the same time, they have to "share" the maximum volume, and each one will sound slightly weaker than if it had been played alone.
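A sketch of that pre-scan as I understand the description (the Note fields are assumed names): for every note's starting instant, sum the volumes of all notes sounding at that moment, take the worst case, and scale only if it exceeds the limit.

```java
class Note {
    int start, length;  // position and duration, in samples
    int volume;         // at most 4032 per voice
}

class Scaler {
    // The loudest instant always falls on some note's start, so checking
    // each start against all overlapping notes finds the theoretical max.
    static double scaleFactor(Note[] notes) {
        int worst = 0;
        for (int i = 0; i < notes.length; i++) {
            int sum = 0;
            for (int j = 0; j < notes.length; j++) {
                Note b = notes[j];
                if (b.start <= notes[i].start
                        && notes[i].start < b.start + b.length)
                    sum += b.volume;
            }
            if (sum > worst) worst = sum;
        }
        return worst > 8191 ? 8191.0 / worst : 1.0;  // never scale up
    }
}
```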
As previously mentioned, this data in turn needs to be converted to a form the AudioPlayer can understand. Assuming I've got the rules behind the byte value conversion right, this is how it's intended to work:
| In range | Out range | In range | Out range |
|---|---|---|---|
| 0 to 31 | 255 to 240 | -32 to -1 | 112 to 127 |
| 32 to 95 | 239 to 224 | -96 to -33 | 96 to 111 |
| 96 to 223 | 223 to 208 | -224 to -97 | 80 to 95 |
| 224 to 479 | 207 to 192 | -480 to -225 | 64 to 79 |
| 480 to 991 | 191 to 176 | -992 to -481 | 48 to 63 |
| 992 to 2015 | 175 to 160 | -2016 to -993 | 32 to 47 |
| 2016 to 4063 | 159 to 144 | -4064 to -2017 | 16 to 31 |
| 4064 to 8191 | 143 to 128 | -8192 to -4065 | 0 to 15 |
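The table is easy to turn into code. Here is my reconstruction (not the applet's source): each row roughly doubles the input range of the one before while covering 16 output codes, which is what makes this a logarithmic, µ-law-style encoding.

```java
class UlawTable {
    // Segment lower bounds and input-step sizes; each segment maps to
    // 16 consecutive output codes, per the table above.
    static final int[] LO   = {0, 32, 96, 224, 480, 992, 2016, 4064};
    static final int[] STEP = {2,  4,  8,  16,  32,  64,  128,  256};

    // Map a linear sample in -8192..8191 to the byte the AudioPlayer expects.
    static byte encode(int v) {
        boolean negative = v < 0;
        if (negative) v = -v - 1;   // mirror -1..-8192 onto 0..8191
        if (v > 8191) v = 8191;     // clip anything out of range
        int s = 7;
        while (s > 0 && v < LO[s]) s--;  // find the segment containing v
        int i = Math.min(15, (v - LO[s]) / STEP[s]);
        int out = 255 - 16 * s - i;      // positive half: 255 down to 128
        return (byte) (negative ? out - 128 : out);  // negative: 127 down to 0
    }
}
```

For example, encode(0) gives 255, encode(31) gives 240, and encode(-8192) gives 0, matching the table's first and last rows.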
You will notice that when you flip between voices, the envelope or waveform changes you make usually won't look exactly the same when you switch back. That's because different browsers assign differently sized drawing areas to these graphs, so it wasn't possible to find a curve data representation with a one-to-one correspondence to what you see on your screen. The applet consequently stores an approximation of what you've drawn, which will turn out slightly different when it shows up again.
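One way such an approximation could work (my guess at the mechanism, not the applet's actual representation): store the curve as a fixed number of samples and stretch it to whatever width the graph happens to get, losing any finer detail in the round trip.

```java
class CurveStore {
    static final int STORED_SIZE = 64;  // assumed fixed resolution

    // Squeeze the on-screen curve (one y value per pixel column)
    // into the fixed-size representation by nearest-column sampling.
    static int[] store(int[] screen) {
        int[] stored = new int[STORED_SIZE];
        for (int i = 0; i < STORED_SIZE; i++)
            stored[i] = screen[i * screen.length / STORED_SIZE];
        return stored;
    }

    // Stretch the stored curve back out to a (possibly different) width.
    static int[] restore(int[] stored, int width) {
        int[] screen = new int[width];
        for (int x = 0; x < width; x++)
            screen[x] = stored[x * stored.length / width];
        return screen;
    }
}
```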
Back to the applet