Source Code: Spectrum Ring Music Visualizer

I had someone request the source code for my spectrum ring music visualizer, and I figured I may as well just release it here for anyone who wants it.

Update: Note that it will sometimes throw runtime errors in FP10. I haven’t had a chance to look into it, but I’m quite certain it never threw those errors in FP9. I’ll try to fix it when I have a few minutes.

If I remember correctly, it was my first attempt at playing with computeSpectrum and ByteArrays, so the code is likely a hacky mess, but hopefully it’s useful for someone. At the least, it might be fun to take a look at how the visual effects were achieved, and play with the numbers.

You can download the source here. Just drop an MP3 named “music.mp3” in the same directory and it should load and play it. All the code is in the AS file (the FLA is empty), so it should work as an ActionScript project in FlexBuilder.
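
For anyone who wants to see the basic ingredients before downloading, here's a minimal sketch of the same load/play/computeSpectrum loop the visualizer is built on. This is not the released source, just a hypothetical document class (the name `SpectrumSketch` and the drawing numbers are mine) showing how `music.mp3` gets loaded and how the spectrum data comes back as a ByteArray:

```actionscript
package {
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.media.Sound;
    import flash.media.SoundMixer;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;

    public class SpectrumSketch extends Sprite {
        protected var spectrum:ByteArray = new ByteArray();

        public function SpectrumSketch() {
            // Load and play music.mp3 from the same directory.
            var sound:Sound = new Sound(new URLRequest("music.mp3"));
            sound.play();
            addEventListener(Event.ENTER_FRAME, tick);
        }

        protected function tick(evt:Event):void {
            // Writes 512 floats (256 left, 256 right) into the ByteArray.
            // FFTMode=true returns a frequency spectrum instead of the waveform.
            SoundMixer.computeSpectrum(spectrum, true, 0);
            graphics.clear();
            graphics.lineStyle(1, 0x00FF00);
            spectrum.position = 0;
            for (var i:int = 0; i < 256; i++) {
                var v:Number = spectrum.readFloat(); // left channel, 0..1
                graphics.moveTo(i, 200);
                graphics.lineTo(i, 200 - v * 100);
            }
        }
    }
}
```

The real source draws a ring rather than bars, but the data flow is the same.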

Grant Skinner

The "g" in gskinner. Also the "skinner".

@gskinner

15 Comments

  1. Badass dude. Thanks for sharing. Look forward to playing around with this code for sure.

  2. Cooool stuff man, thanks for posting this up.

  3. Keep up the good work. Thanks.

  4. Andy McDonald July 9, 2009 at 11:30am

    Hi Grant,

    When I click through to the demo the file loads and the music plays but there are no visuals. Also, what version of the Requiem for a Dream theme is that?

    Cheers

  5. Andy – that happened to me once when I had two copies of the SWF open. Maybe try closing all other Flash content in your browser and try again?

    It’s Requiem Overture from The Two Towers.

  6. Thanks Grant!

  7. Andy McDonald July 9, 2009 at 12:29pm

    Yep… that worked – very cool!!!

  8. Very cool.

    Have you considered posting your code to github, instead of only offering a zip file for download? I’d love to be able to browse the code online without having to download, plus see your updates and other people’s forks!

  9. Mathijs Meijer July 11, 2009 at 9:19am

    computeSpectrum, for some reason, tries to make a spectrum from all running instances of the Flash Player (much like System.totalMemory gets memory usage from all running Flash Player instances).

    The only difference is that the computeSpectrum function isn’t allowed to do that (except when the cross domain policy is set in the correct manner), so it throws a security sandbox violation on the other running instances.

    I don’t really know why it was made to behave this way; a bug, perhaps? I believe it is already listed in the Flash JIRA.

    More on the Sound class security model here:

    http://livedocs.adobe.com/flash/9.0/ActionScriptLangRefV3/flash/media/Sound.html
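
    If the behaviour described above is what's throwing the FP10 errors, one defensive sketch (hypothetical, not from the visualizer source) is to wrap the call and skip the frame when another instance's audio is restricted:

    ```actionscript
    import flash.media.SoundMixer;
    import flash.utils.ByteArray;

    var bytes:ByteArray = new ByteArray();
    try {
        // Throws a SecurityError if any playing sound (in any running
        // instance) isn't accessible under the cross-domain policy.
        SoundMixer.computeSpectrum(bytes, true, 0);
        // ...draw from bytes...
    } catch (err:SecurityError) {
        // Restricted audio is playing elsewhere; just skip this frame.
    }
    ```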

  10. computeSpectrum has so many cumbersome elements that I’ve been trying to clone its behavior so that we won’t ever have to rely on it again. The Sound.extract method can be used to grab the waveform of an individual Sound, regardless of whether or not it’s playing, but so far the method computeSpectrum uses to produce the frequency spectrum remains elusive.

    I’d be happy to work with anyone interested in solving this puzzle. Until Adobe cleans it up, I think a better computeSpectrum should be a free utility class.
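
    For reference, here's a rough sketch of the Sound.extract approach mentioned above (Flash Player 10+): pulling raw 44.1kHz stereo sample frames straight from an individual Sound, whether or not it's playing. The frame count and start position are arbitrary choices for illustration:

    ```actionscript
    import flash.media.Sound;
    import flash.net.URLRequest;
    import flash.utils.ByteArray;

    var sound:Sound = new Sound(new URLRequest("music.mp3"));
    // In practice, wait for Event.COMPLETE before extracting.
    var samples:ByteArray = new ByteArray();
    // Grab 256 stereo sample frames starting at frame 0.
    var framesRead:Number = sound.extract(samples, 256, 0);
    samples.position = 0;
    for (var i:int = 0; i < framesRead; i++) {
        var left:Number = samples.readFloat();   // -1..1
        var right:Number = samples.readFloat();  // -1..1
        // Feed left/right into your own FFT to build a frequency spectrum.
    }
    ```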

  11. Thanks Grant!

    You’ll find this popping up in a ‘Best of the Web: July Roundup’ on Flashtuts (along with a link to your ActionScriptHero interview at FITC Amsterdam).

  12. I actually think there is either something wrong with the computeSpectrum() method or the documentation is overly vague, but I haven’t found anyone discussing it anywhere.

    If you analyze the resulting data very closely, something *may* not add up. Basically, I load an MP3, play it, acquire and graph the data (as practically all computeSpectrum() samples do). I then took the MP3, loaded it into a sound editor (I used Audacity). Through very careful comparison, I believe I found the section of the sound in both my Flash sample and in Audacity.

    Since I’m using the default params to computeSpectrum(), I expected 256 samples (looking at, say, the left channel only) at 44.1kHz to represent .0058 seconds, yet according to Audacity, the matching graph is .011 seconds, or about double. Does the “44.1kHz sampling” alternate between left and right channel samples?

    That would at least make the numbers work out. But that is not how I usually see audio data represented. Usually, the “samples” are a stereo measurement (for stereo sources). In other words, a single measurement retrieves both left and right channel data.

    If that is in fact how computeSpectrum() works, I think I would prefer the docs say that it was using “22kHz stereo sampling”.

    I’d love to read other developers’ comments on this. I’m certainly open to the fact that I’m making a mistake or misunderstanding, so please correct me.

    I’ve got another inconsistency (or misunderstanding), but don’t want to muddy the waters yet.

  13. I generated some artificial tones (pure sine wave) and can confirm that the “256 samples” represents 22kHz sampling of a single channel.

    So, in my experience this would be referred to as “22kHz stereo sampling” or “22kHz sampling” but not “44kHz sampling”.

    Also, after some experimentation it appears that the retrieved data only updates approximately every 30-40 msec. When you call computeSpectrum(), you are getting approx 10 msec of data. If you call computeSpectrum() again after, say 10 msec, the data returned does NOT give you the next 10msec of data. The data doesn’t change at all. It requires about 40 msec before you will get any “new data”. The new data corresponds to the data found approx 40msec after the initial data. Obviously you will have missed 30 msec of data in the meantime, so even though you get 22kHz sampled data, it is only “accurate” 25 times per second, making the “resolution of the results” significantly poorer than true 22kHz. One might argue that the actual results are more like 6.4kHz (256 stereo samples / 40 msec = 6400 samples / second).

    This 6.4kHz is like an “effective sampling rate”.

    Basically, the computeSpectrum() method is an extremely basic way of generating some data that is “related” to the current sound. You might find some interesting way to use the data, but it really cannot provide a complete and accurate representation of what sound is being played.

    Anyway, I guess it’s time to let this issue rest, but I thought someone might find it helpful someday.
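
    The timing experiment described above can be sketched roughly like this (my own hypothetical test harness, not the commenter's actual code): poll computeSpectrum() on a fast Timer and trace how long it takes before the returned data actually changes.

    ```actionscript
    import flash.events.TimerEvent;
    import flash.media.SoundMixer;
    import flash.utils.ByteArray;
    import flash.utils.Timer;
    import flash.utils.getTimer;

    var prev:ByteArray = new ByteArray();
    var cur:ByteArray = new ByteArray();
    var lastChange:int = getTimer();

    var timer:Timer = new Timer(5); // poll every ~5 ms
    timer.addEventListener(TimerEvent.TIMER, poll);
    timer.start();

    function poll(evt:TimerEvent):void {
        // Waveform mode: 512 floats (256 left + 256 right) = 2048 bytes.
        SoundMixer.computeSpectrum(cur, false, 0);
        var changed:Boolean = (prev.length != cur.length);
        if (!changed) {
            cur.position = 0;
            prev.position = 0;
            for (var i:int = 0; i < 512; i++) {
                if (cur.readFloat() != prev.readFloat()) { changed = true; break; }
            }
        }
        if (changed) {
            trace("data updated after", getTimer() - lastChange, "ms");
            lastChange = getTimer();
        }
        // Swap buffers so the next poll compares against this frame.
        var tmp:ByteArray = prev; prev = cur; cur = tmp;
    }
    ```

    If the data really only refreshes every ~40 ms, the trace intervals should cluster there regardless of how fast the Timer fires.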

  14. How do I work with that?

  15. Cool, I really like it!

    Could I also request the source code for the flame spectrum visualizer? Thank you very much!
