Feel the Beat? It can show it!

I finally managed to get my LEDCube to respond to audio from both the line-in and microphone inputs! This means the LEDCube can now sit in a room and react to its surroundings without my having to touch it. This is made possible by the fast Fourier transform (FFT), an algorithm that digital systems run to extract the frequency content from a sample of audio. The FFT has a few limitations, though. Because it is a sample-based system, it can only detect frequencies up to half the sample rate, since you need a minimum of two sample points per cycle to reconstruct a sine wave (this is the Nyquist theorem, if you want to learn more). Because of this, and the limit on how fast I can run the analog read, the cube can only respond to frequencies up to 2 kHz. Musically, that is about B6, the B just below the C three octaves above middle C. That is above where most melodies sit, but it is where many of the overtones that let you identify instruments occur.
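To make the Nyquist limit concrete, here is a minimal sketch of what the cube's analysis amounts to: a radix-2 FFT over a frame of samples taken at 4 kHz, where only the bins up to half the sample rate (2 kHz) carry usable information. The frame size of 256 and the 440 Hz test tone are my own hypothetical choices for illustration, not values from the actual cube firmware.

```python
import cmath
import math

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2])
    odd = fft(x[1::2])
    # Twiddle factors rotate the odd half around the complex unit circle.
    t = [cmath.exp(-2j * math.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + t[k] for k in range(n // 2)] + \
           [even[k] - t[k] for k in range(n // 2)]

SAMPLE_RATE = 4000   # Hz, the cube's sampling rate
N = 256              # samples per FFT frame (hypothetical frame size)

# Simulate sampling a 440 Hz tone (concert A) at 4 kHz.
samples = [math.sin(2 * math.pi * 440 * i / SAMPLE_RATE) for i in range(N)]
spectrum = fft(samples)

# Only bins 0..N/2 are meaningful: the Nyquist limit is SAMPLE_RATE / 2 = 2 kHz.
magnitudes = [abs(c) for c in spectrum[: N // 2]]
peak_bin = max(range(len(magnitudes)), key=magnitudes.__getitem__)
peak_freq = peak_bin * SAMPLE_RATE / N   # each bin is fs / N = 15.625 Hz wide
print(peak_freq)
```

The detected peak lands in the bin nearest 440 Hz; the bin spacing (sample rate divided by frame size) is also why a small frame gives a coarse, "visual effect" grade spectrum rather than a precise one.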

The FFT is one of the more expensive basic audio algorithms to run, as it needs to do its math in the complex plane. If you remember the Pythagorean theorem (a^2 + b^2 = c^2), that is exactly how you get the size of a complex number: for a value a + bi, the magnitude is sqrt(a^2 + b^2), and square roots are very expensive, time-wise, for a computer to run.
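A common trick here, sketched below, is to skip the square root entirely: if all you need to know is which bin is loudest, comparing squared magnitudes gives the same ordering as comparing true magnitudes. The function names and sample values are mine, for illustration only.

```python
import math

def magnitude(re, im):
    # c = sqrt(a^2 + b^2): the Pythagorean theorem applied to one FFT bin.
    return math.sqrt(re * re + im * im)

def magnitude_squared(re, im):
    # No square root needed; for "which bin is louder?" comparisons,
    # the ordering is identical to comparing true magnitudes.
    return re * re + im * im

# Both measures agree on which bin is loudest:
bins = [(3.0, 4.0), (1.0, 1.0), (0.5, 6.0)]
loudest = max(bins, key=lambda b: magnitude_squared(*b))
print(magnitude(3.0, 4.0))   # the classic 3-4-5 triangle: 5.0
```

On a small microcontroller, avoiding the square root on every bin of every frame is a meaningful saving.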

I managed to get the FFT algorithm running on my cube while maintaining a 4 kHz sample rate by taking a sample every 250 us, whenever the cube is not redrawing a layer or in the middle of an FFT. This makes my FFT output a bit noisier, but since I am using the FFT as a visual effect rather than an analysis tool, the distortion is not that bad.
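The scheduling idea can be sketched as a simulation: sample whenever at least 250 us have passed since the last sample and the microcontroller is not busy. The busy-window durations below are hypothetical placeholders, not measured from the cube; only the 250 us period (4 kHz) comes from the text. When a busy window delays a sample, the spacing between samples becomes uneven, which is exactly the timing jitter that shows up as extra noise in the FFT output.

```python
SAMPLE_PERIOD_US = 250   # 1 / 4 kHz, the cube's target sample spacing

def run(total_us, busy_windows):
    """Tick a simulated clock one microsecond at a time; take a sample
    whenever SAMPLE_PERIOD_US has elapsed since the last one AND the MCU
    is not inside any (start, end) busy window (layer redraw or FFT)."""
    samples = []
    last = -SAMPLE_PERIOD_US
    for now in range(total_us):
        busy = any(start <= now < end for start, end in busy_windows)
        if not busy and now - last >= SAMPLE_PERIOD_US:
            samples.append(now)
            last = now
    return samples

# One simulated second with no busy windows hits the full 4 kHz rate...
free = run(1_000_000, [])
print(len(free))   # 4000 samples
# ...while a hypothetical 2 ms FFT window delays samples, adding jitter.
delayed = run(1_000_000, [(0, 2000)])
print(len(delayed))
```

This cooperative approach trades sample-timing regularity for never blocking the display refresh, which matches the priority of a visual effect.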

