Analyze waveform to spawn objects in level?

I was wondering if anyone knew of a plugin, or if Corona even had a feature, to read an audio file and generate obstacles based on the music's waveform. I have looked at the audio visualization example posted a long time ago, but it isn't exactly what I'm looking for. The best example is Rock Hero. I'm not making a clone of it; I just want to implement a feature like that in my game for obstacle spawning. Any ideas, places to start, or examples? Thanks for reading!

Hi @rburns629,

At this time, there's no built-in way to analyze an audio track's waveform. This would need to be attempted using Corona Enterprise, or alternatively (if you knew in advance which audio files you were going to use) I suppose you could generate some kind of associated timing data for the sound files in advance and use that within the Lua code.
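For example, here's a minimal sketch of that second approach, assuming you've pre-computed a table of beat times for a track offline (the file name, the times, and the spawnObstacle() function here are all hypothetical):

```lua
-- Hypothetical beat map pre-computed offline for "level1.mp3":
-- each entry is a time offset (in milliseconds) where an obstacle should spawn.
local beatTimes = { 500, 1250, 2000, 2600, 3400 }

local music = audio.loadStream( "level1.mp3" )
audio.play( music )

-- Schedule one spawn per pre-computed beat.
for i = 1, #beatTimes do
    timer.performWithDelay( beatTimes[i], function()
        spawnObstacle()  -- hypothetical function; create your obstacle here
    end )
end
```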

Take care,

Brent

This isn't a Corona solution, but you could use the Sample Data Export feature of Audacity (a free audio editor). That would export raw text data describing your audio, which you could then use for your spawning timers. Again, this isn't something you can do ad hoc during gameplay, but it will give you something to evaluate.
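Here's a rough sketch of reading that export back in, assuming the file was exported with one sample value per line (the file name, sample rate, threshold, and gap below are all assumptions you'd tune for your track):

```lua
-- Read an Audacity "Sample Data Export" text file (assumed: one sample
-- value per line) and turn loud peaks into spawn times in milliseconds.
local sampleRate = 44100   -- assumed rate the audio was exported at
local threshold  = 0.8     -- assumed amplitude (0..1) that counts as a peak
local minGapMs   = 250     -- assumed minimum spacing between spawns

local path = system.pathForFile( "samples.txt", system.ResourceDirectory )
local spawnTimes = {}

local index = 0
for line in io.lines( path ) do
    index = index + 1
    local value = tonumber( line )
    if value and math.abs( value ) >= threshold then
        local t = ( index / sampleRate ) * 1000
        local last = spawnTimes[ #spawnTimes ]
        if not last or t - last >= minGapMs then
            spawnTimes[ #spawnTimes + 1 ] = t
        end
    end
end

-- spawnTimes can now drive timer.performWithDelay() calls, as above.
```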

Hope this helps!

I have looked at the audio visualization example posted a long time ago, but it isn’t exactly what I’m looking for.

Any more details on that?

If you have a string of discrete samples (obviously a problem in itself :) ), you can represent them as a sum of cosine and sine waves:

x[n] = a1 * cos(0) + b1 * sin(0) + a2 * cos(w1 * n) + b2 * sin(w1 * n) + …

where the angular frequencies w1, w2, … are multiples of 2 * pi / N, N being the number of samples you provide.

b1 is irrelevant (sin(0) = 0) but shown for consistency. If all your samples are alike, only a1 will be non-zero, carrying that common value.

You can figure out all those a1, b1, etc. coefficients with something like the Discrete Fourier Transform (this is what the “fft2048” is in the audio visualization sample, assuming I’m looking at the one you refer to).
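If you want to see where those coefficients come from, here's a naive O(N^2) sketch in Lua; an FFT like the one in the sample computes the same thing far faster:

```lua
-- Naive discrete Fourier transform: for each frequency bin k, accumulate
-- the cosine (a) and sine (b) coefficients from the N input samples.
-- O(N^2), so for illustration only; use an FFT for real audio buffers.
local function dft( samples )
    local N = #samples
    local a, b = {}, {}
    for k = 0, N - 1 do
        local sumA, sumB = 0, 0
        for n = 0, N - 1 do
            local angle = 2 * math.pi * k * n / N
            sumA = sumA + samples[ n + 1 ] * math.cos( angle )
            sumB = sumB + samples[ n + 1 ] * math.sin( angle )
        end
        a[ k + 1 ], b[ k + 1 ] = sumA, sumB
    end
    return a, b
end
```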

Once you’re in this frequency space, it’s easy to do something called convolution (which, back in samples-land, is a bit like a smearing average of two groups of samples) just by multiplying the complex number pairs (e.g. a1, b1), where your other set of samples matches some reference pattern, like the beat you want to detect. Then you can invert the result and see if any of the spikes line up with your samples.
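The bin-by-bin multiply itself is just complex multiplication; a sketch, assuming both signals went through the same transform and have equal length (for matching against a reference pattern, i.e. correlation rather than convolution, you'd negate one of the b arrays first):

```lua
-- Multiply two spectra bin by bin as complex numbers (a + i*b).
-- Inverting the result takes you back to samples-land with the
-- convolution of the two original signals.
local function multiplySpectra( aA, bA, aB, bB )
    local outA, outB = {}, {}
    for k = 1, #aA do
        -- (aA + i*bA) * (aB + i*bB)
        outA[ k ] = aA[ k ] * aB[ k ] - bA[ k ] * bB[ k ]
        outB[ k ] = aA[ k ] * bB[ k ] + bA[ k ] * aB[ k ]
    end
    return outA, outB
end
```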

Well, since this post, gameplay has changed a little bit. What I'm looking to do is close to the example I stated above. Before, I was going to generate display objects based on the waveform, but now I would like to scale some display objects based on it. So while playing the level, players will be able to see, let's say, clouds in the background expanding and contracting with the waveform, so it looks like they're moving with the rhythm.
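Something like this is what I have in mind, as a rough sketch (the envelope values, image, and timing resolution are just stand-ins; the real envelope would be pre-computed from the track as suggested above):

```lua
-- Pulse a cloud in time with the music using a pre-computed amplitude
-- envelope (one 0..1 value per time step; stand-in data below).
local envelope   = { 0.1, 0.3, 0.9, 0.6, 0.2 }
local msPerEntry = 50   -- assumed time resolution of the envelope data

local cloud = display.newImageRect( "cloud.png", 128, 64 )
cloud.x, cloud.y = display.contentCenterX, 100

local startTime = system.getTimer()
Runtime:addEventListener( "enterFrame", function()
    local elapsed = system.getTimer() - startTime
    local i = math.floor( elapsed / msPerEntry ) % #envelope + 1
    local scale = 1 + envelope[ i ] * 0.5   -- up to 50% larger on peaks
    cloud.xScale, cloud.yScale = scale, scale
end )
```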
