Sound and audio formats - Best Practices

OK people, bear with me: these questions may have been asked before, but searching only turned up contradictory information. Hopefully this thread can clear things up for me and help others.

I have been making an arcade-style shooter with lots of explosions, sometimes a chain of up to 50, which I know is more than the 32 available audio channels; each explosion triggers an audio sample. When testing on various Android devices I get different reactions:

  1. The device just does its best, plays multiple instances of the explosion sound, and carries on happily.

  2. As soon as 3+ instances of the same file are triggered, the audio begins to stutter heavily.

Now I know my code is half the reason for the stuttering, but I do find it strange that some devices happily handle 20+ instances of a single sample while others struggle at 3+.
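One common workaround for the chain-of-explosions case is to cap how many copies of a sample can play at once and simply drop the extras (the ear rarely notices past a few simultaneous copies). Below is a minimal Corona-style sketch; `playCapped`, `MAX_INSTANCES`, and the file name are my own illustrative assumptions, not anything from the thread:

```lua
-- Hypothetical helper: cap simultaneous plays of one sample.
-- Assumes the Corona SDK audio library (audio.loadSound / audio.play).
local explosionSound = audio.loadSound("explosion.wav")

local MAX_INSTANCES = 4   -- tuning assumption; lower it for weaker devices
local activeCount = 0

local function playCapped(handle)
    if activeCount >= MAX_INSTANCES then
        return nil        -- drop the extra trigger instead of stuttering
    end
    activeCount = activeCount + 1
    -- onComplete fires when playback ends, freeing a slot
    return audio.play(handle, {
        onComplete = function() activeCount = activeCount - 1 end
    })
end
```

Calling `playCapped(explosionSound)` from each explosion then degrades gracefully on the devices that stutter at 3+ instances, instead of letting every trigger through.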

Does anyone have knowledge or experience of this themselves?
Secondly, what are the best formats to use for iPhone vs. Android?

Here is what I think is best from my research:

iPhone: music: MP3, 128 kbps (audio.stream); FX: WAV, mono, 22,050 Hz, 16-bit (audio.loadSound)

This is where I am not very sure: the docs suggest Ogg is best, but I thought using Ogg means the sound has to be decoded in software and hardware PCM is not used?

Android: music: Ogg, 128 kbps (audio.stream); FX: ??? Ogg or WAV.

Can people please join the discussion with what they think is the best way to handle audio formats across various hardware?
[import]uid: 118379 topic_id: 21946 reply_id: 321946[/import]

The Audio Notes page talks about performance:
http://developer.anscamobile.com/partner/audionotes
[import]uid: 7563 topic_id: 21946 reply_id: 87262[/import]

Yes, I've read it several times. So, as I understand it, PCM WAV is best for iOS because the phone does not have to decode the audio?

Maybe I have misunderstood, as it does mention that this is only a worry for audio.stream sounds. Does the phone decode the audio into RAM during audio.loadSound, meaning it no longer has to decode at play time?

This is what I'm trying to clear up. Also, Android is not mentioned in the write-up? The Android build tips page mentions that Ogg is best. I just want confirmation that my logic is correct.

Can anyone give me an easy set of rules to stick to? I'd be happy with that; I don't fancy going down one route and then having to redo all my samples.

Many thanks. Your knowledge is appreciated. [import]uid: 118379 topic_id: 21946 reply_id: 87269[/import]

Yes, OpenAL only handles linear PCM, so everything gets decoded to that.

audio.loadSound decodes the entire file into linear PCM and holds it completely in RAM so there is no more decoding.

For audio.loadStream, decoding is done as you go. Performance is a trade-off between the CPU cycles needed to decode the format and the I/O overhead of reading the file. Uncompressed linear PCM files (like .wav) need the least CPU but transfer the most data; conversely, AAC/MP4, MP3, and Ogg Vorbis need the most CPU but are the smallest files. Usually CPU cycles are the more important factor, but not always. (This is another thing that makes Android fragmentation a pain.)

If you want highly compressed audio on Android, your only current options are Ogg Vorbis and MP3. I haven't benchmarked the two enough. My hunch is Ogg Vorbis, if only because our MP3 library didn't build out of the box for us, whereas Ogg Tremor (an integer-only implementation of Vorbis) did.*
I guess the Android specifics are a bit buried, probably because there is nothing nice to say.
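The loadSound/loadStream split described above can be sketched in Corona-style Lua; the file names are placeholders I've assumed for illustration:

```lua
-- Short FX: audio.loadSound fully decodes the file to linear PCM in RAM,
-- so a compressed source only shrinks the app; it costs nothing at play time.
local explosion = audio.loadSound("explosion.wav")

-- Long music: audio.loadStream decodes incrementally during playback,
-- so the format choice trades decode CPU (Ogg/MP3) against file I/O (WAV).
local music = audio.loadStream("music.ogg")

audio.play(music, { channel = 1, loops = -1 })  -- loop background music
audio.play(explosion)                           -- fire-and-forget FX

-- When a scene ends, release what you loaded:
-- audio.stop()
-- audio.dispose(explosion); explosion = nil
-- audio.dispose(music); music = nil
```

The practical rule of thumb this implies: compress aggressively for loadSound assets (it only affects download size), and weigh CPU vs. I/O per device class for loadStream assets.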

In the comments on this page, I include links to other sources that talk about how bad Android audio is.
http://blog.anscamobile.com/2011/07/the-secretundocumented-audio-apis-in-corona-sdk/
Android is all over the map in performance. Generally, faster, premium devices perform better and cheap devices perform terribly.

And the inability (or sometimes unwillingness) of people to upgrade beyond Android 2.2 is a huge obstacle to us moving to Google's newly blessed OpenSL ES audio backend.

*Note: The performance characteristics will probably change once we can move to an OpenSL ES backend. Presumably, we will finally be able to access Android's native decoders instead of bundling our own.

[import]uid: 7563 topic_id: 21946 reply_id: 87272[/import]

ewing, you nailed it for me, thank you very much for the explanation. I'd had my logic the wrong way round and had a feeling it was wrong. Glad I asked.

So, to save on app size, compress the audio that's loaded by audio.loadSound. :slight_smile:

And consider CPU/hardware utilisation for audio.stream sounds.

Great stuff. Cheers. [import]uid: 118379 topic_id: 21946 reply_id: 87273[/import]

Thank you, Eric @ewing, for the detailed explanation. Thank you @Beloudest for asking the question. It’s very helpful.

Naomi [import]uid: 67217 topic_id: 21946 reply_id: 87342[/import]