Sound not working on latest iPad build

Wow! Well, thank you! I’ll upgrade one of our iPad 2s from 4.3.3 to 4.3.5 to see if it is just that device.

Ewing:

Thank you for your help on this again! I was able to determine that we had a defective device. I upgraded two other iPads to 4.3.5 and they worked as expected.

Thanks!!!

Willy J.

Ewing, we’ve been having issues with audio API calls on our devices lately too; this behavior is relatively new. My iPad is running 4.3.5 and my iPad 2 is on 4.3.3.

The audio.xxxx API calls have long had complaints associated with them; is that not a correct statement? The current symptom is that sounds are not being triggered. Our game is under heavy audio load, but we’re nowhere near 32 channels; sounds are failing to trigger when maybe 4 or 5 audio channels are in use.
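
For reference, here is roughly how we’re sanity-checking channel usage before each trigger (a sketch; the playEffect wrapper and “jump.wav” are our own illustrative names, and it assumes the audio.totalChannels/usedChannels/freeChannels properties):

-- Log channel usage right before triggering a sound, to rule out channel exhaustion.
local function playEffect( handle )
    print( "total:", audio.totalChannels, "used:", audio.usedChannels, "free:", audio.freeChannels )
    return audio.play( handle )
end

local jumpSound = audio.loadSound( "jump.wav" )  -- placeholder asset
playEffect( jumpSound )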

I’ll be the first to admit this could be our code. But again, this behavior is new, and we haven’t modified our audio code in quite some time.

It looks like this thread mostly proved there was a bad hardware unit, but I know “audio” in general has long been discussed on numerous threads. Can you point me to other threads I should visit if libraries or API calls have changed, etc.? (I will of course keep searching myself.)

Appreciate any pointers.

EDIT: In our case, we just upgraded the SDK from stable build 591 to 6xx and our problems have now subsided… we think. FYI

It might be a technically correct statement, but it is horribly misleading.

Most of the problems reported are not the fault of the audio APIs; the majority are user error.

The next class of complaints is Android-only. There is sometimes a higher latency cost for using the new audio engine than the old media.playEventSound. This is because audio on Android sucks, and it isn’t our fault: Google admits to these problems, and all Android developers are struggling with this; it’s not just us.

http://code.google.com/p/android/issues/detail?id=3434
http://kile.stravaganza.org/blog/post/android-audio-api-sucks
http://www.badlogicgames.com/wordpress/?p=1315
http://groups.google.com/group/android-ndk/browse_thread/thread/29f88a99dc954c71
http://mindtherobot.com/blog/555/android-audio-problems-hidden-limitations-and-opensl-es/
http://music.columbia.edu/pipermail/portaudio/2010-December/011146.html

This is the only reason we haven’t officially killed the media.play*Sound APIs. media.playEventSound utilizes SoundPool, where Google cheats and is (sometimes) able to provide lower-latency access, with the trade-off that the API is extremely limited. For some devices and situations, this API yields better performance than our more powerful audio API. (But on iOS/Mac/Windows, the audio API provides both the best performance and the most features.)
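
For reference, the two paths look like this in use (a minimal sketch; the file name is a placeholder):

-- SoundPool-backed path: sometimes lower latency on Android, but very limited.
local eventSound = media.newEventSound( "click.wav" )
media.playEventSound( eventSound )

-- The audio engine: full-featured (channels, volumes, fades), but can carry
-- a higher latency cost on some Android devices.
local clickSound = audio.loadSound( "click.wav" )
audio.play( clickSound )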

And to make things worse, it appears that the Samsung Galaxy 2 crashes in SoundPool, which is what media.playEventSound uses.
http://www.pallosalama.com/?p=192
https://groups.google.com/forum/#!topic/android-developers/pAv5UfQgtH4
http://www.android-factory.de/?tag=soundpool
http://groups.google.com/group/android-developers/browse_thread/thread/a40bf951f420b47e?pli=1

So Android developers are screwed either way. This example also demonstrates that problems lie with Android device manufacturers too.

Google wants everybody to move to OpenSL ES and away from SoundPool and AudioTrack. We would like to do this, but Google has said that their underlying latency issues are in the kernel and other places, so OpenSL ES doesn’t necessarily fix all the problems. Also, OpenSL ES is Android 2.3+ only, and Google has so far miserably failed to get device manufacturers and end users to upgrade past 2.2. We would love to move to 2.3+ with Corona and drop 2.2, but we don’t think we can until more developers and users are clamoring for it. The Android user base as a whole needs to be more vocal about demanding OS updates, or these types of problems are going to continue to plague Android development.

The next class of issues is usually some other system in Corona that has a bug, but audio gets blamed for the problem. This is partly because the audio engine currently prints a lot of debug messages, particularly when assertions fail. When other bugs in Corona happen, they often fail silently and then cascade down to other systems, where audio starts printing things and gets blamed. There are also some cases where the debug messages are non-critical and the audio system can recover by itself.

Then there are OS bugs. These are technically not our fault; we try to work around them when we can, but sometimes it is not possible.

Most recently, there is a terrible Apple OpenAL/iOS 5 regression bug. I lambasted some of you guys about this because people discovered it as early as Beta 4 but didn’t report it to Apple (or us) until the last minute in Beta 7, at which point it was too late to fix for the iOS 5 release. The tragedy is that if somebody had filed a regression bug with Apple when it was discovered, it would have been fixed. We just provided a workaround in a daily build for this bug, but I consider the workaround scary, and I consider us very lucky there was a workaround at all, because the regression affected a pretty critical core API of OpenAL. This bug is so bad that anybody with an app on the store must rebuild with the latest daily build and resubmit to the App Store, or their app will not work correctly with iOS 5.0.

There was also a bad crashing (multithreading/race condition) bug in Apple’s iOS/OpenAL implementation that a few people were getting bit by. I don’t think anybody ever fully understood why only some people seemed to experience it. But I think Apple finally fixed that one in 4.3.

So this underscores several things:

  • It is imperative that you report Apple bugs to Apple (https://bugreport.apple.com)
  • Apple does read bug reports and actually fixes bugs (and the more people who complain, the higher priority a bug gets)
  • All developers should encourage their users to upgrade to the latest OS version, because that’s where the bug fixes appear

Finally, when you do find a bug in Corona, you need to submit a bug report with us. (The link is at the top. And don’t confuse forum posts with bug reports; we don’t always see forum posts.) If you want us to fix it, we need to know about it and be able to reproduce it, so you should send us a simple reproducible test case with assets included. We also need to know which devices and OS versions are affected and which are not. We also want crash logs, and we want to know everything you can tell us about the bug.

Also, be aware that most of the audio system is open source, so you can help out too. ALmixer is the primary logic/implementation behind our new audio engine, and it is open source. On Android and Windows, we use OpenAL Soft as the OpenAL implementation, which is also open source. Apple has open-sourced their OpenAL implementation for Mac, so that is available. iOS is still closed. (I recommend filing a bug/feature request with Apple for them to open it.)


I appreciate the effort you put into the response above, Ewing.

I would just point out that what is an Apple bug versus an SDK bug is not always as crystal clear to developers as it might be to Ansca staff. But your point is well taken; filing a bug report when in doubt is the best way to ensure the issue is at least discussed (and maybe dismissed, but still discussed).

Thanks.

Hi all,

I was intrigued by the fact that others were still experiencing this problem even after re-compiling with one of the later daily builds, so I decided to see if I could find the reason behind this.
After a few hours of digging around some sample code and comparing it with my own, I found one difference that seems to be the reason why I’m not having any audio problems with iOS 5.

First, I just want to clarify that I’ve only had this issue with streaming audio loaded with audio.loadStream(). As mentioned by Ansca in other threads, this issue is not within the Corona SDK; it’s quite a nasty little regression bug introduced in iOS 5.

The way I’ve gotten around this issue is to reserve channels for all streaming audio. You can do this with audio.reserveChannels(); I’ve used audio.reserveChannels(10) to reserve the first 10 channels for my streaming audio.
I’ve also hardcoded the channel used for each streaming audio file in my app (background music = channel 1, chatter = channel 2, etc.). I’m not sure if that’s necessary, but that’s how I’ve coded my app, and it’s working without any problems.
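
In code, the pattern boils down to something like this (a minimal sketch; the file names and channel numbers are just examples):

-- Reserve the first 10 channels so automatic channel assignment never touches them.
audio.reserveChannels( 10 )

-- Give each stream its own hardcoded, reserved channel.
local backgroundMusic = audio.loadStream( "backgroundMusic.m4a" )  -- example asset
local chatter = audio.loadStream( "chatter.m4a" )                  -- example asset

audio.play( backgroundMusic, { channel = 1, loops = -1 } )
audio.play( chatter, { channel = 2, loops = -1 } )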

To test this, I had a look at the “Tilt Monster” code and changed it according to what I’ve explained above. It runs perfectly on iOS 5 after the changes.

I would also recommend using AAC-encoded files for iOS (although it’s not strictly relevant to this discussion). All iOS devices have built-in hardware decoding for these files (albeit only one stream at a time). MP3 files need the CPU to decode the stream, which has a significant impact on slower devices like the iPhone 3G or 2nd-generation iPod touch.

I hope this information will help to eliminate the audio issues you’re having.

In my opinion this is quite a serious issue, and I urge all Corona developers to file a bug report with Apple to have them bump up the priority.
In fact, I think it would be a good idea if this were taken up on the Ansca blog, so that more developers become aware of the issue and are urged to file a bug report with Apple.
If you have the “Tilt Monster” code, you can test the above by editing maingame.lua yourself:

  1. Search for audio.play.

  2. For every line where runningSound, gameMusic1, gameMusic2 or gameMusic3 is played, set the channel to a unique channel (see the sketch after this list). I used the following:
    gameMusic1 = channel 1
    gameMusic2 = channel 2
    gameMusic3 = channel 3
    runningSound = channel 4

  3. Go to line 4983. After physics.setGravity, add the following statement:

    audio.reserveChannels(10)

  4. Compile for device and transfer to a device with iOS 5 for testing. It should now work as expected.
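
For clarity, here is roughly what the edited calls end up looking like (a sketch; any other options the original calls pass should be kept as-is):

-- Step 2 in code: pin each handle to its own channel.
audio.play( gameMusic1, { channel = 1 } )
audio.play( gameMusic2, { channel = 2 } )
audio.play( gameMusic3, { channel = 3 } )
audio.play( runningSound, { channel = 4 } )

-- Step 3 in code: reserve the first 10 channels so nothing else grabs them.
audio.reserveChannels( 10 )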


ingemar: Thank you for the detailed post. It sounds like you found a remaining issue I was not aware of. Based on your description, I think I have a vague idea of what might still be broken. (I think it is related to switching between stream and non-stream on the same channel.) The problem is that a critical OpenAL API is broken and does not work as it should. The workaround is funky and fragile. However, I think there is a possibility I might still be able to work around the bug you are describing here. But I really need a simple reproducible test case for this. Would you please submit a bug with a simple test project (please include assets) and post the bug number here?

Good idea about the blog. I may have to do so when I can get some free time.


Great response & post, thank you for sharing!

We’re having an issue where our relatively small game (17 MB) is getting “bounced” back to Springboard when we load it on an old 3G device… and at startup we’re loading a TON of MP3s, so I wonder if this is the problem…?

I dunno, we’re still going to experiment. If the game loads and runs on the 3G, it performs very well, but on the initial boot-up we’re currently hitting a wall. So the comment above about the 3G taking an MP3 processing hit got me wondering… since that’s when we get kicked back to Springboard…?

hmmmmmmmmmmm, we’ll continue our hunt/investigation.

AACs are generally better, but MP3s also get access to the hardware decoder.

As for loading: if the main thread is blocked for too long, iOS kills the app, and loading huge amounts of data can cause that to happen. One workaround is to break your loading into chunks with timer.performWithDelay.
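
Something like this (a minimal sketch; soundFiles, sounds, and loadNext are made-up names):

-- Load one sound per timer tick so the main thread never blocks long
-- enough for iOS to kill the app.
local soundFiles = { "sound1.mp3", "sound2.mp3", "sound3.mp3" }  -- your real asset list
local sounds = {}
local index = 1

local function loadNext()
    sounds[index] = audio.loadSound( soundFiles[index] )
    index = index + 1
    if index <= #soundFiles then
        timer.performWithDelay( 1, loadNext )  -- yield back to the main loop between loads
    end
end

timer.performWithDelay( 1, loadNext )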

@ewing: I’ve made a sample project and attached it to a bug report.
Report ID #2810. (However, the email I got states it’s case #8754.)


@ewing: Apple has contacted me via email and wants some clarification about this whole OpenAL issue. Since I don’t have the technical details of exactly what’s going on, I have some difficulty providing them with more detailed info.

Would it be possible for you to write me an email with the response you would like to give Apple regarding this issue? I could then give them this info and hopefully they could start working on a fix.

You can reach me at: ingemar at swipeware dot com.

Thanks

I just sent you an email. (Apple also sent me an email. I think they understand your situation.)

>>First I just want to clarify that I’ve only had this issue with streaming audio loaded with audio.loadStream(). As mentioned by Ansca in other threads, this issue is not within Corona SDK.
It’s quite a nasty little regression bug that’s been introduced in iOS5.

So to be clear, we’re having streaming audio issues on iOS 4 devices, with later SDK 6xx builds. Specifically around:

[It appears that the .fadeout function is broken and does not work correctly. After literally hours and hours of trying to figure out what was going on when I did the options-screen rewrite to the tabs style, I tried to implement fadeout… and you think it works, because it fades out… but once it does, if you try any means to bring it back, whether through fadein or even just play, it won’t work; it blurps out 0.5 sec of audio and is never heard from again. In the end, the most reliable effect I could achieve was fadein.]

I remember, in scanning the forums, someone talking about fadeout, and the symptoms above sound familiar… or maybe I’m just imagining it; I cannot recall with 100% certainty.

So do the symptoms above sound plausible, or, Ewing, is it your position that what is described in brackets is incorrect? (This is on iOS devices.)

Thanks for your help!

I am not aware of the fade problems you describe. The most common pitfall with fading is programmer error: fading permanently changes the channel volume. When the fade-out is complete, you should reset the channel volume back to your desired level if you want to use that channel again. Fade-in fades only up to the currently set channel volume, which will be 0 after a fade-out.
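
In other words, something like this (a minimal sketch; the channel number and timings are arbitrary):

-- Fade the channel out, then restore its volume before reusing the channel.
audio.fadeOut( { channel = 1, time = 1000 } )

timer.performWithDelay( 1000, function()
    audio.setVolume( 1.0, { channel = 1 } )  -- fade-out left the channel volume at 0
end )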

>>Fading permanently changes the channel volume.

Right, no, we grasp that. Obviously some sample code demonstrating the issue will be necessary, when we get some time…

EDIT: This is where I get to eat some crow from Ewing. :wink: The issue, as it turns out, was a misunderstanding, and of course once you understand the misunderstanding, it seems silly in hindsight. One of those “d’uh!” moments. But for the benefit of anyone else still wrestling with this, allow me to clarify what was catching me up; if I help just one other struggling programmer, then eating crow is worthwhile.

The context of what I was working on was managing my title-screen music: fading it in and out as you transition between the wrapper setup screens (i.e. Options, Scores, etc.). Note that there is no built-in audio priority management scheme; as mentioned on another thread, a shared code module for that would be welcomed by people out there, I’m sure. However, for the purposes of this discussion, all you need to know is that you must manage your audio channels yourself to control the fade effect.

AUDCH_TITLESCREENMUSIC = 1
tsAudio = audio.loadStream( "./AUDIO/titlescreenmusic.mp3" )  -- load as a stream before playing
audChReserved = audio.reserveChannels( 1 )

tsMusicID = audio.play( tsAudio, { channel = AUDCH_TITLESCREENMUSIC, loops = -1, fadein = 1000 } )

You can obviously build off that and reserve more channels for additional dedicated music tracks or high-priority SFX, but you get the idea. Now, when you transition you might want to fade, and the catch is that you MUST know which audio channel to influence, and you MUST specify where the volume is going (fading out or fading in… FOR THAT CHANNEL!):

-- Fade IN
audio.fade( { channel = AUDCH_TITLESCREENMUSIC, time = 1000, volume = 1.0 } )

-- Fade OUT
audio.fade( { channel = AUDCH_TITLESCREENMUSIC, time = 1000, volume = 0 } )

*Ewing is probably thinking, “Right, it’s right there in the API documentation.”*
My response would be: yes, but where I was getting caught up is that I couldn’t figure out how you specify WHICH channels to lock/reserve… until I read that audio.reserveChannels(x) reserves the first x channels, after which self-management of those channels is required. Again, it seems really obvious once you get it… but we were missing it. Hopefully this helps someone.

MP3 encoders sometimes inject padding into your samples. You should probably try .wav for comparison.

@ingemar: when implementing a reserved-audio-channel strategy, I had a “clock” tictoc firing off every 1000 ms, and when I did NOT specify a channel, the clock fired like clockwork (no pun intended). Once I started reserving channels, assigned the clock tictoc SFX to a specific channel, and referenced that channel in audio.play(snd, {channel=x}), the file had a nasty little “stall” to it and played late… the way an MP3 can have a “pause” at the header. So trying to implement a channel-reserve scheme has brought me to a screeching halt. Unless there’s some workaround? Is this an MP3-specific issue, I wonder?

EDIT: >>So trying to implement a channel-reserve scheme has brought me to a screeching halt.
Perhaps that’s a bit exaggerated, but it does require one of my most important sounds to be on an unreserved audio channel.

@ewing: I converted the MP3 to WAV, going from 10 KB to 100 KB (ouch), but it had no impact. There’s still a delay when a channel is specified for audio.play(), FYI.

EDIT: @ewing: does this mean that for rapid audio samples 1000 ms or less in length, the audio engine can’t keep up with whatever “clean-up” processes need to take place for audio channel management? I’m assuming the 1-sec sample works without a channel specification BECAUSE it borrows from other available audio channels when necessary; that’s why this was never detected before.

Generally, the clean-up should be reasonably fast (there are thread-locking things going on behind the scenes), but are you on iOS 5.0?

If you are on iOS 5, one of the nasty workarounds we must do is sleep for a small amount of time to avoid an Apple race condition. This may throw off your timing.

If you are on Android, audio latency is in general terrible.


No, we are on iOS 4