It’s not that easy. That method only exists if you use Apple’s AVFoundation player (e.g. AVAudioPlayer). None of our audio code uses AVFoundation: it isn’t portable (AVAudioPlayer isn’t even available on Mac OS X Snow Leopard), and it doesn’t scale well beyond a few simultaneous sounds.
Our audio engine uses OpenAL for performance, portability, and some special effects. OpenAL is a C-based API, and there isn’t a single line of Obj-C in our audio implementation.
We understand how useful current time would be, but it is actually a hard problem because nothing will hand us this value directly, and there may be some guesswork involved too.
In OpenAL there are two base cases: fully decoded buffers (e.g. audio.loadSound) and streaming buffers (audio.loadStream). The first case is easy, because OpenAL can report the playback offset directly.
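For that fully decoded case, here is a minimal sketch of the query in plain C against the standard OpenAL 1.1 API (the helper name current_time_static is mine, not part of our API):

```c
#include <AL/al.h>  /* <OpenAL/al.h> on iOS and Mac */

/* For a source whose sound was fully decoded up front and attached with
   alSourcei(source, AL_BUFFER, buffer), OpenAL 1.1 reports the playback
   position directly via AL_SEC_OFFSET (or AL_SAMPLE_OFFSET/AL_BYTE_OFFSET). */
static ALfloat current_time_static(ALuint source)
{
    ALfloat seconds = 0.0f;
    alGetSourcef(source, AL_SEC_OFFSET, &seconds);
    return seconds; /* seconds from the start of the attached buffer */
}
```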
The problem with streaming is that the file position and the part you actually hear playing are out of sync. With streaming, there is file read-ahead going on to buffer up samples in advance and avoid hiccups and stalls, which means the file position can be much further ahead than what’s playing. OpenAL only tells us the position relative to the buffers it currently has queued in memory; it has no notion of how those buffers relate to your file as a whole.
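You can see the limitation by querying a streaming source directly. A sketch, assuming the usual buffer-queue streaming setup (the function name is hypothetical):

```c
#include <stdio.h>
#include <AL/al.h>

/* On a streaming source, the reported offset is measured against the
   chain of buffers currently queued on the source, not the whole file,
   so it shifts every time a processed buffer is unqueued. */
static void inspect_stream_position(ALuint source)
{
    ALint queued = 0, processed = 0, sample_offset = 0;
    alGetSourcei(source, AL_BUFFERS_QUEUED, &queued);
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    alGetSourcei(source, AL_SAMPLE_OFFSET, &sample_offset);

    /* sample_offset is relative to the buffer queue, not the file; the
       file reader may already be many buffers ahead of this point. */
    printf("queued=%d processed=%d queue offset=%d samples\n",
           queued, processed, sample_offset);
}
```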
So to solve this, bookkeeping has to be done to take the OpenAL buffer position and relate it back to a position in the original file. And it gets even harder once the audio is seeked, rewound, or looped, because all of that bookkeeping has to be reset or adjusted.
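To make that concrete, here is a rough bookkeeping sketch in C. Everything in it is hypothetical and deliberately simplified (fixed-size buffers, no seek/rewind/loop handling), not how ALmixer actually does it:

```c
#include <AL/al.h>

/* Hypothetical bookkeeping: count the samples held by every buffer we
   unqueue, then add the current offset within the remaining queue.
   Seeking, rewinding, and looping would all have to reset or adjust
   samples_played, which is exactly where this gets hard. */
typedef struct StreamBookkeeping {
    ALuint source;
    ALuint samples_played;     /* samples consumed by already-unqueued buffers */
    ALuint samples_per_buffer; /* assumes every queued buffer is the same size */
    ALuint sample_rate;        /* samples per second of the source file */
} StreamBookkeeping;

static double estimated_file_time(StreamBookkeeping* s)
{
    ALint processed = 0;
    alGetSourcei(s->source, AL_BUFFERS_PROCESSED, &processed);

    /* Unqueue finished buffers and account for the samples they held.
       (A real streamer would refill and requeue them here too.) */
    while (processed-- > 0) {
        ALuint buffer;
        alSourceUnqueueBuffers(s->source, 1, &buffer);
        s->samples_played += s->samples_per_buffer;
    }

    /* Offset within the buffers still on the queue, plus everything
       already played, approximates the position in the original file. */
    ALint offset = 0;
    alGetSourcei(s->source, AL_SAMPLE_OFFSET, &offset);
    return (double)(s->samples_played + (ALuint)offset) / (double)s->sample_rate;
}
```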
If you are determined to tackle this, ALmixer is our interface/implementation layer on top of OpenAL. It is open source, and I am very happy to take submissions.
http://playcontrol.net/opensource/ALmixer/