Character animation, sprite audio lip sync possible?

I’m very new to Corona (just downloaded it last night :)). The reason I’m learning Corona is that I’m having a very hard time optimizing Flash content to run smoothly on iOS; all my content seems to run below an acceptable framerate.
But since I’m used to creating animation in Flash, especially character animation (with all those nested-movieclip goodies), I have to wonder: is there a way to create lip-synced character animation in Corona without coding it all manually?
I’m fluent in AS 3.0 and Java, so Lua doesn’t look like it will be a problem for me. But I’d rather build the lip-sync animation non-programmatically, since I’m planning to create an adventure game like “Puzzle Agent”, which involves a lot of talking characters. I can export the images from Flash to Corona with no problem, and the actors’ voice recordings are also available. The question is how to sync them.
My question is very much a novice one and probably has a simple answer. We’ll see. Thanks. [import]uid: 16488 topic_id: 5137 reply_id: 305137[/import]

No reply yet? :frowning:
I really need to know how to do this.
Alternatively, is there a way to import a small-resolution video with an alpha channel that already has the animation and audio synced? That way I could animate in any animation package and bring the result into Corona for talking-character cutscenes.
Any help would be much appreciated. [import]uid: 16488 topic_id: 5137 reply_id: 17405[/import]

I think I know what you are referring to, but do you have a demo I can look at so I can check whether I understand correctly?

We added microphone support to Corona, and one app uses it to let users tune their guitars.

Is this what you are referring to?

http://www.zefrank.com/frog/frog.html

Or do you mean an actual MP3 file, where you would need to extract some FFT data to sync against the waveform?

c
[import]uid: 24 topic_id: 5137 reply_id: 17442[/import]

In the Flash IDE you would sync the audio and the animation on the timeline; that appears to be how you’re used to doing it.

In a code-only Flash environment, however, you would probably read a data file (XML, etc.) specifying the animation frames, then check sound.position in a timer to keep the two in sync.
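Something along these lines, as a very rough sketch. Since Corona doesn’t expose a playback position yet, the sketch substitutes system.getTimer() measured from when the sound starts, and the cue times and file name are just made-up illustration data:

[lua]
-- Rough sketch: poll "playback position" in a timer and pick the matching frame.
-- Elapsed time from system.getTimer() stands in for sound.position.
local cues = {
    { time = 0,   frame = 1 },
    { time = 250, frame = 3 },
    { time = 600, frame = 2 },
}

local startTime = system.getTimer()
-- start the voice clip here, e.g. media.playSound( "line01.wav" )  -- hypothetical file

local function checkSync()
    local position = system.getTimer() - startTime   -- approximate position in ms
    for i = #cues, 1, -1 do
        if position >= cues[i].time then
            print( "show animation frame", cues[i].frame )   -- swap the sprite frame here
            break
        end
    end
end

timer.performWithDelay( 33, checkSync, 0 )   -- poll roughly every frame; 0 repeats forever
[/lua]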

Carlos, request for [lua]channel.getPosition()[/lua] :slight_smile:

thanks
j

[import]uid: 6645 topic_id: 5137 reply_id: 17478[/import]

Hi guys, thank you for helping. As for a demo, I think a game like “Puzzle Agent” on iPhone/iPad is close to what I’m trying to achieve: an adventure game with a lot of talking characters.

Basically I have a sprite sequence and audio. I need a way to play both of them synchronously so the talking characters look natural. [import]uid: 16488 topic_id: 5137 reply_id: 17652[/import]

Hi jmp909, I think you are right. With a sound-position feature I could use some sort of timer or enterFrame event listener, with XML data or an array as the reference, to play back the right sprite animation.

Lip-sync data can be generated with freeware like Papagayo; all I need is a handful of sprites for the mouth positions (a mouth chart) and voila, we have talking lip-sync animation generated from only a few mouth sprites (see the sketch below). I would love to build the code or framework and contribute it to the community, if I can and if no one has already contributed something similar.
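Something like this sketch is roughly what I have in mind for the mouth-swapping part (the image file names and phoneme labels here are made up; they would just need to match whatever the lip-sync data uses):

[lua]
-- Sketch: a handful of mouth sprites swapped by phoneme name.
local mouthImages = {
    rest = "mouth_rest.png",
    MBP  = "mouth_mbp.png",
    AI   = "mouth_ai.png",
    O    = "mouth_o.png",
    etc  = "mouth_etc.png",
}

local mouth = display.newImage( mouthImages.rest )
mouth.x, mouth.y = 160, 240
local currentPhoneme = "rest"

-- Swap the mouth image only when the phoneme changes; call this from a timer
-- or enterFrame listener that looks up the current phoneme in the cue data.
local function setMouth( phoneme )
    if phoneme ~= currentPhoneme and mouthImages[ phoneme ] then
        local newMouth = display.newImage( mouthImages[ phoneme ] )
        newMouth.x, newMouth.y = mouth.x, mouth.y
        mouth:removeSelf()
        mouth = newMouth
        currentPhoneme = phoneme
    end
end
[/lua]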

This would be better than my old way of doing tedious timeline animation in Flash Pro, since everything would be generated automatically. I would love to see a feature for getting the sound position in Corona. Or is it (or something similar) already available? Sorry, I’m very new to Corona. [import]uid: 16488 topic_id: 5137 reply_id: 17675[/import]

I’ll put a request in the forum for channel.position. It’s an available feature in OpenAL 1.1, so maybe it can make its way into Corona.


alxGetSoundOffset( sound_id )
Gets the current playback position of the sound buffer.

This function requires a OpenAL 1.1 or an extension of OpenAL 1.0
[import]uid: 6645 topic_id: 5137 reply_id: 17704[/import]

Thank you so much. I hope this gets picked up by the dev team. [import]uid: 16488 topic_id: 5137 reply_id: 17819[/import]

I’ve never thought about doing lip sync this way; I hope this feature request happens, because it’s a neat trick. And thanks for pointing out Papagayo, that looks really useful. Does it export breakdown sheets as data that can be read into a program? [import]uid: 12108 topic_id: 5137 reply_id: 17829[/import]

Papagayo can export a .dat file, which is basically a text file listing which mouth shape happens at which time. We can use simple string operations to convert it to XML or a simple 2D array, or just copy and paste it. I’ve never tried it myself :slight_smile: but I will write some code to make use of the .dat file once a sound-position feature is available, and I would love to contribute it to the community.
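Here is a rough sketch of how that conversion might look, assuming the .dat is Papagayo’s MOHO switch export (a “MohoSwitch1” header followed by lines of frame number and phoneme) and that the frame rate passed in matches the one set in Papagayo; the file name is hypothetical:

[lua]
-- Sketch: convert a Papagayo .dat export into a cue table of { time, phoneme }.
local function loadLipsyncData( filename, fps )
    fps = fps or 24
    local path = system.pathForFile( filename, system.ResourceDirectory )
    local cues = {}
    for line in io.lines( path ) do
        local frame, phoneme = string.match( line, "^(%d+)%s+(%S+)" )
        if frame then
            cues[#cues + 1] = {
                time    = ( tonumber( frame ) / fps ) * 1000,   -- frame number -> milliseconds
                phoneme = phoneme,
            }
        end
        -- the "MohoSwitch1" header line doesn't match the pattern and is skipped
    end
    return cues
end

-- usage (file name is hypothetical):
local cues = loadLipsyncData( "dialogue01.dat", 24 )
for i = 1, #cues do
    print( cues[i].time, cues[i].phoneme )
end
[/lua]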

But again, I’m very new to Corona :), so maybe more experienced Corona users can do something more fantastic with these Papagayo files? Or perhaps even auto lip sync directly from live sound data, which would be super awesome!! (I’m dreaming too much these days.)
[import]uid: 16488 topic_id: 5137 reply_id: 17893[/import]

@jmp909, are you sure alxGetSoundOffset is part of OpenAL? It sounds more like OpenAL Xtra, which AFAIK is not the same thing. [import]uid: 6787 topic_id: 5137 reply_id: 18707[/import]

Anime Studio has lip sync, exports PNG animations, and supports Lua scripts. [import]uid: 7356 topic_id: 5137 reply_id: 18719[/import]

The guy who made Anime Studio and Papagayo is actually the same person. It was previously known as Moho and changed its name to Anime Studio when his old company, Lost Marble, merged with Smith Micro. I think the lip sync has the same core, or was at least developed from Papagayo.

But Papagayo is free and Anime Studio is commercial. And I’m not sure I would need to animate anything in Anime Studio if a get-sound-position feature were available in Corona; with only a few mouth sprites you could automate the lip-sync process without importing hundreds of sprite frames.

You do need an animation package (such as Flash Pro, Toon Boom Animate, Anime Studio, etc.) if you plan to import a whole animation as video playback, but that’s a different story.

In my case I need a talking character that can interact with menus, physics, and/or user input (touch or gestures). That is why video playback is not what I’m looking for. [import]uid: 16488 topic_id: 5137 reply_id: 18734[/import]

Yeah, my bad:
http://openalxtra.sourceforge.net/

I was just reading this…
“This function requires a OpenAL 1.1 or an extension of OpenAL 1.0”
[import]uid: 6645 topic_id: 5137 reply_id: 18826[/import]

For interest, jmp909’s other thread:
http://developer.anscamobile.com/forum/2011/01/21/soundposition [import]uid: 7563 topic_id: 5137 reply_id: 18911[/import]