A while back somebody asked me about doing screen captures in my impack plugin, so I took a swing at it. Calling OpenGL's glReadPixels directly got me pretty far on Windows and Mac (with minor issues like extra row padding and the image coming back upside-down), but returned nothing on mobile. On Android it looks like you need to go through Bitmap (which seems to be Java-only, with no NDK counterpart… that stopped me cold!), and on iOS something with UIImage.
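For the desktop case, the two "minor issues" are mechanical enough to sketch. This is just my take, not what the plugin actually does: `padded_stride` accounts for GL_PACK_ALIGNMENT-style row padding, and `flip_rows` turns glReadPixels' bottom-up rows into a top-down image. Both helper names are made up, and the glReadPixels usage in the comment assumes you have a live GL context.

```c
#include <stdlib.h>
#include <string.h>

/* Bytes per row once padded to the pack alignment (GL's default is 4). */
size_t padded_stride(size_t width, size_t bytes_per_pixel, size_t alignment)
{
    size_t raw = width * bytes_per_pixel;
    return (raw + alignment - 1) / alignment * alignment;
}

/* glReadPixels hands back rows bottom-up; swap them in place to get a
 * conventional top-down image. */
void flip_rows(unsigned char *pixels, size_t height, size_t stride)
{
    unsigned char *tmp = malloc(stride); /* scratch row; error check omitted */
    for (size_t y = 0; y < height / 2; ++y) {
        unsigned char *a = pixels + y * stride;
        unsigned char *b = pixels + (height - 1 - y) * stride;
        memcpy(tmp, a, stride);
        memcpy(a, b, stride);
        memcpy(b, tmp, stride);
    }
    free(tmp);
}

/* With a live GL context you would pair these with something like:
 *   glPixelStorei(GL_PACK_ALIGNMENT, 1);  // or keep 4 and use padded_stride
 *   glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 *   flip_rows(pixels, h, padded_stride(w, 4, 1));
 */
```

Setting GL_PACK_ALIGNMENT to 1 sidesteps the padding entirely, at a possible small cost in readback speed; the stride helper is for when you leave the default alone.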
This gave me new respect for display.capture() and friends. :) Presumably those must fire after all display objects are done drawing, but before everything else, so to handle that "everything else" you would basically need to emulate the capture APIs using the stuff mentioned above, with all the grisly per-platform details that entails.
Once you've gotten that squared away, there's another problem: you have no way to schedule your capture for when "everything else" is actually happening.
It might be possible to discover the native objects and then subscribe to their events, such as rendering. If so, you're in luck. But I don't know enough to say how feasible or difficult that is.
I'm also not sure whether the top-level native objects render into the same video memory as your Corona program; I haven't checked. If they do, the results could be captured at the start of the next frame. The trouble then is that Corona clears the screen, and again you have no way to jump in before that happens.
If everything is together in memory, a “capture before frame is cleared” API could actually be a decent compromise, and probably not too much trouble to implement.
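To make the ordering concrete, here is a toy model of the frame lifecycle I'm describing. Every name in it is invented (Corona exposes no such hook today; that's the whole point): it just shows that a "capture before clear" callback would slot in after all drawing but before the clear, which is the only moment the complete frame is still in the framebuffer.

```c
#include <string.h>

/* Hypothetical frame lifecycle. All names are made up for illustration;
 * this models the ordering argued in the text, not any real Corona API. */

enum { MAX_EVENTS = 8 };
const char *g_log[MAX_EVENTS];
int g_count = 0;

void log_event(const char *name)
{
    if (g_count < MAX_EVENTS)
        g_log[g_count++] = name;
}

/* One frame: Corona draws, native overlays draw on top, then (in this
 * imagined design) the capture hook fires while everything is still in
 * the framebuffer, and only then does the clear wipe it for the next frame. */
void run_one_frame(void (*capture_hook)(void))
{
    log_event("corona_draw");
    log_event("native_draw");
    if (capture_hook)
        capture_hook(); /* last chance before the pixels are gone */
    log_event("clear");
}

void my_capture(void) { log_event("capture"); }
```

Run with `run_one_frame(my_capture)` and the log comes out corona_draw, native_draw, capture, clear; without the hook, the frame is wiped before anyone can read it, which is exactly the gap such an API would fill.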
Anyhow, I brought up my libVLC stuff above because it’s trivial to grab the video contents if you’re the one serving them in the first place! 