The only compression the sdk will do is built into the display.save / capture functions, and there's no user control over the actual compression level. I believe you can change the basic compression scheme by naming your file .jpg or .png (png is lossless in Corona I believe, jpg has some default compression applied), but that's about it.
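For reference, something along these lines is all you really get (a rough sketch using the basic display.save( object, filename, baseDirectory ) form; the filenames and the rect are just placeholders):

```lua
-- The file extension picks the encoder; there's no quality/level knob to turn.
local group = display.newGroup()
local rect = display.newRect( group, 160, 240, 200, 200 )

-- PNG (lossless, larger file):
display.save( group, "capture.png", system.DocumentsDirectory )

-- JPEG (the sdk's built-in default compression, no way to tune it):
display.save( group, "capture.jpg", system.DocumentsDirectory )
```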
The only other lever on the file size I know of is the pixel dimensions of your saved image. You need to pay attention to this, because if you are using content scaling, the capture still happens at device resolution. This means that if you capture fullscreen on an iPad 3 (retina) the saved image will be HUGE, but if the same code captures on an old iPhone 3GS, there are a lot fewer pixels (only 320x480).
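You can see the difference by printing the device vs. content dimensions before you capture (rough sketch, filename is a placeholder):

```lua
-- The saved file follows device pixels, not your content area.
print( "device pixels: ", display.pixelWidth, "x", display.pixelHeight )
print( "content units: ", display.contentWidth, "x", display.contentHeight )

local cap = display.captureScreen( false )  -- false = don't also save to the photo album
display.save( cap, "fullscreen.png", system.DocumentsDirectory )
cap:removeSelf()  -- clean up the temporary capture object
```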
Note that if you are using content scaling, there are some things you can do to adjust your image relative to the device's actual resolution - in particular, adjusting your image size based on the display.contentScaleX/Y properties might help you get a more consistent image size across platforms.
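Something like this is the kind of thing I mean (untested sketch - shrink the capture toward content-space size before saving; whether the saved file lands at exactly that size will depend on how display.save rasterizes the object):

```lua
-- display.contentScaleX/Y are the content-unit-to-pixel ratios, so scaling
-- the capture by them pulls a retina-sized capture back toward content size.
local cap = display.captureScreen( false )
cap:scale( display.contentScaleX, display.contentScaleY )
display.save( cap, "normalized.png", system.DocumentsDirectory )
cap:removeSelf()
```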
Another note - the reason the sdk didn't render the next screen frame is because the thread your code was running on was still active. If you put a timer.performWithDelay in there to break up your code a little, even for 1 ms, the sdk's enterFrame/render code would have a chance to kick off (if it was time for the next frame).
Basically, your code to capture, save, and then clean itself up could take 5000 ms (5 seconds straight), and the sdk wouldn't fire its next render frame until your code exited from its processing (in your current code, until your photo code hits its final closing brace/return).
It's just the way the sdk works - it won't render in the middle of ANY of your code until your code thread exits (pause your code with a timer.performWithDelay, and boom, the sdk has a chance to render).
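In other words, break the heavy work out of the current call chain with a short delay, something like this (sketch - heavyCaptureAndSave and the filename are just placeholders):

```lua
local function heavyCaptureAndSave()
    -- All the slow stuff happens here, after the runtime has had a chance
    -- to render the updated screen.
    local cap = display.captureScreen( false )
    display.save( cap, "photo.png", system.DocumentsDirectory )
    cap:removeSelf()
end

-- Update your display objects as needed, then yield back to the runtime;
-- even a 1 ms delay gives enterFrame/render a chance to run first.
timer.performWithDelay( 1, heavyCaptureAndSave )
```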