Discard return object of display.capture()

@CoronaLabs
 
I’d like to tell display.capture() not to return a display object since I’m only interested in saving an image to the photo library.
 
As it is now, I’m forced to use the following statements to save a full-resolution image to the device’s photo library:

local img = display.capture(fullsizeImage, {saveToPhotoLibrary=true, isFullResolution=true})
img:removeSelf()
img = nil

 
The problem with the above is that display.capture() returns a display object that will never be used, and it doubles the memory requirements for no good reason.
In an image-editing app this becomes an issue when large full-resolution images are loaded. An option to discard the return object would mean that even larger images could be edited and saved without running into memory problems.
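
For context, this is roughly how I wrap that pattern today (saveFullResCapture is just my own helper name, not anything Corona provides):

local function saveFullResCapture(obj)
    -- Capture straight to the photo library, then immediately discard
    -- the display object we never needed.
    local img = display.capture(obj, { saveToPhotoLibrary = true, isFullResolution = true })
    img:removeSelf()
    img = nil
end

saveFullResCapture(fullsizeImage)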
 
It would be great if there were an extra option in the options table telling display.capture() not to create the returned display object.
 
 
I’d like to do something like this:

display.capture(fullsizeImage, {saveToPhotoLibrary=true, isFullResolution=true, discardReturnObject=true});

It depends on how they implemented display.capture, but I’d guess that…

The display.capture call will use the memory in any case, as it needs to crop a (potentially small) section of the display for jpg/png compression and save the final compressed buffer out. Basically, that’s two buffers created during the call: one for the uncompressed cropped area to be compressed, the other for the compressed image ready to be saved. They could have written the compression to crop/compress without using a second buffer, but that’s more complicated and takes longer to code than using a cropped buffer as the source for compression…

So the SDK likely returns the uncompressed (cropped) buffer (the one used for the compression) in case it’s useful to the app. If CoronaLabs provided an option to remove/nil it before returning from display.capture, you’d likely still have a peak memory hit, but it would be limited to the duration of the display.capture call itself, methinks.
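
A rough way to poke at this from the Lua side (purely an experiment of mine; it only sees texture memory, so any temporary native buffers inside the call itself won’t show up here):

local before = system.getInfo("textureMemoryUsed")
local img = display.capture(fullsizeImage, { isFullResolution = true })

-- Check again a little later, after the next render pass has had a chance
-- to upload the captured image as a texture.
timer.performWithDelay(100, function()
    local after = system.getInfo("textureMemoryUsed")
    print("texture memory delta (MB):", (after - before) / 1048576)
    img:removeSelf()
    img = nil
end)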

Is that about how it works inside display.capture, CoronaLabs?

mpappas is right. You have no choice but to take the memory hit. We need to fetch the display’s pixels into memory via OpenGL’s glReadPixels() so that we have the image data needed to write something to file. The returned display object is merely a wrapper around that captured image in memory. Also, if you remove the returned object immediately (like you’re doing), that prevents Corona from submitting the image in RAM to OpenGL as a texture on the next render pass, avoiding the double memory hit you mentioned. So, how it works today is the optimal solution.
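
To make that concrete against the original snippet (the comments simply restate the explanation above):

local img = display.capture(fullsizeImage, { saveToPhotoLibrary = true, isFullResolution = true })
-- At this point the pixels have already been read back via glReadPixels()
-- and written out; img is only a wrapper around that copy in RAM.
img:removeSelf()
-- Removing it before the next render pass means the capture is never
-- uploaded to OpenGL as a texture, so the double memory hit never happens.
img = nil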

Thanks for the info, guys.
I suspected there were internal reasons for the way it’s done, but I was throwing an idea out into the wild just in case :)

It’s perfectly fine the way it is. Graphics 2 has opened up so many new possibilities for what can be done with Corona.
All I need now is 50-hour days!
