Enterprise n00b here, was just trying to wire up a screen grab routine…
So, I’m looking at capturing a small part of the screen natively on iOS… I call from lua into my obj C, passing in a filename to save out and some coords… Everything looks good on that side…
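For reference, the bridge itself is just the standard lua C API (Enterprise hands you the lua_State), so the values arrive roughly like this. A minimal sketch; captureArea and saveScreenArea are my own placeholder names, not corona calls:
[lua]
#include "lua.h"
#include "lauxlib.h"

void saveScreenArea(const char *destFilename); // the grab routine further down

// lua side: captureArea( "shot.jpg", x, y, w, h )
static int captureArea(lua_State *L)
{
    const char *filename = luaL_checkstring(L, 1);
    int x = (int)luaL_checkinteger(L, 2);
    int y = (int)luaL_checkinteger(L, 3);
    int w = (int)luaL_checkinteger(L, 4);
    int h = (int)luaL_checkinteger(L, 5);
    (void)x; (void)y; (void)w; (void)h; // the grab below just captures everything for now
    saveScreenArea(filename);
    return 0; // nothing returned to lua
}

// registered once at startup, e.g.: lua_register(L, "captureArea", captureArea);
[/lua]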
Next I wrote about half a dozen routines to render the screen / save it using various techniques… At best I got an all-black or all-white image. So after a while of starting to dig into opengl, I printed out the gl version and it says “openGL version : ES-CM 1.1”… (from NSLog(@"- openGL version : %s", glGetString(GL_VERSION)))
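For what it’s worth, my current theory on the black/white images is that the grab is running without corona’s GL context current (wrong thread?), or after the renderbuffer was presented (at which point its contents are undefined). So I’ve been logging this right before the readback; nothing corona-specific, just plain GL:
[lua]
NSLog(@"GL_VERSION  : %s", glGetString(GL_VERSION));
NSLog(@"GL_RENDERER : %s", glGetString(GL_RENDERER));
GLenum pendingErr = glGetError(); // reading it also clears the error flag
if (pendingErr != GL_NO_ERROR)
    NSLog(@"GL error already pending before readback: 0x%04X", pendingErr);
[/lua]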
So I clean up my code to make it just opengl 1 stuff, and that way I start getting an undeclared-identifier error for glReadBuffer()… (turns out glReadBuffer is desktop OpenGL only; it isn’t declared in the ES 1.1 or 2.0 headers at all, and ES always reads from the color buffer of whatever framebuffer is bound, so the call isn’t needed).
Anyways, as opposed to trying to grab the screen using gl 1, I was wondering if I should wait for 2.0, as I may have to re-write it… i.e., with glReadBuffer missing in this version, I wasn’t sure everything from a 1.1 solution would be guaranteed to work in 2.0. (From what I can tell, glReadPixels() with GL_RGBA / GL_UNSIGNED_BYTE is guaranteed in both 1.1 and 2.0, so a readback-based grab should carry over unchanged.)
And speaking of a solution, if anyone has any tips on how to grab an image from the corona gl buffer using gl 1.1, just for kicks, I’d appreciate it. The following is one of my more generic attempts to grab a screen area and make a UIImage of it (based on a web example):
As another note: am I missing something, or are there zero examples of how to interface between corona’s display system and the OS (particularly iOS)? I couldn’t find any doc (or even a forum question) related to taking screenshots, sharing graphics, or even sharing a pixel between the two…
[lua]
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/ES1/gl.h>
#import <OpenGLES/ES1/glext.h>
// filename and coords of the area are passed in from the corona lua code…
// (the following just tries to grab anything / everything for now)
static void saveScreenArea(const char *destFilename)
{
GLint backingWidth2, backingHeight2;
//Bind the color renderbuffer used to render the OpenGL ES view
// If your application only creates a single color renderbuffer which is already bound at this point,
// this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
// Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
//
//Bind the buffers.
//glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer); // !!! No idea where corona keeps this buffer, or how to access it…
// Get the size of the backing CAEAGLLayer
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth2);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight2);
NSInteger x = 0, y = 0, width2 = backingWidth2, height2 = backingHeight2;
NSInteger dataLength = width2 * height2 * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
//glReadBuffer(GL_BACK); // undeclared because glReadBuffer is desktop GL only (ES didn't get it until 3.0);
// ES always reads from the color buffer of the currently bound framebuffer, so the call isn't needed here
glReadPixels(x, y, width2, height2, GL_RGBA, GL_UNSIGNED_BYTE, data); // GL_RGBA + GL_UNSIGNED_BYTE is the pairing ES 1.1 guarantees
// Create a CGImage with the pixel data
// If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel
// otherwise, use kCGImageAlphaPremultipliedLast
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width2, height2, 8, 32, width2 * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast, ref, NULL, true, kCGRenderingIntentDefault);
// OpenGL ES measures data in PIXELS
// Create a graphics context with the target size measured in POINTS
NSInteger widthInPoints, heightInPoints;
if (NULL != UIGraphicsBeginImageContextWithOptions) {
// On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
// Set the scale parameter to your OpenGL ES view’s contentScaleFactor
// so that you get a high-resolution snapshot when its value is greater than 1.0
CGFloat scale = [[UIScreen mainScreen] scale]; // we don't own the GL view, so use the screen scale rather than a hard-coded 2.0 (safe here: this branch only runs on iOS 4+)
widthInPoints = width2 / scale;
heightInPoints = height2 / scale;
UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), YES, scale);
}
else {
// On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
widthInPoints = width2;
heightInPoints = height2;
UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
}
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// Retrieve the UIImage from the current context
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// Save the file
NSString *jpgBasename = [NSString stringWithUTF8String:destFilename]; // autoreleased (the alloc/init version leaked)
NSString *jpgPath = [NSTemporaryDirectory() stringByAppendingPathComponent:jpgBasename];
NSLog(@"JPG Path: %@", jpgPath);
[UIImageJPEGRepresentation(viewImage, 1.0) writeToFile:jpgPath atomically:YES];
NSLog(@"JPEG written (7)?!");
// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
// Restore Corona's context (if the grab had to switch contexts first)
//[EAGLContext setCurrentContext:coronaContext];
}
[/lua]
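And for the sub-area part (the original goal), glReadPixels already takes a rect; the only catch is that GL’s origin is bottom-left while corona / UIKit coords are top-left, and the rows come back bottom-up. Something like this is what I’m picturing (readScreenArea is just a placeholder name; it assumes corona’s context and renderbuffer are current, same as above):
[lua]
// Read back a sub-rect given in top-left-origin pixel coords (as they'd arrive from lua).
static GLubyte *readScreenArea(int x, int y, int w, int h)
{
    GLint backingHeight = 0;
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
    GLint glY = backingHeight - y - h; // flip Y: GL's origin is the bottom-left corner
    GLubyte *pixels = (GLubyte *)malloc((size_t)w * h * 4);
    glReadPixels(x, glY, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // rows are bottom-up; the CGContextDrawImage step above flips them
    // when rendering into the UIKit context, same as the full-screen grab
    return pixels; // caller frees
}
[/lua]
Since glReadPixels with GL_RGBA / GL_UNSIGNED_BYTE behaves the same under ES 2.0, this part at least shouldn’t need a rewrite later.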