Open Photo, Read RGB of pixel at x,y

Several years ago, I came to Corona and spent months writing an app, which I released on the App Store.

Over time, many users asked for the ability to do things with photos from their devices.

There wasn’t any method to get the RGB value of a pixel from an opened photo, and after waiting over 18 months for news that such a feature would be added, I gave up, mothballed the project, and finally abandoned iOS development.

Recently, I decided to ‘have another go’… surely things have improved…

But it seems there is still no way to open a photo, know that, for example, pixel (200, 200) is a nice red color, and do something about it.

Am I wrong?

Does such a feature exist now?

I have searched the forum for GetPixel, ReadPixel, and GetRGB without success.

See: https://docs.coronalabs.com/api/library/display/colorSample.html

Rob

Aha.

That's terrific. I can work with that. It will be interesting to see what the performance is like. (I typically need to read a 200 x 200 image pixel by pixel.)

Not sure if that constitutes a tight loop, but there's time to investigate…

Sampling 40,000 pixels in a loop will not be feasible. 

That feature is designed for spot sampling.

Sampling 40,000 pixels in a loop will not be feasible. 

It will be some time before I can do a test on a physical machine - I was planning to get a reasonable prototype tested before jumping through those hoops.

Any idea how unfeasible 40,000 checks would be?

1 minute?  (I’d still do it with an ‘apologies’ message)

20? 

(I don't think any amount of 'this will take a while' messages would excuse that.)

I’m not sure I see the application of ‘spot sampling’,

but if this is a feature that is effectively unusable in the wild, is there a better plugin or library that would be feasible?

You might also want to consider this blog post:

https://coronalabs.com/blog/2016/09/22/introducing-external-bitmap-textures/

The memoryBitmap plugin discussed there has getPixel() methods that run on an image outside OpenGL space. It’s not that useful for SDK users on its own, but if you can write a loader to read your image and create a bitmap texture from it, then using the getPixel() method will be significantly faster.  I seem to remember @vlads provided a simple BMP reader at some point; you will probably need to search for that.
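To illustrate the kind of CPU-side loop this enables, here is a sketch in plain Lua. The tex:getPixel(x, y) accessor mimics the memoryBitmap texture API (which, if I recall correctly, returns r, g, b, a as 0–1 values); the stub below stands in for a real loaded bitmap, so treat it as a shape-of-the-loop sketch rather than working plugin code:

```lua
-- Stub standing in for a memoryBitmap-style texture. A real one would
-- come from graphics.newTexture{ type = "memoryBitmap", ... } plus a
-- loader that fills it from the user's photo.
local tex = {
    width = 4,
    height = 4,
    getPixel = function( self, x, y )
        -- Fake data: left half red, right half dark green (0..1 values).
        if x <= self.width / 2 then
            return 1, 0, 0, 1
        else
            return 0, 0.5, 0, 1
        end
    end,
}

-- Walk every pixel once on the CPU side and collect RGB triples.
-- For a real 200 x 200 bitmap this is 40,000 getPixel calls, but they
-- are plain memory reads, with no per-sample round trip to the GPU.
local pixels = {}
for y = 1, tex.height do
    pixels[ y ] = {}
    for x = 1, tex.width do
        local r, g, b = tex:getPixel( x, y )
        pixels[ y ][ x ] = { r, g, b }
    end
end

print( pixels[ 1 ][ 1 ][ 1 ], pixels[ 1 ][ 4 ][ 2 ] )  -- 1   0.5
```

From there, each { r, g, b } triple can be matched against a tile palette entirely in memory.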

Rob

@sales3,

I’m still unclear what the purpose of sampling the entire image pixel-by-pixel serves.  i.e. What functionality are you trying to make?  What will your app do?

The more specific you can be with your answer, the better the help and suggestions you will likely get.

I’m still unclear what the purpose of sampling the entire image pixel-by-pixel serves.  i.e. What functionality are you trying to make?  What will your app do?

To put it in gaming terms: imagine that a bitmap image was being used like a tiled map.

E.g. if there is a red pixel, display a crate; if there is a green pixel, display grass; and so on.

That's just a concept for you; I'm not doing exactly that, but there is an extra level.

This is not a game app: it is a utility used for designing a grid of colored squares, like bathroom tiles.

What I need to do is:

  • Open any photo or image that the user selects.
  • Probably resize it to something smaller (e.g. to fit inside a 200 x 200 frame or smaller).
  • Look at each pixel in turn, and find the ‘closest match’ from up to 300 pre-determined RGB values.

If my set of colors is numbered 1 to 300, the result will essentially be a matrix of integers in the range 1…300, which can be displayed in the form below using the fixed set of RGB values (which I have).

Examples:

http://kaamar.com/mosaic-art/mosaic-tiles/otter-face


So to decide which integer represents pixel (34, 35),

I need to know that the source pixel contains rgb(200, 127, 54),

and then decide that the nearest available tile color is rgb(198, 124, 60), which I might find at index 12 of my rgbvalues table.

And do the same for all the pixels in the image.
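That nearest-match step can be sketched in plain Lua using squared Euclidean distance in RGB space; the three-entry palette below is hypothetical, standing in for the full table of up to 300 values:

```lua
-- Return the index of the palette entry closest to ( r, g, b ),
-- using squared Euclidean distance in RGB space.
local function nearestColorIndex( palette, r, g, b )
    local bestIndex, bestDist = nil, math.huge
    for i, c in ipairs( palette ) do
        local dr, dg, db = c[1] - r, c[2] - g, c[3] - b
        local dist = dr * dr + dg * dg + db * db
        if dist < bestDist then
            bestIndex, bestDist = i, dist
        end
    end
    return bestIndex
end

-- Hypothetical three-entry palette; a real one holds up to 300 entries.
local palette = {
    { 255, 0, 0 },     -- 1: red
    { 0, 128, 0 },     -- 2: green
    { 198, 124, 60 },  -- 3: the tile color from the example above
}

print( nearestColorIndex( palette, 200, 127, 54 ) )  -- 3
```

A perceptually weighted distance (or a conversion to CIE Lab first) would give better-looking matches than raw RGB distance, but the loop shape stays the same.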

The frustrating thing about this is that I see many apps that take a face and do amazing things with the pixels to re-form it, color it, distort it, and so on.

They are lightning fast.

We can use Corona to apply fast filters to images, which is great, but we have no way of interrogating what we ended up with.

The problem is that Corona display objects live in OpenGL space, meaning the data sits in the device’s graphics card pipeline. Getting pixel data out of there is difficult.  This is why I mentioned the other blog post. What you need to do needs to be done on the CPU side, not the GPU side.

Rob

What you need to do needs to be done on the CPU side, not the GPU side.

That’s beyond me, sadly.

Searching the forum, I haven’t found anyone who has used that API to develop a library (e.g. with ImageMagick) or to open an existing image.

@saless

Thanks for the detailed explanation. Rob has replied, but let me say that the feature we are talking about, colorSample(), is not what you are looking for.

Additionally, as frustrating as it may be for you, I’m sorry but Corona does not integrate image processing APIs of the variety you are seeking.  However, you can implement what you need via Corona Native.  

I suspect the applications you’ve seen were all native and integrated specialized libraries.  So, you’re not likely to find a game/app engine/SDK out there that will do what you want; i.e. you’ll need to write native code.

The good thing is, if you can get past the initial bump of learning to write apps using Corona Native, you’ll be in the best of both worlds:

  • Have Corona’s awesome dev features available to you, AND
  • Have any native feature you need via your own integration efforts.

In closing, if you can’t do this on your own, you should consider posting a job listing here and pay someone to help you code the app.

Cheers,

Ed (errr… I mean The Roaming Gamer)

Hi.

My impack plugin (and Bytemap, which complements it) was built around my own image-editing needs (in particular texture synthesis).

I’ve been doing some much-needed but still-sporadic maintenance on it recently, but it might be interesting to you.
