I’m still unclear what purpose sampling the entire image pixel-by-pixel serves, i.e. what functionality are you trying to make? What will your app do?
Let’s start in gaming terms: you could imagine a bitmap image being used like a tiled map.
E.g. if there is a red pixel, display a crate; if there is a green pixel, display grass; and so on.
That’s just a concept for you: I’m not doing that; there is an extra level to it.
This is not a game app: it is a utility used for designing a grid of colored squares, like bathroom tiles.
What I need to do is:
Open any photo or image that the user selects.
Probably resize it to something smaller, e.g. to fit inside a 200 x 200 frame or smaller (see the sizing sketch after the examples link below).
Look at each pixel in turn, and find the ‘closest match’ from up to 300 pre-determined RGB values.
If my set of colors is numbered 1 to 300,
the data result will essentially be a matrix of integers in the range 1…300, which can be displayed in the form below using the fixed set of RGB values (which I have).
Examples:
http://kaamar.com/mosaic-art/mosaic-tiles/otter-face
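As a rough sketch of the resize step, here is the aspect-fit math in plain Lua (no Corona calls; fitScale, maxW and maxH are just illustrative names I’ve made up):

-- Scale factor that fits a w x h image inside a maxW x maxH frame,
-- preserving aspect ratio and never enlarging the image.
local function fitScale(w, h, maxW, maxH)
    return math.min(maxW / w, maxH / h, 1)
end

-- e.g. a 1000 x 600 photo into a 200 x 200 frame:
local s = fitScale(1000, 600, 200, 200)  -- 0.2
print(s * 1000, s * 600)                 -- 200  120, i.e. sample a 200 x 120 grid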


So, to decide which integer represents pixel (34, 35),
I need to know that the source pixel contains rgb(200,127,54),
and then decide that the nearest available tile color is rgb(198,124,60), which I might find at index 12 of my rgbvalues table,
and do the same for all the pixels in the image.
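The matching itself is easy to express in plain Lua. Here’s a minimal sketch, assuming rgbvalues is a table of {r, g, b} triples and using squared Euclidean distance in RGB space (simple and fast; a perceptual space like Lab matches more accurately but costs more):

-- Return the index of the entry in rgbvalues closest to (r, g, b).
-- Squared distance is enough, since we only compare distances.
local function nearestIndex(rgbvalues, r, g, b)
    local bestIndex, bestDist = 1, math.huge
    for i = 1, #rgbvalues do
        local c = rgbvalues[i]
        local dr, dg, db = r - c[1], g - c[2], b - c[3]
        local d = dr * dr + dg * dg + db * db
        if d < bestDist then
            bestIndex, bestDist = i, d
        end
    end
    return bestIndex
end

-- Tiny palette just to demonstrate; index 2 is nearest to rgb(200,127,54):
local palette = { {255, 0, 0}, {198, 124, 60}, {0, 128, 0} }
print(nearestIndex(palette, 200, 127, 54))  -- prints 2

-- Given some getPixel(x, y) -> r, g, b (exactly the piece that’s hard to
-- get at in Corona), the whole result matrix would just be:
--   for y = 1, h do
--     for x = 1, w do
--       matrix[y][x] = nearestIndex(rgbvalues, getPixel(x, y))
--     end
--   end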
The frustrating thing about this is that I see many apps that take a face and do amazing things with the pixels to re-form it, color it, distort it, and so on.
They are lightning fast.
We can use Corona to apply fast filters to images, which is great, but we have no way of interrogating what we ended up with.