Framebuffers, TextureResources and Snapshots

Hello.

I am completely new to the concept of frame buffers. However, it seems like I need them now.

First, what I want to do: I want to draw a PNG of a light source that moves with the mouse cursor, in a scene that uses raycasting for shadows.

Right now, I have a mesh that represents the part of the scene that is illuminated. My research on the internet suggests that I should use frame buffers to solve my problem. Though I have never used them, I think I understand the concept: buffers that allow rendering separate images offscreen. However, I am still unsure how to use them.

Now, the first issue is finding the right tools in Corona SDK. To my understanding, Snapshots and TextureResources are the equivalent of framebuffers there. I do not yet understand the difference between them. What I could find and understand is that TextureResources are cached in CPU RAM, whereas Snapshots exist only on the GPU. I don't know yet how this could be an advantage… but back to the topic.
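For reference, here is how far I got with the two APIs. This is an untested sketch based on my reading of the docs (display.newSnapshot, and graphics.newTexture with type "canvas"):

```lua
-- Snapshot: lives on the GPU and behaves like a regular display object
local snapshot = display.newSnapshot( 480, 320 )
snapshot.x, snapshot.y = display.contentCenterX, display.contentCenterY

-- children of snapshot.group are rendered into the snapshot's texture;
-- their coordinates are relative to the snapshot's center
local circle = display.newCircle( 0, 0, 40 )
snapshot.group:insert( circle )
snapshot:invalidate()  -- mark the snapshot for re-rendering on the next frame

-- TextureResource (canvas type): an offscreen texture that can back many objects
local tex = graphics.newTexture( { type = "canvas", width = 480, height = 320 } )
tex:draw( display.newCircle( 0, 0, 40 ) )  -- queue an object for rendering into the texture
tex:invalidate()                           -- render everything queued so far

-- the texture can then be used like a regular image file
local img = display.newImageRect( tex.filename, tex.baseDir, 480, 320 )
img.x, img.y = display.contentCenterX, display.contentCenterY
```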

I am looking for the most efficient way to solve this problem.

The only approaches I can think of are the following:

  1. Somehow rendering not the illuminated part, but the darkened part on top of the light source PNG. I guess I wouldn't even need frame buffers for that (see my sketch right after this list).

  2. In another tutorial, someone iterated over all the pixels of the screen to detect the ones that are not black, and then rendered the PNG to those pixels. I guess display.colorSample could help here (see the second sketch below)? How would frame buffers come into play here?
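Thinking about it more, a snapshot might actually be exactly the tool for approach 1: render the darkness into a snapshot and let the light PNG punch a hole into it with a Porter-Duff blend mode. An untested sketch of what I mean ("light.png" is a placeholder name for my asset, and I'm assuming the docs are right that "dstOut" works inside snapshots):

```lua
-- a full-screen snapshot holding the "darkness"
local darkness = display.newSnapshot( display.actualContentWidth, display.actualContentHeight )
darkness.x, darkness.y = display.contentCenterX, display.contentCenterY

-- black layer covering everything
local black = display.newRect( 0, 0, display.actualContentWidth, display.actualContentHeight )
black:setFillColor( 0, 0, 0, 0.9 )
darkness.group:insert( black )

-- the light PNG erases the darkness where it is drawn
local hole = display.newImage( "light.png" )  -- placeholder asset name
hole.blendMode = "dstOut"                     -- Porter-Duff mode, applied inside the snapshot
darkness.group:insert( hole )

-- follow the mouse and re-render the snapshot
Runtime:addEventListener( "mouse", function( event )
    -- snapshot group coordinates have their origin at the snapshot's center
    hole.x = event.x - darkness.x
    hole.y = event.y - darkness.y
    darkness:invalidate()
end )
```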
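And for approach 2, this is how I understand display.colorSample: it samples a single pixel asynchronously and reports the result to a listener. Looping that over every screen pixel each frame looks far too slow to me, which is why I doubt this approach:

```lua
-- sample one pixel; the result arrives asynchronously in the listener event
local function onColorSample( event )
    print( "colour at sampled pixel:", event.r, event.g, event.b )
end

display.colorSample( 100, 200, onColorSample )  -- coordinates of the pixel to sample
```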

In my final version, I would like to blend light sources with different colours as well.
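My current guess for the coloured lights is to tint one white light PNG per source and blend the copies additively; a minimal sketch (again with a placeholder asset name):

```lua
-- two tinted copies of the same white light sprite, blended additively
local red = display.newImage( "light.png" )
red:setFillColor( 1, 0.3, 0.3 )
red.blendMode = "add"  -- overlapping lights sum their colours

local blue = display.newImage( "light.png" )
blue:setFillColor( 0.3, 0.3, 1 )
blue.blendMode = "add"
blue.x = red.x + 60    -- overlap the two lights to see the blending
```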

I would not only be happy about solutions to my problem, but also about resources to read up on frame buffers and their most common use cases. Thanks a lot!