I admit I don’t follow the blog discussion on new API additions, and I only just found out about graphics.newTexture.
I’m a bit confused about the actual difference between rendering to a snapshot canvas and rendering to a texture resource object. Both seem capable of achieving the same thing.
If I want a static background that I can render objects to in realtime, I would add an image to a snapshot canvas, with the canvas set to discard objects after they’re drawn to save memory. In theory you can then render endlessly to the canvas without incurring additional memory usage.
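For reference, this is roughly what I mean by the snapshot approach (just a sketch; the sizes and image name are made up):

```lua
-- Snapshot canvas in "discard" mode: drawn objects are released
-- after being composited into the snapshot's cached image.
local snapshot = display.newSnapshot( 320, 480 )
snapshot.x, snapshot.y = display.contentCenterX, display.contentCenterY
snapshot.canvasMode = "discard"

local background = display.newImage( "background.png" )
snapshot.canvas:insert( background )
snapshot:invalidate( "canvas" )  -- bake into the snapshot, then discard
```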
Creating a canvas texture resource seems to involve the same steps: add an image to it, display it via a rect (say, the same size as the snapshot mentioned above), then draw objects into it. I’m not sure whether the objects rendered into it are discarded or retained.
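And the texture resource version, as far as I understand it (again a sketch with made-up sizes and names; whether the drawn objects get released afterwards is exactly what I’m asking):

```lua
-- Canvas texture resource displayed through a rect of matching size.
local tex = graphics.newTexture( { type = "canvas", width = 320, height = 480 } )

local rect = display.newImageRect( tex.filename, tex.baseDir, 320, 480 )
rect.x, rect.y = display.contentCenterX, display.contentCenterY

local background = display.newImage( "background.png" )
tex:draw( background )
tex:invalidate()  -- render the queued objects into the texture
```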
So, as mentioned, it’s kind of confusing how a texture resource differs from a snapshot object. Or am I missing something?