Post-processing examples

Hi.

During some discussion a week or so ago I mentioned that I had a post-processing setup in one of my projects, but that it was rather tangled up in that project's code. Since last night I've managed to put together a couple of examples, so I figure I'll post them here too.

Basically, you assign whatever groups / objects you want and they get captured to a screen-size canvas, which you then slap onto a rect. So far it’s like a snapshot. But then above that, you also have objects whose shaders read from that same canvas, using their on-screen position to access it. This lets you incorporate the screen contents into generic effects.
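In rough outline, the setup is something like this (just a sketch, not the actual sample code; the effect name and "angel.png" are placeholders):

local canvas = graphics.newTexture{
    type = "canvas",
    width = display.contentWidth,
    height = display.contentHeight
}

-- Whatever you want captured (placeholder content; "angel.png" is hypothetical).
local sceneGroup = display.newGroup()
local angel = display.newImageRect( sceneGroup, "angel.png", 128, 256 )
angel.x, angel.y = display.contentCenterX, display.contentCenterY

-- Capture the group to the canvas...
canvas:draw( sceneGroup )

-- ...and slap the result onto a screen-size rect.
local screenRect = display.newRect( display.contentCenterX, display.contentCenterY,
    display.contentWidth, display.contentHeight )
screenRect.fill = { type = "image", filename = canvas.filename, baseDir = canvas.baseDir }

-- Objects above that can read from the same canvas in their own effects.
local haze = display.newRect( 120, 200, 64, 64 )
haze.fill = { type = "image", filename = canvas.filename, baseDir = canvas.baseDir }
haze.fill.effect = "filter.custom.heatHaze" -- assumes a custom effect defined via graphics.defineEffect

-- Refresh the capture every frame.
Runtime:addEventListener( "enterFrame", function()
    canvas:invalidate()
end )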

In the first example I move a couple objects around, hoping to make it more obvious that the shader isn’t just using the angel image directly. I didn’t do this in the second example, but the same is true there.

The first example creates a bunch of little pieces of dummy geometry and rotates them to add some additional distortion (the IQ noise algorithm incorporates the current texture coordinate, so this shakes it up a bit).

The second example uses Corona’s built-in filters. Unfortunately these are a bit limited here, since you have to manually compute the texture coordinates (a leaky implementation detail from multi-pass shaders), so without a bit of extra work the object needs to be axis-aligned. But under that constraint it works.

I have a (currently vague) mesh-based idea in mind too, so that might follow.

There are some details with removing the canvas + rect, e.g. if you need to unload a scene. Maybe I’ll add an example with that as well.

related?: imagine if there were a “display.fill.effect” that functioned as you’d imagine it would - accessing the last stage of the pipeline, say for doing full-screen effects, e.g. TV scanlines / pincushion, etc. If there were such a thing, then its implementation might? “suggest” to the engineers a corresponding way of “supplying” post-rendered pixels from other containers as inputs to other filters as well. (?)

aside:  have you tried this on mobile, for performance?

+1 for display.fill.effect!

Surely that could be applied to the frame buffer? i.e. after all pipeline processing had finished.

Not that I remotely understand the complexities of doing this.

A quick read of http://www.songho.ca/opengl/gl_pbo.html shows an example of reading the frame buffer, applying a filter (in this case +brightness) and then using that to display.

@davebollinger Alas, no chance to test on mobile yet. My ancient Android tablet seems to be outright dying; I’ve been leery of doing much with it. There is an old iPad or two at my office that I should try at some point, though it’s been hard to sneak the Mac away from others lately. Any device with a low fill rate will choke on it, I’m sure.  :smiley:

I can see the “display.fill.effect” thing working, as this is basically what Unity’s “_GrabTexture” is. That also lets you assign a name, so you can capture the frame mid-stream but not necessarily pay for it per object, e.g. in my first sample only the first heat haze particle would trigger the cost, not twenty of them.  :slight_smile:

Sphere Game Studios actually mentioned how it would be handy to toggle post-processing on and off. Given the generality of all this, I’m not sure how much more can be done than a dedicated event listener or two (where you swap the target’s effect to or from a fallback), but I’m open to ideas.
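Something along these lines is about all I have in mind so far (a sketch; the event name and effect names are made up):

-- screenRect is the full-screen rect showing the capture (as in the earlier sketch).
local postFXEnabled = true

local function togglePostFX( event )
    postFXEnabled = not postFXEnabled
    if postFXEnabled then
        screenRect.fill.effect = "filter.custom.heatHaze" -- the "real" effect (placeholder name)
    else
        screenRect.fill.effect = nil -- fall back to drawing the capture untouched
    end
end

Runtime:addEventListener( "togglePostFX", togglePostFX )

-- Elsewhere, to toggle: Runtime:dispatchEvent{ name = "togglePostFX" }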

@Sphere Game Studios Great site, which I’ve often consulted. Probably the main concern is that the “*ARB” stuff is non-standard OpenGL, so in theory might not be universally available. They do turn up on my fairly run-of-the-mill laptop, along with a huge number of other extensions:

local extensions = system.getInfo("GL_EXTENSIONS")

-- Split the list up so it doesn't print as one huge blob.
for name in extensions:gmatch("[_%w]+") do
  print(name)
end

(I don’t know what the names for the mobile equivalents are.) Anyhow, even without hardware assistance it could fall back to something like what I’ve done.

OK, on desktop I get shedloads of GL_ARB_* extensions, but on Android 7 I get none.

But surely if Unity can do it, Corona can? OpenGL is the same for both engines?

Sure, the machinery for snapshots / canvases is already there; these details we’ve been discussing would just be plumbing. An extension like Songho is using simply provides a more direct route, typically with efficiency benefits, but you can fall back if such an approach isn’t available.

The FPS increase they show in some of their posts using FBOs is massive.

display.fill.effect would be really useful.

I’m doing post-processing effects through a screen-size snapshot, which works, but I’ve had to do a lot of extra work to get coordinates to align, get touch listeners to fire when objects are embedded in the snapshot’s group, etc.
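For reference, the core of that setup looks roughly like this (a sketch; the coordinate offsets are exactly the alignment work I mean, and the filter is just an example):

local snapshot = display.newSnapshot( display.contentWidth, display.contentHeight )
snapshot.x, snapshot.y = display.contentCenterX, display.contentCenterY

-- The snapshot group's origin is the snapshot's center, so the scene has to be
-- offset for its coordinates to line up with the screen.
local scene = display.newGroup()
scene.x, scene.y = -display.contentCenterX, -display.contentCenterY
snapshot.group:insert( scene )

-- Any effect, built-in or custom, applied to the whole capture.
snapshot.fill.effect = "filter.blurGaussian"

-- Re-render the snapshot every frame.
Runtime:addEventListener( "enterFrame", function()
    snapshot:invalidate()
end )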

I also have an outstanding bug/issue where some objects in nested snapshots disappear (a few posts down in the forum).

I did my own demo of full-screen filtering and forgot to show it off.

I’m to-ing and fro-ing on what the minimum specs are - this *can* run at 60 fps on a Moto G3, but it doesn’t leave time for anything else (and if there’s too much in the background it clearly can stutter). It should be good to go on better hardware, but then again it should also be noted that I’m doing quite a heavy case - multiple layers of parallax along with reading from other screen UV coordinates (generally regarded as a not-fast operation):

APK: https://www.dropbox.com/s/nj14s89te1gp6tj/Filter.apk?dl=0

All I’m doing is drawing into two canvas textures and using them as the layers for the filter, which is applied to a full-screen rect display object.
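In code terms that amounts to roughly the following (a sketch; the composite effect name is a placeholder for one registered via graphics.defineEffect):

local w, h = display.contentWidth, display.contentHeight
local layer1 = graphics.newTexture{ type = "canvas", width = w, height = h }
local layer2 = graphics.newTexture{ type = "canvas", width = w, height = h }

-- (Draw the parallax layers into layer1 / layer2 here, via layer1:draw() etc.)

-- Full-screen rect whose composite fill combines the two canvases.
local rect = display.newRect( display.contentCenterX, display.contentCenterY, w, h )
rect.fill = {
    type = "composite",
    paint1 = { type = "image", filename = layer1.filename, baseDir = layer1.baseDir },
    paint2 = { type = "image", filename = layer2.filename, baseDir = layer2.baseDir }
}
rect.fill.effect = "composite.custom.fullScreenFilter" -- placeholder for the actual effect

-- Keep both canvases up to date each frame.
Runtime:addEventListener( "enterFrame", function()
    layer1:invalidate()
    layer2:invalidate()
end )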

For the right application, this sort of stuff is rather cool, but creating the offset map itself is a complicated enough topic (well, not directly, but making it in a way you can dynamically control / merge effects is).
