Composite effects + using 2 snapshots as CoronaSampler0 and CoronaSampler1

I’m still not fully up on why composite filters need the paint construct to specify the two layers, as one would assume you’d want the first image/layer to be whatever you applied the effect to (i.e. what happens with a normal filter, just with a second layer added).

I haven’t managed to get a test composite filter working. Can anyone show me one? It doesn’t matter how simple it is; I just need the shader code and how to apply it to something.

And then, do ‘paint’ layers allow for anything other than an image with a filename as a valid parameter? I’m desperate to have 2 snapshots being used for a composite because the second snapshot could then be used as a dynamic mask (not just for alpha channels - I have all kinds of ideas for this).

I can’t even see how to use a snapshot for the first layer of a composite paint, so it is actually less useful than a straight filter, which at least can work on dynamic images.

Halp!

Barry

I’m about to fall asleep, so apologies for any incoherence…

In case it will do any good, I’ll join the chorus for snapshot paints. So far as I can tell, they aren’t there. :(  Those would almost automatically give image sheets (I know you’ve mentioned this) and, I think, pretty quickly open up depth buffers, “data” textures, etc. As is, those are all rather useless when they aren’t paired off.

If it’s just unfamiliarity (and not weird bugs) you’re up against, I think I could whip up a composite, once awake and alert. (I still need to check out your project, too!)

As of the most recent daily builds, it looks like image sheets are also viable paints.

I got a composite working. Frankly, the naming convention is what threw me off; I thought they were set up in some special way, but apart from having to specify the .fill of the object you attach it to, nothing changes.
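For anyone following along, here’s roughly the minimal shape of what worked for me. The group/name and the image filenames are just placeholders; the kernel simply blends the two paints 50/50:

[lua]-- A minimal composite effect: CoronaSampler0 is paint1, CoronaSampler1 is paint2.
local kernel = {
    language = "glsl",
    category = "composite",
    group    = "barry",
    name     = "blend",
    fragment = [[
        P_COLOR vec4 FragmentKernel( P_UV vec2 texCoord )
        {
            P_COLOR vec4 a = texture2D( CoronaSampler0, texCoord );
            P_COLOR vec4 b = texture2D( CoronaSampler1, texCoord );

            return CoronaColorScale( mix( a, b, 0.5 ) );
        }
    ]]
}

graphics.defineEffect( kernel )

local rect = display.newRect( display.contentCenterX, display.contentCenterY, 256, 256 )

-- The composite paint supplies both samplers; the effect name follows
-- "composite.<group>.<name>".
rect.fill = {
    type   = "composite",
    paint1 = { type = "image", filename = "image1.png" },
    paint2 = { type = "image", filename = "image2.png" },
}
rect.fill.effect = "composite.barry.blend"[/lua]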

It really seems bizarre that composites are deliberately crippled compared to filters, and more so that they don’t support proper multitexturing with snapshots (which would give us access to so much dynamic stuff it isn’t true).

I mean, a filter can be applied to a snapshot (i.e. CoronaSampler0 is a snapshot), so we know this much is possible (except… not in a composite filter, errr?). So all we need is a way of setting CoronaSampler1 to a snapshot as well. To be honest, it would make more sense *not* to treat this as a composite at all, just as a specific way of setting sampler 1. Something like this would be super easy to use (with the CoronaSampler1 property named whatever you think appropriate):

[lua]local kernel = require( "gfx.shaders.filter_barry_magic" )

graphics.defineEffect( kernel )

-- Proposed API, not anything that exists today: assign a display object
-- straight to the second sampler.
mySnapshot1.fill.CoronaSampler1 = mySnapshot2
mySnapshot1.fill.effect         = "filter.barry.magic"[/lua]

I can think of so many awesome effects this would make possible, and that’s without even thinking about dynamic alphas being possible…

Of course the alternative or ‘current’ way of doing this would be something like:

[lua]local paint = {
    type   = "composite",

    -- "snapshot" is likewise a proposed paint type; it doesn't exist today.
    paint1 = { type = "snapshot", snapshot = mySnapshot1 },
    paint2 = { type = "snapshot", snapshot = mySnapshot2 },
}[/lua]

where it probably wouldn’t hurt to also allow either paint1 or paint2 to have a type = "baseObject", specifying that the channel uses the display object the composite filter is attached to as that layer. But really, I feel the whole composite thing is a red herring, because it has been tied directly into a fill value, which isn’t actually essential. I’d prefer just a specific way of adding a display object as CoronaSampler1.

Newly filled with coffee, I feel ready to offer misguided opinions once more.

What sort of things are you dynamically masking? I do have an interest in that sort of topic. (I asked a while back about doing “mask sheets” but didn’t get any info; now that we have more control this may be worth revisiting.)

To expound a bit on “data” textures… These would be textures, yes, but the numbers being read out wouldn’t be interpreted as color. Snapshots, usually with “nearest” filtering on, could be used, where on the game side you just put, for example, 1-pixel rects with information encoded as color, with “position” mapped to their lookup entry. In the “Share Your Shaders” thread I mentioned sound samples (where the texture would probably be a ring buffer with a current position) as an example. Other things I have in mind are octree textures (or quadtree, perhaps) and HistoPyramids (also here and here).
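As a rough sketch of the game side (the names and the red-channel packing are just for illustration):

[lua]-- Sketch of a "data" texture: N one-pixel rects in a snapshot, values
-- packed into the red channel. A shader would read entry i by sampling
-- at u = (i + 0.5) / N, relying on "nearest" filtering.
display.setDefault( "magTextureFilter", "nearest" )
display.setDefault( "minTextureFilter", "nearest" )

local N = 64
local dataTex = display.newSnapshot( N, 1 )

for i = 1, N do
    -- Snapshot groups are centered, so offset each rect accordingly.
    local pixel = display.newRect( dataTex.group, i - ( N + 1 ) / 2, 0, 1, 1 )

    pixel:setFillColor( math.random(), 0, 0 ) -- the "data", packed into red
end

dataTex:invalidate()[/lua]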

On to speculation!

I suspect snapshots are running up against some practical considerations, either actual hard limits (at least with portability in mind) or awkward situations where some policy would need to be established, but hasn’t been for any number of reasons.

An object with a two-snapshot composite paint, perhaps being added to a snapshot itself, is already using lots of render textures, which may be a scarce resource.

Having played with multi-pass shaders, it looks like they don’t pick up their background (which seems the sanest, when all is said and done), so there’s probably another render texture in there, and I would guess it’s screen-sized. Furthermore, if you have more than two passes, you’re reading from the previous output and writing new values. It stands to reason that these would be different, or you’d be getting feedback effects (maybe OpenGL itself enforces some policy here?). After two passes the textures can probably be ping-ponged. But anyhow, multi-pass is supposed to work with composite paints (the tutorial a while back explicitly mentioned them), so it looks like five render textures need to be available.

That feedback issue from multi-pass rears its head again, though, if you’re reading from and writing to the same snapshot (which would be a perfectly useful case). At some point, a copy needs to be made, especially when multiple objects are reading the snapshot.

Cycles don’t really seem to be a problem, so long as the inputs are understood to be “lagged” snapshots. I can’t think of a sane way to read unlagged ones and maintain Corona’s display hierarchy ordering simultaneously. Some headway could probably be made for post-processing, etc., by allowing extra stages (I always interpreted getCurrentStage() in the “the world’s a stage” sense, but I suppose “stage 1”, “stage 2”, and so on fit), but that opens up another can of worms!  :smiley:

So do fragment shaders actually change the image they are on? I thought this was purely a rendering pass, so it didn’t touch the source, meaning no feedback is really possible. But then again, as fragments work on a pixel-by-pixel basis and we have no way of *writing* to anything other than the current pixel, I’m not sure feedback would be a problem.

But as for rendering passes, well naturally if you do some heavy duty stuff and it runs slow, it is only your fault :slight_smile:

Here’s an example of where 2 snapshots would be cool, using the second snapshot as a ‘mask’ (although we are not explicitly talking about alpha channels; to be honest, all my filters/composites would require a solid snapshot or things can go odd regardless).

I have my 2D platformer and I want water effects. But instead of the filter I currently have, which is simply a horizontal water level (above a certain point it is air, below it water), I want individual tiles to be able to be water or not, so you get pools of water dotted around the level that aren’t connected.

An obvious way to do this is with 2 versions of the tileset and 2 renderings of the level. One is the normal visual representation of the level; the other is a ‘water mask’, where tiles are drawn black by default, but any area within a tile that you want to be (visually) water is drawn in white, so it actually becomes a per-pixel value.

Then, every frame, you draw the level into the 2 snapshots, with the second one matching the first except that it is purely black apart from where the water is. The composite then looks into the second snapshot, and wherever a pixel is white it manipulates the visual layer accordingly, with a rippling distortion.

Essentially the second snapshot would be a glorified dynamic greenscreen (although naturally the colour is irrelevant, and you could simultaneously have several ‘greenscreens’, using the RGB and possibly A channels individually).
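The kernel side of this would be expressible today; it’s only feeding the two snapshots in as paints that’s missing. Something like this is what I have in mind (the ripple math and all the names are made up for illustration):

[lua]local kernel = {
    language = "glsl",
    category = "composite",
    group    = "barry",
    name     = "water",
    fragment = [[
        P_COLOR vec4 FragmentKernel( P_UV vec2 texCoord )
        {
            // Read the mask: white = water, black = not water.
            P_COLOR vec4 mask = texture2D( CoronaSampler1, texCoord );

            // A simple time-based ripple offset...
            P_UV vec2 ripple = vec2( sin( texCoord.y * 40.0 + CoronaTotalTime * 4.0 ) * 0.01, 0.0 );

            // ...applied to the visual layer, scaled by how "white" the mask is here.
            P_COLOR vec4 scene = texture2D( CoronaSampler0, texCoord + ripple * mask.r );

            return CoronaColorScale( scene );
        }
    ]]
}

graphics.defineEffect( kernel )[/lua]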

Twould be awesome!

I still don’t see the problem with just adding the second snapshot as a parameter to a normal filter that lets you access it as CoronaSampler1, but I’m not in the know, so I don’t want to second-guess too much.

As for snapshots being lagged or rendered out of order, I suggested an easy fix for this a long time ago (and several times since): give each snapshot a .renderPriority property, so you can manually choose the order in which they are refreshed within a given draw cycle. This would also let us eliminate problems with nested snapshots, where the further down the hierarchy you go, the more lagged each snapshot becomes.

As far as feedback goes, I’m only referring to when the texture is both an input and output simultaneously. I don’t know if OpenGL does anything to account for this (it seems it would have to reserve double the memory, if so) or just forbids it outright or something else, but even if it worked it seems the change would be pervasive, so other objects still reading it as input would get an unwelcome surprise.  :slight_smile:

Regarding multiple passes, it was less “this might be slow” than it introducing a corner case, within the confines of a composite-paints-compatible API, where correctness would actually break down if the implementation couldn’t bear that many render textures (I forget what the relevant value is; possibly GL_MAX_COLOR_ATTACHMENTS). It may turn out that all reasonable hardware could accommodate it just fine. Or it might suffice to simply warn about these trouble spots.

The render priority is an interesting approach. I guess it would entail a pre-pass over the hierarchy (which in theory the renderer could take advantage of in other ways) and should still preserve in-priority order. If there were snapshot cycles, would that be an “it’s your own fault” situation, or would there be some tie-breaking? (Even just “prefer the one earlier in the list”.)

And finally, I agree that the water would be awesome.

I don’t think it would be hard for Corona to maintain a dedicated list of snapshots and just run through that. If you specified 2 snapshots with the same priority value (I’d recommend an open-ended system, just using integers or summat) and it doesn’t do what you want (e.g. the order is undefined), then you’ve only got yourself to blame for intentionally breaking the system designed to prevent the issue. Most likely, though, ties would just go to whichever was defined first.
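Purely to illustrate the semantics (none of this API exists, and a Lua-side version can only control when snapshots are flagged for refresh; truly honoring the order would have to happen inside the renderer):

[lua]-- Hypothetical take on .renderPriority: a dedicated list of snapshots,
-- invalidated in ascending priority order each frame.
local snapshots = {}

local function register( snapshot, priority )
    snapshot.renderPriority = priority or 0
    snapshot._regIndex = #snapshots + 1 -- tie-break: first-registered wins
    snapshots[#snapshots + 1] = snapshot
end

local function refreshAll()
    -- table.sort isn't stable, so equal priorities need an explicit tie-break.
    table.sort( snapshots, function( a, b )
        if a.renderPriority == b.renderPriority then
            return a._regIndex < b._regIndex
        end

        return a.renderPriority < b.renderPriority
    end )

    for i = 1, #snapshots do
        snapshots[i]:invalidate()
    end
end

Runtime:addEventListener( "enterFrame", refreshAll )[/lua]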

After a little investigation, I have another suspicion, based on the opening paragraph here:

“They [renderbuffers] are optimized for use as render targets, while Textures may not be, and are the logical choice when you do not need to sample (i.e. in a post-pass shader) from the produced image. If you need to resample (such as when reading depth back in a second shader pass), use Textures instead.” (emphasis in original)

This is consistent with being able to apply an effect directly on a snapshot (“Oh look, there’s an effect. Texture it is!”) versus feeding one in as a paint (assuming we still wanted the renderbuffer efficiency whenever possible, there would need to be some ongoing coordination; off-hand, updating a reference count when assigning fills?).

That said, it’s hard to reason about code you’ve never seen. :D For all I know, there may be no real obstacle, but there just hasn’t been enough pressure. It would be great to hear from somebody in the know.

That feature would be really cool! I want it.

Sadly it was the other thread that just got featured in the Corona mailing. Apparently we are invisible here :smiley:
