A few questions

Hi!

(First off, in case it’s useful to anybody, I saw that this just went up: Book of Shaders)

I’ve read over the “Custom Shader Effects Guide” and now I’m going to let it ferment for a while. In the meantime, a few questions.

  • How do masks come into play?

  • Can an effect be undefined? I’m thinking here mainly in terms of tool-generated fare, where you might find a lot of GLES resources piling up if you don’t evict some.

  • Might array-type uniforms, e.g. “vec4[32]” ever be introduced? (With some limit that could be queried, of course.) These would be very handy for porting some old code…

  • Somewhat related to that last one–particularly if the answer is “no”–what about vertex texture fetch? (Or is it already there?)

  • Is the vertex data / uniform data one-or-the-other a final thing? This ties in somewhat to the array-type uniforms question, particularly in certain vertex shaders.

  • Is #include supported?

That’s what I’ve wondered so far. Any answers are much appreciated!  :slight_smile:

- How do masks come into play?

 

Corona’s full shader program applies any mask on your behalf. The mask gets applied after your fragment kernel executes.
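Conceptually, the tail of the generated fragment program looks something like this (a sketch only; the sampler and varying names are made up, not part of the API):

    P_COLOR vec4 color = FragmentKernel( texCoord );

    // Made-up names: the wrapper samples the mask and modulates the
    // kernel's result before it is written out.
    color *= texture2D( u_MaskSampler, maskCoord ).r;

    gl_FragColor = color;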

 

- Can an effect be undefined? I’m thinking here mainly in terms of tool-generated fare, where you might find a lot of GLES resources piling up if you don’t evict some.

 

Right now we aren’t doing any shader program eviction. That’s something we’re considering for the future. Generally, our shader programs are relatively small, so you can have quite a number of them created.

 

Also, keep in mind that GPU resources are only consumed when they are used by the scene. So even if you define an effect, you have to apply it to a display object that renders to the screen before any GPU resources are committed for it.

 

- Might array-type uniforms, e.g. “vec4[32]” ever be introduced? (With some limit that could be queried, of course.) These would be very handy for porting some old code…

 

Not currently. Uniforms can be: float, vec2, vec3, vec4, mat3, or mat4.

 

So I suppose a mat4 could be a poor-man’s substitute for a vec4[4].
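For instance (a sketch; “u_Params” is a made-up name, and note that GLSL ES only guarantees dynamic indexing of uniforms in the vertex shader):

    uniform mat4 u_Params; // made-up name: four vec4s packed as columns

    vec4 GetParam( int i )
    {
        return u_Params[i]; // mat4 columns index like a vec4[4]
    }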

 

- Somewhat related to that last one–particularly if the answer is “no”–what about vertex texture fetch? (Or is it already there?)

 

Not currently. Apparently Apple added back support in iOS 7, but it seems like a no-go on Android (lots of devices don’t support it).

 

What sort of vertex effects are you trying to achieve?

 

- Is the vertex data / uniform data one-or-the-other a final thing? This ties in somewhat to the array-type uniforms question, particularly in certain vertex shaders.

 

Yes. There’s a potential batching performance benefit to having all effect parameters in vertex data; if you have to go to uniforms, you break the batch. Allowing simultaneous vertex data and uniform data would add implementation complexity, so we chose not to do it.

 

- Is #include supported?

 

Not according to the spec (https://www.khronos.org/registry/gles/specs/2.0/GLSL_ES_Specification_1.0.17.pdf)

Thanks! I’m going to throw a few more at you.  :slight_smile:

Corona’s full shader program applies any mask on your behalf.

Okay, cool. I assume it’s doing the same beforehand to the vertices coming through the vertex shader? If so, is it possible to recover the pre-transformed position and the matrix, on the GLSL side?

Actually, some of those might be handy as read-only properties on the Lua side, as well, which could later be fed back in as uniforms. I’m thinking, say, of certain motion blur effects that use both the previous and current transform.

What sort of vertex effects are you trying to achieve?

Uff, let’s see if I can even recall everything. :smiley:

First off, the original use case:

[Image: Icebreakers screenshot, 2011-04-27]

This was a Tron-style effect, but with curvature, thickness, height, etc. These were Hermite curves, as far as I can recall. Since interpolating the curve (and attributes) requires neighbor information in the vertex shader, most of this state got put (crammed!) into a matrix4 array. Then a stock mesh (in Corona, I think this would be a reasonably tessellated square polygon) gets instanced, the previous, current, and next states are looked up via a per-instance index, and each of the instance’s vertices is transformed.

That said, in 2D I imagine the bandwidth concerns will be far more sane.  :smiley: In which case, all the state could be loaded into three matrix4 uniforms and the instance index becomes a non-issue.
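Roughly, the shader-side port I have in mind would look like this (a sketch only; the uniform names and mat4 column packing are mine, and Corona doesn’t expose such uniforms today):

    uniform mat4 u_Prev, u_Curr, u_Next; // hypothetical names

    // Standard Hermite basis over one segment, t in [0, 1]; columns are
    // p0, m0, p1, m1, with positions / tangents in the xy components.
    vec2 EvalHermite( mat4 state, float t )
    {
        float t2 = t * t, t3 = t2 * t;

        return (2. * t3 - 3. * t2 + 1.) * state[0].xy + // p0
               (t3 - 2. * t2 + t) * state[1].xy + // m0
               (3. * t2 - 2. * t3) * state[2].xy + // p1
               (t3 - t2) * state[3].xy; // m1
    }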

A couple more…

  • Does this defineEffect replace the multi-pass one or is that still good to go too? Neither is under the graphics.* list in the docs.

  • This next one is a “would be nice”…

Would it be possible to provide setter functions for kernel parameters? Along the lines of, say:

    kernel.vertexData = {
        {
            name = "_brightness", -- Name of data in shader
            default = 0, min = 0, max = 1,
            index = 0
        },

        setters = {
            -- The display object is provided to store / look up state, etc.
            brightness = function(effect, value, object)
                effect._brightness = value * 1.01
            end
        }
    }

    -- ...
    fill.effect.brightness = .3 -- set brightness to .3 * 1.01
    fill.effect._brightness = .3 -- set directly

I can think of a few motivations.

The first is just as convenience. For instance, the shader itself might want a pre-computed cosine or sine, but it would be more intuitive to provide an “angle” parameter in the documented interface.

Building on that same example, it would allow setting a couple fields at once, say:

    kernel.vertexData = {
        {
            name = "_cosa",
            default = 1, min = -1, max = 1,
            index = 0
        },
        {
            name = "_sina",
            default = 0, min = -1, max = 1,
            index = 1
        },

        setters = {
            angle = function(effect, value, object)
                value = math.rad(value)
                effect._cosa = math.cos(value)
                effect._sina = math.sin(value)
            end
        }
    }

    -- ...
    effect.angle = 30 -- set both _cosa and _sina

Finally, it can hide away tricks used to cram more data together when parameters are at a premium:

    kernel.vertexData = {
        {
            name = "_xy",
            default = 0, min = 0, max = 1024,
            index = 0
        },

        -- mediump guarantees integers at least in range [-2^10, +2^10],
        -- floats slightly wider; two values in [0, 1) can be packed
        -- together, then later extracted in the shader
        setters = {
            x = function(effect, value, object)
                local y = object._y or 0

                object._x = value -- keep a local copy

                effect._xy = math.floor(value * 1024) + y
            end,

            y = function(effect, value, object)
                local x = object._x or 0

                object._y = value -- keep a local copy

                effect._xy = math.floor(x * 1024) + value
            end
        }
    }

    -- ...
    effect.x = .4 -- updates the x part, then repacks as _xy
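(Shader-side, the unpack is then just a floor and a fract; a sketch, assuming the packed value arrives in CoronaVertexUserData.x:)

    P_DEFAULT float xy = CoronaVertexUserData.x;

    P_DEFAULT float x = floor( xy ) / 1024.; // integer part carries x
    P_DEFAULT float y = fract( xy ); // fractional part carries y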

Sorry for the novel!

For the vertex kernel, the position you get is in the Corona coordinate system (i.e. “content” coordinates). The MVP matrix is then applied to the position the kernel returns, and the result is fed to gl_Position.
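In other words, conceptually (a sketch, not our literal generated source; the attribute and uniform names are made up):

    attribute vec2 a_Position; // content-space position (made-up name)
    uniform mat4 u_ViewProjection; // MVP (made-up name)

    void main()
    {
        P_POSITION vec2 position = VertexKernel( a_Position );

        gl_Position = u_ViewProjection * vec4( position, 0.0, 1.0 );
    }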

The API is overloaded in the sense that there are multiple ways to define an effect. One is via GLSL code. The other is multi-pass; we haven’t been emphasizing multi-pass as much because we’ve seen varying performance due to devices not handling render-to-texture very well.

The setter is a nice idea. It’d be nice if there were a mechanism in the Lua language so we get this sort of thing for free.

Thanks again! I’ll get to coding soon.  :slight_smile:

I realized, after having posted, that the setter style I used isn’t so nice for transitions. Originally, I just had the new value being returned, before realizing one could assign multiple values by doing it a different way. But I suppose that could still be done (with the other “internal” parameters) while returning the interpolation-friendly “public” value.

Anyhow, a few questions about multi-pass shaders have occurred to me.

  • Do the new effects inherit the parameters of the passes, or are they stuck with the defaults? And if the former, what happens when there’s a name conflict? In the tutorial’s example, for instance, the two blur passes’ parameters share the same names.

  • Is it safe to assume that multi-pass breaks batching? I’m thinking, for instance, that if I make shaders meant exclusively to be used as passes, I can just avoid the hassle of trying to cram everything into four floats and go with the more generous allotment of uniforms?

  • When an input comes from a previous pass, can it be treated like a makeshift snapshot, e.g. to play with pixels from the background?

  • If so, could the first pass be spoofed, say by providing a vertex shader that just sent all vertices way outside the content: 

    P_POSITION vec2 VertexKernel( P_POSITION vec2 position )
    {
        position.x = -10000.0;

        return position;
    }

and still get that capture (without wasting as much fill)? Or is there logic to try to detect “degenerates” and early-out? (Assuming there’s even a reasonable / efficient way to do that test!)

(I can obviously try these out. It would just be nice to have them spelled out, if possible.)

  • On the earlier subject of uniforms and batching, was a UniformsGroup or AttributesGroup concept ever discussed, along the lines of the old ImageGroup? With some constraints, it seems like that could work. (I totally overthought this idea while out running tonight. :stuck_out_tongue: ) Maybe too much hassle, though.

That’s all for now!

Yes, multi-pass shaders break batching, as would any change in shader program.

Also, they involve intermediate render-to-texture passes that get fed to the next shader in the graph. So if you want solid performance on a cross-platform basis, I wouldn’t rely on them heavily.

Good to know, thanks.

I did have a go with these, and in the “makeshift snapshot” case it didn’t look like they’ll do what I was hoping (pick up the destination and feed it into the second pass), or if they do, it’s still probably a broken approach without being able to switch blend modes in the middle. (If I may cite your competitors for a moment, I’m looking for something like Unity’s “GrabPass”.) Anyhow, I now join the chorus along with rakoonic and some others for a snapshot paint.  :slight_smile: (That said, putting an effect on a snapshot worked like a charm!)

I realized after posting that parameters were explained further down in the multi-pass tutorial. Whoops!

One issue that did come up: is it a no-op to redefine a graphics effect with the same (full) kernel name? It didn’t seem to cause any problems, but then I haven’t tried switching the kernel out underneath it between calls (although that’s more of a “your own fault” error). The practical situation where this arises is when a multi-pass effect references a non-builtin shader that can also be used on its own, so it’s not necessarily clear whether the pass in question has already been loaded. Now, a polite user is probably going to call graphics.defineEffect() immediately after require()'ing the kernel, in which case package.loaded could be checked first, but we all have our bad days.  :slight_smile:

Yes, it’s a no-op to redefine a graphics effect. It’s definitely a good idea to have some simple prefix to namespace your kernels.

Okay, great. In that case it sounds not unreasonable to define the effect inside the kernel itself, while it’s being require()'d.
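Something like this, say (a sketch; the grayscale effect and the “myapp” group are just stand-ins):

    -- kernels/grayscale.lua
    local kernel = { category = "filter", group = "myapp", name = "grayscale" }

    kernel.fragment = [[
        P_COLOR vec4 FragmentKernel( P_UV vec2 texCoord )
        {
            P_COLOR vec4 color = texture2D( CoronaSampler0, texCoord );
            P_COLOR float gray = dot( color.rgb, vec3( .299, .587, .114 ) );

            return CoronaColorScale( vec4( vec3( gray ), color.a ) );
        }
    ]]

    -- Redefinition with the same full name being a no-op makes this safe
    -- no matter how many times the module gets loaded:
    graphics.defineEffect( kernel )

    return kernel

Then fill.effect = "filter.myapp.grayscale" works whether or not anything else has already pulled the module in.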

Are there any snippets handy that demonstrate uniform userdata? I was all set to play with that and don’t even know how to get one started.  :D Not full-on documentation, really; even a single definition and usage would do.

Lastly, on one of my machines, certain shaders (specifically, some of my time-dependent ones) don’t work. Either the shader doesn’t even seem to be applied, or it gets at least halfway (the correct pixels are being discarded, for example) but the time doesn’t seem to be updating, plus the texturing is all off. And yet, the device build works fine, as does the simulator on another machine. No errors are shown in the console.

This machine was working until just a couple of days back, and in fact most shaders (including a different time-dependent one) still work fine. The only peculiar circumstances I can come up with are (1) doing a Git pull of the stuff I wrote on the other machine (though I do this frequently, without issues), (2) a small Windows Update, and (3) installing the Android SDK a while beforehand. Rebooting, trying with a copy of the project, uninstalling and reinstalling Corona (both the same version and a newer daily build), and finally making a duplicate effect with a different kernel name have all turned up the same result.  :frowning:

I’d file a bug, but I’m at a loss here… Is there any sort of caching mechanism being used by the shader compiler, or something along those lines? I’m stumped.

No docs on uniform data yet. We want to clean up a few things there before we expose it.

That’s weird about your machine. It sounds like the Windows Update is the culprit; there can be minor GL driver differences that break things in subtle ways.

No luck on the machine-specific issue, alas.  :frowning:

In one of these topics I mentioned something about a GetUV() function, then soon discovered CoronaTexCoord. However, there’s not much information on it.

I was expecting that, with a rectangular display object, this would yield [0, 0], [1, 0], [0, 1], [1, 1], as per the uv parameter in the fragment kernel. However, this doesn’t seem to accord with what I’m seeing when I modify the x1, x2, et al. of the object’s rect path. It almost seemed the values might be in the [-1, +1] range, but THAT wasn’t right either.  :smiley:

Anyhow, my fragment uv values seem to be ever so slightly off, perhaps related to the above. Is there some perspective correction going on, or something to that effect, when the path is modified?

This is in the context of drawing arbitrary quads with user-provided uv values per corner.

If I pass in some reference content positions to the shader and forgo the fragment shader’s uv altogether, I can texture such quads flawlessly, with the huge proviso that the object isn’t rotated. I’ve got barycentric-coordinate and bilinear-filtering variants to circumvent that, but in those I’m running into the above-mentioned uv weirdness. Any insight is appreciated!

In regard to some earlier things, I’ve now got pretty decent setter- and #include-ish features working.
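For the curious, the include part is just Lua-side concatenation before the define; a minimal sketch, assuming a hypothetical snippets.common module that returns a string of shared GLSL:

    local common = require("snippets.common") -- hypothetical module

    local kernel = { category = "filter", group = "myapp", name = "with_common" }

    kernel.fragment = common .. [[
        P_COLOR vec4 FragmentKernel( P_UV vec2 texCoord )
        {
            // ...free to call helpers defined in the common snippet...
            return CoronaColorScale( texture2D( CoronaSampler0, texCoord ) );
        }
    ]]

    graphics.defineEffect( kernel )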

The UVs are standard if you have a true perpendicular rectangle.

If you modify the corner offsets to distort the rectangle into an arbitrary quad, then the UVs in the fragment kernel change such that the texture is sampled with perspective correction.

Okay, great. I guess that explains why “distorted” rectangles looked okay.

Does this mean CoronaTexCoord is pre-multiplied by “w”? And can one then recover the “1 / w”, say in the fragment shader? (I assume it’s not automatically applied to user-defined varyings.) I guess if it is just a scale, I could try to account for it via the components that would typically be 1…

To follow up on that, for quads it seems like I can get away with min(CoronaTexCoord, 1.), working on the assumption that each uv component will be either 0 or w, with w >= 1. This will fall apart with more general polygons, of course.  :slight_smile:
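In shader terms, that amounts to (sketch):

    // Assumption: the quad's "true" uv components are 0 or 1, so the
    // perspective-scaled values are 0 or w (w >= 1); clamping recovers them.
    P_UV vec2 uv = min( CoronaTexCoord, 1. );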

I’ve been holding off on feature requests, but now that this stuff has been opened up, I’m going to try writing one up over on the Feature Requests / Feedback page.

I don’t strictly speaking need the following info, but it would make for a tighter proposal…

Is uniform userdata duplicated between the vertex and fragment namespaces, or are both available for use?

Does Corona switch “modes” when writing a vertex userdata-based effect versus uniform userdata-based (versus userdata-free) ones? I assume vertex userdata must be an attribute, which would be dead weight for a uniform userdata effect. But maybe it’s easier on Corona’s end to not restructure the renderer logic and just ignore it? (I ask because it might suggest feature implementation hints / techniques, not to try to hijack undefined behavior.  :smiley: )

Anyhow, if I don’t hear anything, I’ll just have a lot of “If it works like this…” in the proposal.  :slight_smile:
