Access to external values / depth value of coordinates? (Passing extra parameters to the shader)

Is the only way to get values into a shader via the properties? That would likely be alright for something like a full-screen water effect, since I could just give it a Y property that indicates the water height and update it each frame after scrolling, but I’m curious.

Also, in a similar vein, do we have any access to the pseudo-3D coordinates of vertices as Corona computes them when distorting the path values of an image?

Do you remember the Dungeoneer demo I did? I’m thinking of a shader that automatically fades the walls / floors / etc. to a specified colour based on the vertex ‘distance’. I could, if need be, store the Z value in the display object myself, but I’m not sure I’d then be able to read it in.

So you should think in terms of whether you’re talking about per-shader, per-vertex, or per-pixel values.

The effect parameters that get passed in are per-shader (i.e. they are the same across all shader operations). Per-vertex values come in from the incoming geometry (triangles), and the per-pixel values are typically the “varying” variables that are linearly interpolated.

In terms of the 2.5D: Corona does not have a Z, or depth, value. It’s all 2D vertices :slight_smile:

Oh, one thing you could do is store distance information in a 2nd texture. 

A similar trick is played with normals stored per-pixel in a texture, as shown in: https://github.com/mattdesl/lwjgl-basics/wiki/ShaderLesson6

Hi!

First off, this is useful: OpenGL ES 2.0 Reference Card

Assuming you’ve already got a depth, you can find where (or whether) it fits between gl_DepthRange.near and gl_DepthRange.far , and map it onto a 16-bit integer[1].
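Not shader code, but the mapping itself can be sketched; here in Python, with `near`/`far` standing in for gl_DepthRange.near/.far (the clamping and round-to-nearest choices are my own assumptions, not the only option):

```python
def depth_to_u16(z, near, far):
    """Map a depth z in [near, far] onto an integer in [0, 65535].

    near/far play the role of gl_DepthRange.near/.far; the clamp and
    round-to-nearest steps are one reasonable choice among several.
    """
    t = (z - near) / (far - near)   # normalize into [0, 1]
    t = min(max(t, 0.0), 1.0)       # clamp depths outside the range
    return int(t * 65535.0 + 0.5)   # quantize to a 16-bit step

print(depth_to_u16(0.25, 0.0, 1.0))
```

A GLSL version would do the same arithmetic with `clamp()` and `floor()`.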

lowp is exact for each multiple of 1/256 from 0 to 2, inclusive[2]. Presumably this is for use with colors, as the corresponding entry on the reference card suggests. That actually gives you 256 extra values to work with, but I’m guessing that only holds within the shader; whether something is legal as an output depends on whatever frame buffer format is in use, so it’s best to stay in the range [0, 1).

Now that you have a 16-bit integer, you can break it into 8-bit chunks and send those to the red and green channels (don’t scale them!), and to blue too if you have 24-bit. This will involve some combination of mod (), fract (), and multiplies by (1.0 / 256.0), plus (1.0 / 65536.0) for 24-bit. Also keep in mind that you want a multiple of 1/256 and not the inexact 1/255; 1/255 is the usual color-channel coefficient, but this value isn’t a color.
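In Python rather than GLSL, the split might look like this (a shader would get the same effect from mod()/fract() and the 1/256 coefficient):

```python
def pack_u16(v):
    """Break a 16-bit integer into two 8-bit chunks and express each as a
    multiple of 1/256, suitable for the red and green channels."""
    hi, lo = v // 256, v % 256      # 8-bit chunks, unscaled
    return hi / 256.0, lo / 256.0   # each lands in [0, 1)

r, g = pack_u16(0x1234)
```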

Anyhow, that’s the “depth write” shader; once you’ve got your texture, you just reverse the process in another shader (where the texture is an input), forming the color components back into an integer, and that in turn back into your original depth.
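The read side, again sketched in Python; the rounding guards against the small storage error a real texture fetch might introduce:

```python
def unpack_u16(r, g):
    """Reverse of the channel packing: r and g are (near) multiples of
    1/256 in [0, 1); recover the original 16-bit integer."""
    hi = int(r * 256.0 + 0.5)   # round in case storage was slightly inexact
    lo = int(g * 256.0 + 0.5)
    return hi * 256 + lo
```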

On the matter of texture coordinates, this should be easy, at least for triangles and quads, provided Corona exposes something like CoronaGetUV () in the vertex shader[3]. You could then pass in barycentric coordinates for each of your vertices as inputs, use these to map the vertex’s uv to a varying, and then use that as the uv in the fragment shader.
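What the varying interpolation does with those barycentric coordinates can be sketched like this (plain Python, purely illustrative of the weighted sum the GPU performs):

```python
def bary_uv(uvs, bary):
    """Interpolate three per-vertex uv pairs with barycentric coordinates
    (b0, b1, b2), which must sum to 1 -- the same weighted sum applied to
    any varying across a triangle."""
    (u0, v0), (u1, v1), (u2, v2) = uvs
    b0, b1, b2 = bary
    return (b0 * u0 + b1 * u1 + b2 * u2,
            b0 * v0 + b1 * v1 + b2 * v2)

# halfway along the edge between the first two vertices
uv = bary_uv([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], (0.5, 0.5, 0.0))
```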

[1] - 16-bit being as precise as you can assume from a highp value. If you’re brave, with enough research and system.getInfo () calls you could probably surmise better capabilities on your hardware and incorporate them into your shaders, e.g. if 24- or 32-bit is available.

You could get 17 bits by using the sign, too, but it might be a chore. :slight_smile: I’m not sure whether the interval is closed, i.e. whether 2^16 is representable. I suspect yes; if this is IEEE-ish floating point, there will be ever-widening gaps from that point onward, hence the floating-point ranges.

[2] - In this case obviously inclusive, since 2 is also in the integer range.

[3] - This should be trivial, since the attribute obviously must exist for the uv to be fed to the fragment shader, but the names of the background variables aren’t exposed to us. This function would be read-only, though another function to assign the uv varying would have its uses as well.

EDIT: Argh, I realized you don’t really need hardware support if you want 24- or 32-bit, since you could just use two variables (a “high word” and a “low word”) with some extra effort. That said, the general approach remains the same.
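A sketch of that two-variable idea (Python again), assuming each word stays within the 16 bits a highp float can represent exactly:

```python
def split_u32(v):
    """Split a 32-bit value into a 'high word' and 'low word', each small
    enough to survive a 16-bit-precise highp float."""
    return v >> 16, v & 0xFFFF

def join_u32(hi, lo):
    """Recombine the two words into the original 32-bit value."""
    return (hi << 16) | lo
```

Each word would then be packed into its own pair of channels using the same 1/256 scheme as above.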
