Hi, that’s a nice alternative Rob, thanks. I will check its performance (given the number of images).
BTW, we already use the display.save function, and it badly misses an onComplete event, which would streamline development.
What you’re seeing is probably slightly more subtle than the dimensions being matched up; rather, I suspect the same coordinates are used for both textures, as warned about in the limitations section here, so one of them just gets looked up… somewhere.
Which composite effects are you using? Are the frames themselves the same size? If not, should the images completely overlap, or is the larger one meant to be partially uncovered?
@ Rob , @ Brent , anybody else
Am I overlooking any way to get the “texture rect” (either the final uv coordinates or the texture coordinates as specified in the image sheet) of the current frame? (From the image sheet and sequence data themselves, I guess, though it’s a bit annoying to carry those around.) I’ve been running up against this lately with image sheet-based shaders of my own, and it’s holding me back from a suggestion or two here. I could put up a request later, if not.
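In case it helps illustrate what I mean by the “texture rect”, here is a plain-Lua sketch of computing it yourself from the image sheet data. All the names (`textureRect`, the frame fields, the sheet dimensions) are my own, but the fields mirror the `x` / `y` / `width` / `height` entries of an image sheet’s `frames` table:

```lua
-- Given a frame entry (sheet-pixel coordinates) and the sheet's pixel
-- dimensions, compute the frame's texture rect normalized to [0, 1].
local function textureRect (frame, sheetWidth, sheetHeight)
	return {
		u1 = frame.x / sheetWidth, -- left
		v1 = frame.y / sheetHeight, -- top
		u2 = (frame.x + frame.width) / sheetWidth, -- right
		v2 = (frame.y + frame.height) / sheetHeight -- bottom
	}
end

-- e.g. a 64x32 frame at (128, 64) in a 512x256 sheet:
local rect = textureRect({ x = 128, y = 64, width = 64, height = 32 }, 512, 256)
-- rect.u1 = 0.25, rect.v1 = 0.25, rect.u2 = 0.375, rect.v2 = 0.375
```

Of course this is exactly the “carry the sheet data around” approach I was complaining about, but it does get you the numbers.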
Yes, I’ve read that note and I suspect it’s a side effect of using the same coordinates.
It does not matter which effect I’m using; this happens with all of them.
Currently I use “composite.softLight”, but it also happens with normalMapWith1PointLight, overlay and a few others I checked.
The larger image is meant to cover the entire screen while the smaller one is where the user activity happens, so yes.
The frames themselves are not the same size - meaning frame 1 is not the same size as frame 2. But frame 1 in image 1 is the same size as frame 1 in image 2, as they are meant to overlay each other. However, frame 1’s coordinates in image 1 are offset by certain x and y values so that it exactly overlays frame 1 in image 2, so the coordinates are different (which is probably the root of the issue).
To make it even trickier, the coordinates will be normalized to the range [0, 1] by the time they get used!
I’m… not entirely sure which option you were confirming with your “yes”.
Actually, the “larger image” is just throwing me a bit. Is the whole picture already down, but made up of tiles, and individual tiles just get “lit up” or something along those lines?
I asked about the effect since, if it’s a simple enough one, it could just be rewritten to deal with the situation (anything like normal maps where you assign a table to a property is still using undocumented features, unfortunately). It looks like a lot of the formulae, or at least something like them, are publicly available, e.g. here.
No, it’s not a tile map. It’s a simple overlay of two images; it’s just that one of them is smaller than the other.
Okay, I’m just having a hard time trying to reconcile “The larger image is meant to cover the entire screen” with “But frame 1 in image 1 is the same size as frame 1 on image 2 as they are meant to overlay each other.”
If you could provide a mock-up image or something that spells it all out, that would be awesome.
Actually, would it be possible to get a little code snippet? (If you don’t want to give out your assets you could just put smiley faces or whatever on them, so long as the texture coordinates pointed at something meaningful.)
It’s quite simple actually, but I can’t upload an image from the PC here.
Whip up a test app, screenshot, upload to imgur, post a link?
All that to show an image? I’m sure we could have used some better editing tool here instead…
I’ve sometimes had to mess with the image link (I think Dropbox required something like “&web=1” at the end, for instance).
Anyhow, I’ll lay out what I had in mind. Maybe you can come up with something from there.
First off, all but one of the built-in composite effects use fewer than four parameters, and even that one wouldn’t be hard to adjust:
So an equivalent effect should be able to add a couple more without a problem.
Then, you’d have (most of) your custom effect:
local kernel = { category = "filter", name = "my_custom_effect" }

kernel.vertexData = {
	{
		name = "alpha", -- as per most of the composite effects
		default = 1, min = 0, max = 1,
		index = 0
	},
	{
		name = "u", -- a texture coordinate...
		default = 0, min = 0, max = 1,
		index = 1
	},
	{
		name = "v", -- ...the other one
		default = 0, min = 0, max = 1,
		index = 2
	}
	-- still another slot left for whatever
}

kernel.fragment = [[
	P_COLOR vec4 FragmentKernel (P_UV vec2 uv)
	{
		P_COLOR vec4 a = texture2D(CoronaSampler0, uv); // using the normal texcoords
		P_COLOR vec4 b = texture2D(CoronaSampler1, CoronaVertexUserData.yz); // our pair

		// Fill in the rest! (e.g. code from the Wikipedia link earlier)
	}
]]

graphics.defineEffect(kernel)
To get the second set of coordinates you’d do something like the following, for each object:
object.fill.effect = "filter.custom.my_custom_effect"

local frame_data = OverFrames[object.frame]

object.fill.effect.u = frame_data.x / frame_data.width -- in [0, 1]
object.fill.effect.v = frame_data.y / frame_data.height -- ^^

-- Adjust to whichever data format you're using for frame data, of course
It would be handy if some of the code for the built-in effects were released, but as the earlier link showed, many of them can be found in some form elsewhere.
EDIT : Argh, I realized those texture coordinates are constant. Hmm, this is tricky, since it looks like we need a couple more parameters still… (There are ways to cram these numbers together, but they cost some precision, which you probably can’t afford if you want accurate texturing.)
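To show the sort of packing I mean (this is just an illustration in plain Lua, not anything from the Corona API): quantize two values in [0, 1) to 10 bits each and combine them into one number that still fits exactly in a 32-bit float’s mantissa. You get both values into one parameter slot, but only to within 1/1024 of the originals:

```lua
-- Pack two values in [0, 1) into one number, 10 bits of precision each.
local function pack (a, b)
	return math.floor(a * 1024) * 1024 + math.floor(b * 1024)
end

-- Recover the (quantized) values.
local function unpack2 (packed)
	return math.floor(packed / 1024) / 1024, (packed % 1024) / 1024
end

local a, b = unpack2(pack(0.3, 0.7))
-- a and b come back only to within 1/1024 of what went in, which is the
-- precision cost I mentioned -- probably too coarse for exact texturing
```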
I’ll see if I can refine this idea. I need to work out the ins and outs of this problem anyhow.
I wasn’t aware we could build custom effects. I don’t remember them even announcing this feature.
I will have to look into this.
Thanks for the info.
It got less fanfare than it deserved. :) There are a couple posts and a “Custom Shader Effects” sub-forum.
Anyhow, I do have some further ideas on what I mentioned yesterday. Basically, we need six numbers: four of these will define our second texture rect (various representations possible: the two corners; a center and offsets; or a center and two scale factors, relative to the first texture rect). The other two will be the center of the first texture rect (alternatively, the object position). Then the procedure basically becomes:
* Compare the object’s position or texture coordinates (via CoronaTexCoord ) against some reference point, in the vertex kernel
* From that, figure out which corner this is
* Given this, use the rest of our data to calculate the second texture coordinates
* Feed those into a varying and pull them out in the fragment kernel
(These terms should make more sense after you’ve done some research!)
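To make the middle steps concrete, here is the per-corner lookup mocked up in plain Lua (in practice this logic runs in the vertex kernel; all the names and the rect layout here are my own assumptions, with both rects already normalized to [0, 1]):

```lua
-- rect1 / rect2 are the two frames' texture rects as { u1, v1, u2, v2 };
-- uv is this vertex's first-texture coordinate.
local function secondTexCoord (uv, rect1, rect2)
	-- Compare against the first rect's center to figure out which corner this is
	local cx, cy = (rect1.u1 + rect1.u2) / 2, (rect1.v1 + rect1.v2) / 2
	local right = uv.u > cx -- right half?
	local bottom = uv.v > cy -- bottom half?

	-- Pick the matching corner of the second rect
	return {
		u = right and rect2.u2 or rect2.u1,
		v = bottom and rect2.v2 or rect2.v1
	}
end

local rect1 = { u1 = 0, v1 = 0, u2 = 0.5, v2 = 0.5 }
local rect2 = { u1 = 0.5, v1 = 0.5, u2 = 1, v2 = 1 }
local uv2 = secondTexCoord({ u = 0.5, v = 0 }, rect1, rect2) -- top-right corner
-- uv2.u = 1, uv2.v = 0.5
```

The shader version would then write the result into the varying, which is the last step above.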
Right now, “uniform userdata” aren’t available, or it would be pretty easy to feed in those six inputs. As it is, my best idea right now would be to just make a few effects, one per (overlay) tileset, and then hard-wire the sprite data into the code, e.g.
local function Vectors (sprite_data)
	local n = #sprite_data
	local vectors = ("P_POSITION vec4 Vectors[%i];\n\n"):format(n)

	for i = 1, n do
		local data = sprite_data[i]

		vectors = vectors .. ("Vectors[%i] = vec4(%i., %i., %i., %i.);\n"):format(i - 1, data.x, data.y, data.width, data.height)
		-- the dots are just to make sure they're floats
	end

	return vectors
end

kernel.vertex = [[
	varying P_UV vec2 uv2; // Our second set of texture coordinates; add this to the fragment shader too

	P_POSITION vec2 VertexKernel (P_POSITION vec2 position)
	{
]] .. Vectors(MySpriteData) .. [[
		P_POSITION vec4 rect = Vectors[int(CoronaVertexUserData.y)]; // y = frame index - 1

		// Do calculations
		// Set uv2...

		return position;
	}
]]

print(kernel.vertex) -- make sure it looks okay :)
Err, that entails some changes to what I wrote above too, but they should be fairly straightforward.
Anyhow, this is a lot to throw at you at once! Maybe it gives you some ideas, though.