Corona is the only runtime not supporting meshes?

Hey gang. This thread is getting a little long, and it’s really no longer about us not supporting meshes. Can I suggest that you start new, more focused threads for questions about Spine-Corona integration? Spinehelper should certainly be its own thread.

Thanks

Rob

FWIW, I posted some info on the feature request.

Thanks Nate.

Just curious to clarify what you said in the feature request comments:

  1. Are you offering general advice that the implementation of meshes in Corona (when/if that starts happening) should favour a single scene graph node because it’s more efficient?

  2. Or are you commenting on work that’s already been done on this (in Corona or in StarCrunch’s work), saying that its individual-scene-node approach is not ideal?

Yes, to both. I haven’t had a chance to look into StarCrunch’s work, but using individual scene graph nodes is currently the only option because Corona doesn’t have an API to render meshes. An API that did would be relatively simple:

clear()
Clears all vertex data.

add(texture, vertices, uvs, triangles, r, g, b, a, additive)
This stores vertex data that will be rendered later. Parameters:

texture is the image to use to texture the mesh.
vertices is an array of x,y pairs for each vertex.
uvs is an array of u,v pairs for each vertex (matches the vertices array).
triangles is an array of vertex index triplets.
r, g, b, a is the color to tint all vertices.
additive, if true, means the mesh should be drawn with additive blending.

Let’s call the Corona object that has these two methods a “MeshBatch”. The Spine runtime would go through the attachments and call add() for each one. The add() method would store (copy) all the parameters in a list. Presumably the MeshBatch has a single, large VBO (or VAO). When the MeshBatch needs to draw, it goes through the list, copies the vertex data into the buffer, and renders it.

This API is simple to use and allows the MeshBatch to handle texture and additive blending changes. If the MeshBatch encounters a texture (or a blend mode change, if PMA is not used) that differs from the last one, it renders what has been batched so far and starts a new batch for the new texture.
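To make the flow concrete, here is a minimal, hypothetical sketch of such a MeshBatch in plain Lua. The names and the renderFunc callback are illustrative stand-ins for the real GPU work; this is not an actual Corona API:

```lua
-- Hypothetical sketch of the MeshBatch described above, in plain Lua.
-- The renderFunc callback stands in for the real GPU work (uploading to a
-- VBO and issuing a draw call); none of this is an actual Corona API.
local function copyArray(t)
  local c = {}
  for i = 1, #t do c[i] = t[i] end
  return c
end

local MeshBatch = {}
MeshBatch.__index = MeshBatch

function MeshBatch.new(renderFunc)
  return setmetatable({ items = {}, render = renderFunc }, MeshBatch)
end

-- Clears all stored vertex data.
function MeshBatch:clear()
  self.items = {}
end

-- Stores (copies) vertex data to be rendered later.
function MeshBatch:add(texture, vertices, uvs, triangles, r, g, b, a, additive)
  self.items[#self.items + 1] = {
    texture = texture,
    vertices = copyArray(vertices),   -- x,y pairs
    uvs = copyArray(uvs),             -- u,v pairs, parallel to vertices
    triangles = copyArray(triangles), -- vertex index triplets
    r = r, g = g, b = b, a = a,
    additive = additive,
  }
end

-- Walks the stored items, flushing a batch whenever the texture or blend
-- mode changes, so runs that share state go out in a single draw.
function MeshBatch:draw()
  local batch, lastTexture, lastAdditive, flushes = {}, nil, nil, 0
  for _, item in ipairs(self.items) do
    if item.texture ~= lastTexture or item.additive ~= lastAdditive then
      if #batch > 0 then
        self.render(batch)
        flushes = flushes + 1
      end
      batch, lastTexture, lastAdditive = {}, item.texture, item.additive
    end
    batch[#batch + 1] = item
  end
  if #batch > 0 then
    self.render(batch)
    flushes = flushes + 1
  end
  return flushes
end
```

Keeping the flush decision inside draw() is what lets callers add() attachments in draw order without worrying about where the texture or blend state changes.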

Many game toolkits use exactly what I’ve described. Often they are able to draw immediately when add() is called, if the texture or blending changes. Examples of that are:

libgdx, Java:
https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/graphics/g2d/SpriteBatch.java
Starling, AS3:
https://github.com/EsotericSoftware/spine-runtimes/blob/master/spine-starling/spine-starling/src/spine/starling/PolygonBatch.as
cocos2d-x, C++:
https://github.com/EsotericSoftware/spine-runtimes/blob/master/spine-cocos2dx/3/src/spine/PolygonBatch.cpp

Corona’s MeshBatch would need to store the data passed to add() because it doesn’t draw until later. Other game toolkits also do this, e.g.:

XNA, C#:
https://github.com/EsotericSoftware/spine-runtimes/blob/master/spine-xna/src/MeshBatcher.cs

This API is slightly fancier than the one I described because it pools “MeshItem” objects. Instead of calling add(), you call NextItem() and configure the returned MeshItem with the vertex data. After it draws, it clears the MeshItems, but Corona’s API would keep the MeshItems until clear() is called.
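Transposed to Lua for consistency with the rest of this thread, the pooling idea might look roughly like this (the names here are illustrative, not the actual XNA MeshBatcher API):

```lua
-- Rough Lua transposition of the MeshItem pooling idea (illustrative names,
-- not the actual C# MeshBatcher API). The caller asks for the next pooled
-- item and fills it in place, so nothing is reallocated frame to frame.
local Batcher = { items = {}, count = 0 }

function Batcher:NextItem()
  self.count = self.count + 1
  local item = self.items[self.count]
  if not item then
    -- Allocated once, then reused on every subsequent frame.
    item = { vertices = {}, uvs = {}, triangles = {} }
    self.items[self.count] = item
  end
  return item
end

-- After drawing, keep the items for reuse; only the logical count resets.
-- (Corona's version would instead keep them until an explicit clear().)
function Batcher:Reset()
  self.count = 0
end
```

The point of the pattern is that the second frame's NextItem() calls hand back the same tables the first frame filled in, so per-frame garbage stays near zero.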

Anyway, Corona could either provide an API like the above (which is generally useful for mesh rendering even without Spine), or a single scene graph node that renders an entire Spine skeleton. Since a single node would do the batching at draw time, it could draw immediately when add() is called (like libgdx, Starling, cocos2d-x) and wouldn’t need to cache as much or store it for as long (like XNA).

The other thing I’d like to see for Corona is a texture atlas with named regions. Using numbers for regions is extremely limiting.

Hi Nate.

The possibility of being able to swap attachments in and out was one of the reasons I’ve been hesitant to PR the parts specific to Corona (I’m the same ggcrunchy from GitHub, by the way). I did try to follow the approach used by images, but could only really guess about how it would all unfold in practice.

The technique I used was to generate all the triangles I would need and stuff them into a group. Every triangle has the same geometry: (0, 0), (1, 0), (0, 1). The x- and y-coordinates are supplied to them as shader inputs, aggregated into arrays. Ditto the uvs.

In the vertex shader, the index is resolved as x * 2 + y (or maybe the other way around, I forget), so 0, 1, or 2. This is used to look up the x and y, which the shader emits, and the uv, which is passed along to the fragment shader. From there it’s business as usual.

On mesh creation, I build a vertex-to-triangle lookup data structure. When a position or uv is updated, the change is propagated to all affected triangles.
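A hypothetical sketch of such a vertex-to-triangle lookup in plain Lua (the names and the flat-array layout are my assumptions, not the code in StarCrunch’s library):

```lua
-- Illustrative sketch of a vertex-to-triangle lookup (my names, not
-- StarCrunch's actual code). Each mesh vertex records which triangles
-- reference it and in which corner slot, so updating a position can be
-- propagated to every affected triangle's shader-input arrays.
local function BuildVertexLookup(triangles)
  local lookup = {}
  for t = 1, #triangles, 3 do
    for corner = 1, 3 do
      local v = triangles[t + corner - 1]
      lookup[v] = lookup[v] or {}
      local refs = lookup[v]
      refs[#refs + 1] = { tri = (t - 1) / 3 + 1, corner = corner }
    end
  end
  return lookup
end

-- Propagates a new (x, y) for vertex v into each affected triangle's
-- position array, laid out as { x1, y1, x2, y2, x3, y3 }.
local function UpdateVertex(lookup, positions, v, x, y)
  for _, ref in ipairs(lookup[v] or {}) do
    local tri = positions[ref.tri]
    tri[ref.corner * 2 - 1] = x
    tri[ref.corner * 2] = y
  end
end
```

The redundancy mentioned above is visible here: a vertex shared by n triangles gets written n times, once into each triangle's own array.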

This is a bit redundant and obviously not as performant as a dedicated solution would be, but does the job. In small meshes we should be below the break-even point anyhow. (I have two alternative techniques that forgo the lookup shenanigans, storing the uvs, and in one case the positions as well, in textures. This allows for batching, but imposes some annoying limits. Namely, they need to satisfy some GLSL limits, which constrain x and y to being integers in [0, 1024]. Some of this can be avoided by shifting the local coordinates, but that falls apart if the mesh itself has dimensions in excess of 1024.)

I actually assumed color worked at a finer granularity (e.g. per vertex), so I was meaning to do some refactoring. If not, both that and the blend mode would simply entail iterating the triangles and setting the appropriate property.

The texture switch might be a bit more brutal. I think swapping the fill will wipe all the shader properties. (Maybe I could pester a little bit to get some support for reading out the properties. I’m doing this in a slightly not-yet-official way…)

A clear could be effected by just throwing away this whole data structure, really. At the moment this won’t be terribly fast, but some judicious recycling of triangles ought to mitigate that. For one, the last paragraph’s concern would be irrelevant.

I never looked into the atlas thing, simply because I didn’t have a test case (kind of a vicious cycle :)), but I think this could be a simple wrapper over a few other things.

Agreed about the general usefulness of meshes. Sadly, I’ve been distracting myself with other little projects and haven’t gotten around to playing with this stuff more generally.

First off, a big thanks to StarCrunch for getting meshes working.

I have been working with the library from StarCrunch to use meshes on my project, but I am running into an issue. I need to be able to scale the animated character, but when I try scaling the character, everything scales except for the parts that are meshes. Maybe I am missing something, but is there a way to get the meshes to scale with the rest of the character? Or at least get a handle for the meshes to scale them manually? Hopefully this makes sense. 

Thanks!

@tymadsen

As I said in the previous post, the underlying geometry is a soup of triangles with local coordinates: (0, 0), (1, 0), (0, 1). I take advantage of the fact that these will also happen to be the texture coordinates. In the mesh code, in impl/unbatched.lua, you’ll see this line:

return corner + pos - CoronaTexCoord;

CoronaTexCoord will match one of the expected local coordinates. It’s subtracted from pos (Corona’s calculated position for the corner) since we don’t actually care about this little local triangle: we only needed something that wasn’t degenerate. So all three corners will map to the content coordinate of (0, 0), and then corner is added to that.

Scaling will slightly break this assumption. It should already be baked into pos (are the meshes scaling, but wrongly?), so you’d need to scale the correction as well as the corner’s contribution. Something like:

return pos + (corner - CoronaTexCoord) * scale;
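To check the arithmetic, here is the corner math modeled numerically in plain Lua, under the simplifying assumption that pos = origin + localCorner * scale (i.e., the object’s scale is already baked into Corona’s calculated position, as suggested above); every name here is illustrative:

```lua
-- The corner math above, modeled numerically. Assumption: Corona's computed
-- position for a corner is pos = origin + localCorner * scale (a deliberate
-- simplification of the real transform); all names are illustrative.
local origin = { x = 100, y = 50 }    -- where the triangle sits in content space
local meshCorner = { x = 30, y = 40 } -- the mesh vertex this triangle represents
local scale = 2

-- The three non-degenerate local corners, which double as CoronaTexCoord.
local localCorners = { { x = 0, y = 0 }, { x = 1, y = 0 }, { x = 0, y = 1 } }

-- Returns the x the shader would emit under the original formula
-- (corner + pos - CoronaTexCoord) and under the scaled fix
-- (pos + (corner - CoronaTexCoord) * scale).
local function emittedX(c)
  local posX = origin.x + c.x * scale
  local originalX = meshCorner.x + posX - c.x
  local fixedX = posX + (meshCorner.x - c.x) * scale
  return originalX, fixedX
end
```

With scale = 1 the two formulas agree; with scale = 2 the original drifts by (scale - 1) per unit of local coordinate, while the fix maps all three corners to the same content point (origin + meshCorner * scale).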

There’s still some unused space in the mesh data structure, so this might be possible to hack in. My thinking is, in either Populate() or ResolveIndices() in that same file, you could sneak in a little dummy rect:

mesh.dummy = display.newRect(mesh, 0, 0, 1, 1)
mesh.dummy.isVisible = false

then, in UpdateUV(), before the loop, check its size:

local w, h = mesh.dummy.contentWidth, mesh.dummy.contentHeight

and in the loop itself replace the lines

for i = 7, 9 do uvs[i] = 0 end

with

uvs[7], uvs[8], uvs[9] = w, h, 0

to spam its scaling to all triangles. I believe the uvs are constantly updated so this should stay in sync.

(I used a 3x3 matrix to be able to randomly access the corners’ three coordinates but had no use for the third column.)

Finally, amend that line earlier to:

return pos + (corner - CoronaTexCoord) * vec2(u_UserData3[2][0], u_UserData3[2][1]);

and cross your fingers.  :slight_smile:

This is all off the cuff (it’s a bit late here) but seems at least plausible to me. In theory a similar idea could be used to effect rotations, but that’s left as an exercise for the reader.

@StarCrunch

Thanks for the reply! I tried your suggestion, and the mesh is scaled initially, but it isn’t updated during animations if the scale changes. I tried multiplying the height and width of the dummy rect by a scale I pass into the UpdateUV function, but to no avail. Maybe I am trying to scale the wrong thing here?

I am not familiar enough with the code you have written to really understand what is going on with the UVs and triangles. I am fairly new to using Spine and meshes, so most of the technical stuff goes over my head. Anyhow, I appreciate you taking the time to help. For now I will have to find a different solution.

Thanks again.

@tymadsen If I find a chance, maybe I’ll putter around with it, though it might be a while. Seems like a reasonable thing to have available.

Great to see this has been started on http://feedback.coronalabs.com/forums/188732-corona-sdk-feature-requests-feedback/suggestions/8614108-add-meshes-support-for-spine

Will keep an eye on developments. Any word as to how long this will take?

We can’t provide dates. Got a lot of testing to do.

Rob

Yes, this is great news. I’d expect some sort of display.newMesh ?

It will be something like that.

I hope not. Imagine a skeleton with many different mesh attachments. Usually not all are visible at the same time. Are all the mesh scene graph objects created up front for every skeleton instance? Are they created as needed? Are they destroyed when no longer in use? The skeleton would have to jam the visible scene graph objects into the scene and shuffle them around as their draw order changes. All of this adds complexity and hurts performance. It is avoided by using a MeshBatch type of implementation like I described above. It is very simple and removes all the unnecessary layers, and not just for Spine rendering.

Certainly it would be counterproductive to come up with a solution that doesn’t take real-world use into account. Esoteric is handing over the design of this on a plate. Is it safe to believe that Corona engineers are working in tandem with Esoteric on the implementation of this feature?

May I respectfully request that we don’t speculate and wait until we actually release this?

Thanks

Rob

I’m sure the Corona guys will do what they feel is best considering all the goals they have.

Composition of multiple scene graph objects has overhead. It’s quite common for frameworks where use of a scene graph is mandatory to need lower-level rendering to bypass that overhead. Also, by far the biggest drawback to using a scene graph is that it couples the model and view. In this case the model is the Spine data: we have all the data for what we want to render; it’s just a matter of giving it to Corona for rendering. Using multiple scene graph nodes for rendering means the data gets spread across many of them. The same pattern repeats for other rendering, so a lower-level batching API would be widely useful. Plus, it’s likely this functionality already exists behind the scenes; it’s just a matter of exposing an API.

How is it going?

We are making good progress.

Rob

In fact, here is more information on the progress: https://forums.coronalabs.com/topic/63441-displaynewmesh-preview/#entry329267

Rob