something equivalent to ShapeObject's path on a *sprite*?

Here’s what I’m trying to do: I need a “deformable textured quad”, which I can easily manage with display.newRect(), setting its fill to an image, then adjusting its path properties (the x1/y1 … x4/y4 corner offsets).
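(For anyone following along, here’s a minimal sketch of what I mean — file name and offsets are just placeholders:)

```lua
-- Create a rect and give it an image fill
local rect = display.newRect( display.contentCenterX, display.contentCenterY, 128, 128 )
rect.fill = { type="image", filename="images/fill1.png" }

-- Deform it by offsetting corners of its RectPath
rect.path.x1 = -20   -- top-left corner, x offset
rect.path.y1 = 10    -- top-left corner, y offset
rect.path.x4 = 20    -- top-right corner, x offset
```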

However, I also need that rect to have multiple frames, like a sprite, preferably sourced from an image sheet (as opposed to discrete images).

Why?  Because I need to change frames a lot, and sprite:setFrame() is far faster than something like:

-- once:
local rectFills = {
	{ type="image", filename="images/fill1.png" },
	{ type="image", filename="images/fill2.png" },
}

-- frame loop:
rect.fill = rectFills[frameCount % 2 + 1]

I want about a hundred such rects, and the performance of setting rect.fill can’t match calling sprite:setFrame().  (And yes, all images remain cached, of course — no per-frame loading going on.)
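(For comparison, the fast path I’m measuring against is just the standard image-sheet sprite — frame counts and file names are placeholders:)

```lua
-- once: image sheet + sprite
local sheet = graphics.newImageSheet( "images/fills.png",
	{ width=64, height=64, numFrames=2 } )
local sprite = display.newSprite( sheet,
	{ name="fill", start=1, count=2 } )

-- frame loop:
sprite:setFrame( frameCount % 2 + 1 )
```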

[[I can toggle between two instances of my renderer to compare them.  Though, of course, the sprite method doesn’t look right without the deformations, but the function to deform is still “called”, that is: all the same path-setting math occurs even though it has no effect.  So “essentially” the only difference is rect.fill= versus sprite:setFrame().  On a certain test device the sprite version runs 3.8ms/frame, while the rect.fill version runs 22.3ms/frame, about 5-6x slower.  The ratio is similar on desktop simulator.]]

But alas, I can’t deform the sprite (or at least not as far as I know!), and that’s a deal-killer too.

So… Q:  Is there any other combination of features (even if somewhat “wacky”) that might mimic the desired result?  That is: a “deformable textured quad” with frame-changing performance equivalent (or at least comparable) to sprite:setFrame()?  There’s lots of new stuff in graphics/display, and I’m hoping I’ve just overlooked a possibility.

Thanks in advance!

The deformation persists across frames, I assume? How many such sprites will you have?

Wackiness:

There are a few shader-based approaches I might suggest, but vertex userdata would be a non-starter: there are only four floats available, while you’d need four corners with x and y data, plus the center and extent of the sprite. Uniform userdata would kill batching, though if you had a lot of the same sprite, a data texture might be the way to go.

No examples or further details, since I need to get to sleep fairly soon, but I could elaborate later if those sound interesting.

That said, you could also make a four-vertex mesh and just directly update the corner positions and frame UVs (these being normalized [0, 1] coordinates rather than integer offsets).
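A minimal sketch of that mesh approach, assuming a two-frame sheet where each frame spans half the texture (vertex positions and UV values are placeholders):

```lua
-- Four-vertex quad as a triangle strip
local mesh = display.newMesh{
	mode = "strip",
	vertices = { 0,0,  0,100,  100,0,  100,100 },
	uvs      = { 0,0,  0,1,    0.5,0,  0.5,1 },  -- left half = frame 1
}
mesh.fill = { type="image", filename="images/fills.png" }

-- Deform by moving a corner directly:
mesh.path:setVertex( 1, -20, 10 )

-- “Change frame” by shifting the UVs to the right half of the sheet:
mesh.path:setUV( 1, 0.5, 0 )
mesh.path:setUV( 2, 0.5, 1 )
mesh.path:setUV( 3, 1.0, 0 )
mesh.path:setUV( 4, 1.0, 1 )
```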

Thanks for the ideas; gets me thinking. Off to play… :slight_smile:

But meshes won’t work (test attached).  I can get a 4-vertex quad using either fan or strip mode (the only difference being which way the diagonal winds), and deform it, but the texture appears to “crease” along the diagonal whenever opposite sides aren’t parallel, due to the per-triangle linear interpolation of uvs across that diagonal.  And newMesh() doesn’t seem to have a “quads” (or “quadstrip”) mode that might better mimic the “flat” bilinear 2d distortion that newRect’s path deformation provides.   :frowning:   If it did, then yes, modifying the uvs to accomplish frame changes would work (I’d just need to test performance for all that xy/uv coordinate twiddling).

Ah, right. I suppose parallel lines are getting broken.

Near as I can tell from the peculiar uvs one ends up seeing, a parallel projection of some sort might be used when a path distortion is taking place.

Affine spaces do preserve parallel lines, so I suspect using affine uvs might work, though the shader would have to do the legwork. I do have some four-node quad code and puttered around a bit, unfortunately without success. I haven’t really documented it, though, and it’s a bit obscure, so I’m not sure I was using it right.  :slight_smile:

Actually, if you’ll only have a small number of frame sizes, one way to go might be to make frame-sized canvas textures and rig them up as fills on your deforming objects. Whenever a frame changes, draw it into the appropriate canvas and invalidate it. (Another idea I’ve had, to avoid varying sizes, is a shader that simply encodes its uvs as colors into a 2x2 canvas, using nearest filtering. So you give it a sprite frame and capture the corner pixels. Then, in the regular shader, decode those and use them to do lookups as if from a normal image. But this might be expensive on some older hardware that doesn’t like dependent texture fetches.)
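A rough sketch of the canvas idea, using Solar2D’s canvas texture API (sizes, sheet, and helper names are placeholders):

```lua
-- One canvas per frame size; its filename/baseDir can be used as a fill
local canvas = graphics.newTexture{ type="canvas", width=64, height=64 }

rect.fill = { type="image", filename=canvas.filename, baseDir=canvas.baseDir }

-- On a frame change: draw the new frame into the canvas, then invalidate
local function setCanvasFrame( sheet, frameIndex )
	local frame = display.newImageRect( sheet, frameIndex, 64, 64 )
	canvas:draw( frame )           -- queued until the canvas renders
	canvas:invalidate( "cache" )   -- re-render the canvas contents
end
```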
