3D engine... Much interest?

Sometimes you go with the heart over the head and something great comes as the result!

I was actually thinking about voxel games when I first got the camera rotation working. At one point I had cubes all over the place to test perspectives and it did feel quite Minecraft-like.

As a gamer, I’m not a fan of the genre at all, but as a developer if somebody wanted to take this and do some voxel projects I’d love to see the result.

Too early to wrap this up as a useful plugin though.

Hmm. UV mapping could be an issue!

I’d built this to render the polygons in strip mode before, iterating through the vertices to create triangles using the last 2 vertices of the previous triangle and the next defined vertex as the three triangle points. This meant that each face of the cube was two triangles, which seemed perfect until I swapped in a checkerboard test image and found that the UVs I’d hacked together for the previous test didn’t work so well.

I’ve now coded in a switch so that each object can be rendered in either fan or strip mode, and I’ve reworked my cubes to use fan mode, with an extra vertex in the middle of each face to fan triangles out from, and properly coded-in UVs for each face instead of my hacked-together test.
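For reference, a single face in fan mode ends up looking roughly like this (the positions are placeholder values from the projection step, and the image name is made up):

```lua
-- One cube face in fan mode: a centre vertex plus the four corners, with the
-- first corner repeated to close the loop, giving 4 triangles per face.
local face = display.newMesh({
    mode = "fan",
    vertices = {
        0, 0,        -- centre of the face (the fan origin)
        -50, -50,    -- top-left corner
        50, -50,     -- top-right
        50, 50,      -- bottom-right
        -50, 50,     -- bottom-left
        -50, -50,    -- back to top-left to close the fan
    },
    uvs = {
        0.5, 0.5,
        0, 0,
        1, 0,
        1, 1,
        0, 1,
        0, 0,
    },
})
face.fill = { type = "image", filename = "checkerboard.png" }  -- placeholder texture
```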

The result is that my test cubes are now 4 triangles per face instead of 2. I.e. [X] rather than <|>. This gives a central point for UV projection, and the result is definitely better, but as this screenshot shows, there’s still a fair bit of skew going on…

Any advice on the best way to lay out triangles for better UV projection?

don’t think it’s solvable with a 2d api alone.  if that top face were a true 3d square instead of a 2d trapezoid then the diagonals (where two triangles abut) would look correct because the uv interpolation from far-corner to far-corner would be the same for both.  (trapezoids don’t have diagonal symmetry)

plus you’d get perspective texturing, which is also missing - ie, although you have perspective geometry, the texture mapping itself is still affine - leading to that “orthographic” sort of look.

further subdivision would reduce the scale of the error, but increase the total number of errors.

might (?) be possible to address with a shader (maybe @StarCrunch chime in?) if you can pass in your pseudo-z per-vertex in order to 1/z correct for it.  (caveat: i haven’t fully thought out the math here, might have to also “un-trapezoid” it in the shader)

Ah, that makes a lot of sense actually. I was thinking the face vertices all worked together to distort the texture using the defined uvs, but if the texture is applied per-triangle then I can see your point with perspectives…

Thinking hat is back on then…

Hmm.

I can make it work using quad distorted rectangles instead of real polygons, but of course this wouldn’t work for anything other than rectangular sides and it wouldn’t allow for properly defined UVs, which makes this approach no good for a 3D engine in my opinion.
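For clarity, the quad-distort version is just a rect with its corners nudged via its path; something like this, assuming the usual RectPath corner order of top-left, bottom-left, bottom-right, top-right:

```lua
-- A plain rect whose corners are offset through its path.
local face = display.newRect( display.contentCenterX, display.contentCenterY, 100, 100 )
face.fill = { type = "image", filename = "checkerboard.png" }  -- placeholder texture

-- Under that assumed corner order, these pull the two top corners inward,
-- turning the rect into a trapezoid like the top face of a cube in perspective.
face.path.x1 = 25   -- top-left corner moves right
face.path.x4 = -25  -- top-right corner moves left
```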

Is this actually turning out to be a bug with how UVs are handled by polygon shapes? It’s as though the UVs of the individual triangle are used to distort the source image whilst keeping it a parallelogram, and then a triangular crop of that image is taken for the render, rather than the UVs being used to take a triangular crop and THEN distorting it to the shape of the rendered triangle, which as far as I can tell would result in the same perspective as with the quad distortion…

FYI polygons are unoptimised in Corona. Deformed rects perform much faster. I went for polygons in my game but performance sucked. Had to go with transformed rects and render speed doubled.

Unless work is done boosting polygon performance you will hit a bottleneck on any more than a few hundred polygons.

I haven’t tested with more than 9 cubes at any one time yet, but at one point I had 4 triangles per face, so a 108-poly render (3 visible faces per cube; the renderer already skips over any that aren’t facing towards the camera, though it doesn’t yet skip over any that would be off-screen or hidden by others), and had no issues. I don’t really want to play at rendering too many until I’ve got more intelligent face culling going on, but to be honest I think if it can pull off a few hundred (I’m guessing you mean literal Corona polygons as opposed to a few hundred triangles?) that’ll be fine. Trying to fit more than that onto a mobile-sized screen at any one time sounds a little insane, and I don’t intend for off-screen visuals to exist at any time, so it’s all about the on-screen count.

That was the general approach with Qiso and it worked well. Any map data that’s loaded in is technically available to read/write at any point, but visually only the area that the screen covers is turned into display objects.
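As an aside, the facing test mentioned above can be as cheap as a winding check on the projected corners of a face; a rough sketch of one common way to do it:

```lua
-- Signed area (2D cross product) of a projected triangle in screen space.
-- The sign tells you the winding order, which flips when a face turns away
-- from the camera, so faces defined with a consistent winding can be skipped.
local function isFrontFacing( p1, p2, p3 )
    local cross = (p2.x - p1.x) * (p3.y - p1.y) - (p2.y - p1.y) * (p3.x - p1.x)
    return cross > 0
end
```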

As you know I have large ISO worlds. I had to abandon polygons (even basic 4 sided ones). I moved to deformed rects for speed.

I originally had polys with strokes but moving to filled rects with transforms was a 4x performance boost.

Corona is optimised for rects, not anything polygon-based.

Ah, for your grounds. I do remember you saying now, yes.

The docs do mention that strokes on polygons are a performance killer when using triangle mode, but I’m not using either strokes or triangle mode (strip and fan at the moment) so that’s not a worry for now.

I’d be interested in knowing though, was your polygon ground literally one polygon per square, or one polygon with hundreds of vertices making up all of the squares?

One polygon per tile.

Moved to one deformed rect per tile with the PNG containing the stroke as part of the graphic.

Now I can have “3D” ground with no performance hit.

Literally the only way I can get Corona to perform decently is to have 2 groups.

I have the world group and the visible group.

I only insert into the visible group what is on screen and that is a major performance boost.

Every time the screen changes I work out what is visible and insert that into the visible group. Everything else gets inserted into the world group, which is invisible.
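Roughly, the shuffle each time amounts to something like this (the names and the on-screen check are just illustrative):

```lua
-- Everything lives in worldGroup (hidden); whatever overlaps the screen gets
-- re-parented into visibleGroup whenever the view changes.
local worldGroup = display.newGroup()
local visibleGroup = display.newGroup()
worldGroup.isVisible = false

local function isOnScreen( obj )
    local b = obj.contentBounds
    return b.xMax > 0 and b.xMin < display.contentWidth
       and b.yMax > 0 and b.yMin < display.contentHeight
end

local function refreshVisible( objects )
    for i = 1, #objects do
        local obj = objects[i]
        if isOnScreen( obj ) then
            visibleGroup:insert( obj )
        else
            worldGroup:insert( obj )
        end
    end
end
```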

There’s little point optimising what already works, but if you ever play with that idea again, I’d suggest trying to split your grid into polygons of say, 10x10 vertices but sticking with the in-png strokes. This way you could make a 100x100 tile grid from just 100 polygons instead of 10,000 rects, and you could still cull any that are off screen (unlike with a single polygon of a ridiculous number of vertices), but you’d also still be able to manipulate individual tiles to get your hills.

My gut tells me that a polygon set-up like that should outperform huge numbers of individual display objects, but again I’ve not done enough with polygons myself to do more than speculate.

Ah, now your two groups approach is an interesting one and not unlike my own.

Corona is obviously built with the intention of handling the rendering for you. You’re supposed to create display objects once and then transition them around, and if using the physics engine etc you pretty much need to go that route.

But I’m old-school. I understand game programming in terms of creating a “game loop” and inside that loop, clearing the screen, figuring out what needs rendering, creating all of the elements, and then looping back around to do the whole thing all over again. This goes against the Corona way and you’d expect it to increase the workload unnecessarily, but like you, I’ve found that it’s still the best way to go.

I store everything in data tables and I reference those from inside an infinite loop that sits figuring out what should be on screen, rendering it all, and then looping back around to clear. For me, if I want a sprite to move 1px to the right, I don’t transition it; I change the appropriate block of data that says “the x of this is…” and then on the next iteration of the loop, that’s where my renderer will draw it.

Swings and roundabouts. On the one hand I can’t just tell a sprite to be a physics object and then throw it around. But on the other hand I can move millions of sprites in an instant by simply changing a single “camera x,y” value which the renderer uses to offset the position those sprites get rendered at on the next iteration. Then it’s just a case of reducing the loop to only iterate the data that correlates to what would be on screen instead of all those millions of sprites, and you’re flying. Table data makes that a breeze.

Example - say you’re creating a 2D platformer and you can only fit say, 11x5 tiles on screen at any one time. Create a 3D table of columns and rows, and then a camera x,y defaulting to 6,3, and then a renderer loop that iterates that table using camera.x - 5 to camera.x + 5 and camera.y - 2 to camera.y + 2. Within that iteration, simply read the data of the 3D table element at that position and render an appropriate sprite. To move one tile over you just need to adjust that camera value, and because your loop is literally ignoring the rest of the table, it doesn’t matter how big it gets. If it fits in the device memory, it’s not too big. Huge maps, millions of tiles, but an instant render of just 55 sprites.
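A bare-bones sketch of that loop, with made-up names for the map table and the tile-drawing function:

```lua
-- Windowed render: only the tiles around the camera are ever touched.
-- Assumes map[row][col] holds a tile id, camera has x/y in tile coordinates,
-- and drawTile turns a tile id into a display object at a screen position.
local TILE = 32

local function render( map, camera, drawTile )
    for row = camera.y - 2, camera.y + 2 do
        for col = camera.x - 5, camera.x + 5 do
            local tile = map[row] and map[row][col]
            if tile then
                -- Screen position is relative to the camera, so moving the
                -- camera one tile shifts everything on the next pass.
                local screenX = (col - camera.x) * TILE + display.contentCenterX
                local screenY = (row - camera.y) * TILE + display.contentCenterY
                drawTile( tile, screenX, screenY )
            end
        end
    end
end
```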

Yeah for sure that works if all your sprites fit in a single atlas (or 2). But when you have hundreds of large pngs and you cannot use atlases, the load time for them becomes a problem.

The other “issue” Corona has, and what grinds performance when you have a lot of items, is background updating. So say you have a group and that contains thousands of items. Each time you move the group, Corona has to go and update every child’s positional data.

Sorry, was just coming off a day of flying and have had crazy DNS issues since then, followed by nephews and niece underfoot.  :slight_smile:

The typical scene -> view projection is basically a similar triangles situation, with one of the legs being the “eye-to-screen” distance and one of the hypotenuses on the other triangle extending from the eye to the “real-world” version of a given pixel. (This is a bit more clear with a picture.) Division is definitely non-linear (it’s iterated subtraction, really), but 1/z is linear and can indeed be interpolated and corrected along the way. Some older consoles like PSX and 3DO skipped this, presumably for efficiency.
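In code terms, that just means interpolating u/z, v/z and 1/z linearly in screen space and dividing back out wherever the actual u,v is needed; a rough sketch along a single edge (the vertex fields here are placeholders):

```lua
-- Perspective-correct UV interpolation between two projected vertices.
-- a and b carry { u =, v =, z = } where z is the pre-projection depth;
-- t runs 0..1 along the screen-space edge.
local function perspectiveUV( a, b, t )
    local invZ   = (1 - t) / a.z + t / b.z                 -- 1/z interpolates linearly
    local uOverZ = (1 - t) * (a.u / a.z) + t * (b.u / b.z)
    local vOverZ = (1 - t) * (a.v / a.z) + t * (b.v / b.z)
    return uOverZ / invZ, vOverZ / invZ                    -- divide back out to get u,v
end
```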

I do think a shader could be adapted to this, but it would be less than ideal and a struggle to cram it all into the rather narrow allowances for data. (And more importantly, difficult to remember the details.) I could see a case for some ways around this (making more vertex uniforms available, allowing general vertex descriptors), although the actual form these might take is totally up in the air. There are definitely ripe opportunities for some major advances here once user contributions are possible. (This is sort of the thing you have to know about to ask for, not to mention make a case in favor of, so it’s rather an uphill battle to get traction.)

My issue at the moment is that I’ve either misunderstood how UV mapping works, or Corona has a bug in its UV mapping algorithms…

Taking just the top face of the cube, this is constructed from a mesh of currently 4 vertices, and UVs mapping each of those vertices to the image corners. I’d expected from my basic understanding of 3D modelling (I’m not a designer, but I’ve dabbled) that the image would be rendered to the whole face, using the UVs to skew appropriately. If this were the case, I’d have gotten the same result as with the most recent screenshot, which is a quad-skewed rect instead.

But it seems that actually, UV mapping renders the image separately to each triangle, and the skew to do that seems to keep the image as a parallelogram. In other words, using a mesh my trapezoid is constructed of a triangle in the top left and a triangle in the bottom right. The top left triangle passes the vertices that are mapped to the top left, top right, and bottom left of the image and the bottom right triangle passes the vertices mapped to the top right, bottom left, and bottom right. Corona, rightfully or not, is therefore calculating a position for the remaining corner that keeps the image sides parallel, then taking a triangular crop for the render, and doing this for each of the two triangles.

Is this the expected behaviour or should the image actually be rendered to the face as a whole, using all of the face UVs for the skew?

If UVs had worked how I expected, then recreating the cubes with 98 vertices instead of 8 (1 vertex per corner, 3 per each side between the corners, and 9 inside each face) would have allowed for 4 rows of perspective to each face, which would create a reasonable perspective when rendering a checkerboard to that top face. Unlike with the quad skewed image test which, unsurprisingly since it has no UVs, just results in a uniform skew.
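For what it’s worth, generating a subdivided face is cheap enough; a hypothetical helper that builds one face as an n x n grid of quads (flat, pre-projection positions, 1-based indices as I understand Corona’s indexed mesh mode to expect) might look like this:

```lua
-- Build one face as an n x n grid of quads (two triangles per cell), so the
-- per-triangle error shrinks as n grows. Vertices and uvs are the flat lists
-- display.newMesh expects; positions here are pre-projection placeholders.
local function subdividedFace( size, n )
    local vertices, uvs, indices = {}, {}, {}
    for row = 0, n do
        for col = 0, n do
            vertices[#vertices + 1] = col / n * size
            vertices[#vertices + 1] = row / n * size
            uvs[#uvs + 1] = col / n
            uvs[#uvs + 1] = row / n
        end
    end
    -- Two triangles per grid cell, indexing into the (n+1) x (n+1) vertex grid.
    for row = 0, n - 1 do
        for col = 0, n - 1 do
            local i = row * (n + 1) + col + 1   -- this cell's top-left vertex
            local j = i + n + 1                 -- the vertex directly below it
            indices[#indices + 1] = i;     indices[#indices + 1] = i + 1; indices[#indices + 1] = j
            indices[#indices + 1] = i + 1; indices[#indices + 1] = j + 1; indices[#indices + 1] = j
        end
    end
    return display.newMesh({ mode = "indexed", vertices = vertices, uvs = uvs, indices = indices })
end
```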

I suppose one thing I could (should?) do is leave the polygons to render an unmapped, centered image with no distortion, but then create that image on the fly by perspective-mapping each of the source image pixels to a new canvas, using my UVs to distort that perspective placement… this, though, is likely not something that can be achieved fast enough outside of an assembly build.  :mellow:

If my original understanding isn’t correct and the way Corona is UV mapping to each triangle isn’t a bug, then this may have to end up a colour-only 3D engine. Any interest in a hybrid 3D - FlatUI engine?  :huh:

if, for the particular display object in question, the primitive that corona emits is GL_QUADS then you won’t get the two-triangle seam tearing.  if otoh corona emits GL_TRIANGLES, then it’ll tear when the resulting implied two-triangle quad is non-symmetric and the uv-mapping requires a true symmetric quad.

(keep in mind that any single triangle has no “knowledge” of what “greater form” it may imply if other adjacent triangles are present - it textures itself based only on its own uv’s and has no access to some other triangle’s far corner of an implied quad)

for the typical case of “real” 3d it doesn’t matter - because the geometry is “static” (you can translate it, but typically not deform it) so if the original uv-mapping was correct, then it’ll remain correct through any transform, whether it was based on triangles or quads.

BUT, if the geometry itself changes by free-form deformation (ie, by other than something like a uniform scaling, or a simple reflection) like a square turning into a trapezoid, then the original uv-mapping will be distorted.  And this is what is happening (constantly) in a renderer like yours.

Suggest you draw a trapezoid on paper.  Now draw one of the diagonals (say lower-left to upper-right).  Now mark a dot at the true midpoint of that diagonal, that’s uv=0.5,0.5.  Now draw dashed lines from that midpoint to the remaining two corners (to upper-left and lower-right).  Now imagine texturing those dashed lines - the top-left one runs from 0,0 to 0.5,0.5 but is a very short distance, the lower-right one runs from 0.5,0.5 to 1,1 (same delta as the other) but covers a longer as-projected-on-screen distance, so clearly one will appear more stretched relative to the other.  But also note that the two lines are not parallel, so they’ll seem “skewed”.

Now draw the other diagonal (from top-left to lower-right) and note that it doesn’t cross the first diagonal at the midpoint (in fact, challenge question:  what WOULD the uv be of that intersection point?)  This difference in where the two implied “middles” are is what causes your problem, because the uv-mapping from all four corners to 0.5,0.5 is not equivalent.  Further, that same situation occurs at ALL points along the diagonal seam between the triangles, and so arises the “seam-tearing” description.

Compare: a true 3d square, just tilted away from the camera to LOOK like a trapezoid. the two diagonals intersect at 0.5,0.5

(though i keep using trapezoids as an easy to visualize perspective example, similar artifacts would occur for any general quadrilateral formed from two non-symmetric triangles)

the casual games that SGS envisions wouldn’t need texture mapping; flat shading would suffice

p.s. answer to challenge question:  that point is where uv 0.5,0.5 would be if you could apply your perspective transform to uv’s
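p.p.s. to make that concrete, a quick numeric check with made-up numbers - a flat 2x2 square on the “floor”, viewed from the origin and projected with a simple divide-by-z:

```lua
-- A 2x2 floor square (y = -1, z from 2 to 4) projected onto a screen at distance 1.
local function project( x, y, z ) return x / z, y / z end

local nlX, nlY = project( -1, -1, 2 )   -- near-left  corner -> (-0.5,  -0.5 )
local frX, frY = project(  1, -1, 4 )   -- far-right  corner -> ( 0.25, -0.25)

-- The projected centre of the square, i.e. where uv 0.5,0.5 lands under a true
-- perspective transform:
print( project( 0, -1, 3 ) )                    --> 0   -0.3333...

-- The two diagonals of the projected trapezoid cross on its axis of symmetry
-- (x = 0); walking the near-left -> far-right diagonal to x = 0 gives that same point:
local t = (0 - nlX) / (frX - nlX)
print( 0, nlY + t * (frY - nlY) )               --> 0   -0.3333...

-- But the affine midpoint of that diagonal (where uv 0.5,0.5 ends up under
-- per-triangle texturing) is somewhere else entirely:
print( (nlX + frX) / 2, (nlY + frY) / 2 )       --> -0.125   -0.375
```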