3D engine... Much interest?

Literally the only way I can get Corona to perform decently is to have 2 groups.

I have the world group and the visible group.

I only insert into the visible group what is on screen and that is a major performance boost.

Every time the screen changes, I work out what is visible and insert that into the visible group. Everything else goes into the world group, which is invisible.
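
Stripped right down it’s basically this (just a sketch - the isOnScreen test is a placeholder for whatever bounds check suits your coordinates):

```lua
-- Two groups: everything lives in worldGroup (hidden), and only what's on
-- screen gets re-parented into visibleGroup whenever the view changes.
local worldGroup = display.newGroup()
local visibleGroup = display.newGroup()
worldGroup.isVisible = false

-- Placeholder test; in practice compare the object's world position
-- against the current camera/screen bounds.
local function isOnScreen( obj )
    return obj.x > -obj.width and obj.x < display.contentWidth + obj.width
       and obj.y > -obj.height and obj.y < display.contentHeight + obj.height
end

-- Call whenever the screen scrolls/changes.
local function refreshVisible( objects )
    for i = 1, #objects do
        local obj = objects[i]
        if isOnScreen( obj ) then
            visibleGroup:insert( obj )   -- rendered
        else
            worldGroup:insert( obj )     -- parked in the hidden group, not drawn
        end
    end
end
```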

There’s little point optimising what already works, but if you ever play with that idea again, I’d suggest trying to split your grid into polygons of, say, 10x10 vertices, but sticking with the in-PNG strokes. This way you could make a 100x100 tile grid from just 100 polygons instead of 10,000 rects, and you could still cull any that are off screen (unlike with a single polygon of a ridiculous number of vertices), but you’d also still be able to manipulate individual tiles to get your hills.

My gut tells me that a polygon set-up like that should outperform huge numbers of individual display objects, but again I’ve not done enough with polygons myself to do more than speculate.
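
Something like this for the chunk culling side, maybe (untested sketch - worldX/worldY are made-up fields for wherever you store each chunk’s map position, and the sizes are illustrative):

```lua
-- Assuming each 10x10-tile block has already been built as a single mesh or
-- polygon, culling becomes one visibility toggle per chunk instead of a
-- check per rect.
local tileSize = 32                 -- illustrative
local chunkSize = 10 * tileSize

local function cullChunks( chunks, cameraX, cameraY )
    for i = 1, #chunks do
        local chunk = chunks[i]
        chunk.isVisible =
            chunk.worldX + chunkSize > cameraX and
            chunk.worldX < cameraX + display.contentWidth and
            chunk.worldY + chunkSize > cameraY and
            chunk.worldY < cameraY + display.contentHeight
    end
end
```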

Ah, now your two groups approach is an interesting one and not unlike my own.

Corona is obviously built with the intention of handling the rendering for you. You’re supposed to create display objects once and then transition them around, and if using the physics engine etc you pretty much need to go that route.

But I’m old-school. I understand game programming in terms of creating a “game loop” and inside that loop, clearing the screen, figuring out what needs rendering, creating all of the elements, and then looping back around to do the whole thing all over again. This goes against the Corona way and you’d expect it to increase the workload unnecessarily, but like you, I’ve found that it’s still the best way to go.

I store everything in data tables and reference those from inside an infinite loop that sits figuring out what should be on screen, rendering it all, and then looping back around to clear. For me, if I want a sprite to move 1px to the right, I don’t transition it; I change the appropriate block of data that says “the x of this is…” and then on the next iteration of the loop, that’s where my renderer will draw it.

Swings and roundabouts. On the one hand I can’t just tell a sprite to be a physics object and then throw it around. But on the other hand I can move millions of sprites in an instant by simply changing a single “camera x,y” value which the renderer uses to offset the position those sprites get rendered at on the next iteration. Then it’s just a case of reducing the loop to only iterate the data that correlates to what would be on screen instead of all those millions of sprites, and you’re flying. Table data makes that a breeze.

Example - say you’re creating a 2D platformer and you can only fit say, 11x5 tiles on screen at any one time. Create a table of columns and rows, and then a camera x,y defaulting to 6,3, and then a renderer loop that iterates that table using camera.x - 5 to camera.x + 5 and camera.y - 2 to camera.y + 2. Within that iteration, simply read the data of the table element at that position and render an appropriate sprite. To move one tile over you just need to adjust that camera value, and because your loop is literally ignoring the rest of the table, it doesn’t matter how big it gets. If it fits in the device memory, it’s not too big. Huge maps, millions of tiles, but an instant render of just 55 sprites.
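
In rough code it’d be something like this (a sketch only - map, spriteForTile and tileSize are placeholders for your own data and tile-building routine):

```lua
-- map[row][col] holds tile data; only the 11x5 window around the camera
-- ever gets turned into display objects each frame.
local camera = { x = 6, y = 3 }
local tileSize = 64  -- illustrative

local function render( map, group )
    -- throw away last frame's sprites
    for i = group.numChildren, 1, -1 do
        group[i]:removeSelf()
    end

    for row = camera.y - 2, camera.y + 2 do
        for col = camera.x - 5, camera.x + 5 do
            local tile = map[row] and map[row][col]
            if tile then
                -- spriteForTile is a placeholder for however you build a tile sprite
                local sprite = spriteForTile( tile )
                sprite.x = ( col - camera.x ) * tileSize + display.contentCenterX
                sprite.y = ( row - camera.y ) * tileSize + display.contentCenterY
                group:insert( sprite )
            end
        end
    end
end
```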

Yeah for sure that works if all your sprites fit in a single atlas (or two). But when you have hundreds of large PNGs and you cannot use atlases, the load time for them becomes a problem.

The other “issue” Corona has, and what grinds performance when you have a lot of items, is background updating. Say you have a group that contains thousands of items: each time you move the group, Corona has to go and update every child’s positional data.

Sorry, was just coming off a day of flying and have had crazy DNS issues since then, followed by nephews and niece underfoot.  :slight_smile:

The typical scene -> view projection is basically a similar-triangles situation, with one of the legs being the “eye-to-screen” distance and the hypotenuse of the other triangle extending from the eye to the “real-world” version of a given pixel. (This is a bit clearer with a picture.) Division is definitely non-linear (it’s iterated subtraction, really), but 1/z is linear and can indeed be interpolated and corrected along the way. Some older consoles like the PSX and 3DO skipped this, presumably for efficiency.
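
In sketch form, nothing Corona-specific (eyeDist stands in for that eye-to-screen leg):

```lua
local eyeDist = 300  -- "eye-to-screen" distance, in the same units as z

-- Similar triangles: a camera-space point projects to the screen by
-- scaling x and y by eyeDist / z.
local function project( x, y, z )
    return x * eyeDist / z, y * eyeDist / z
end

-- Affine texturing interpolates u directly across the screen (wrong under
-- perspective); perspective-correct texturing interpolates u/z and 1/z,
-- which ARE linear in screen space, then divides at each step.
local function perspectiveU( u0, z0, u1, z1, t )
    local invZ = ( 1 - t ) / z0 + t / z1
    local uOverZ = ( 1 - t ) * u0 / z0 + t * u1 / z1
    return uOverZ / invZ
end
```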

I do think a shader could be adapted to this, but it would be less than ideal and a struggle to cram it all into the rather narrow allowances for data. (And more importantly, difficult to remember the details.) I could see a case for some ways around this (making more vertex uniforms available, allowing general vertex descriptors), although the actual form these might take is totally up in the air. There are definitely ripe opportunities for some major advances here once user contributions are possible. (This is sort of the thing you have to know about to ask for, not to mention make a case in favor, so it’s rather an uphill battle to get traction.)

My issue at the moment is that I’ve either misunderstood how UV mapping works, or Corona has a bug in its UV mapping algorithms…

Taking just the top face of the cube, this is constructed from a mesh of currently 4 vertices, with UVs mapping each of those vertices to the image corners. I’d expected from my basic understanding of 3D modelling (I’m not a designer, but I’ve dabbled) that the image would be rendered to the whole face, using the UVs to skew appropriately. If this were the case, I’d have gotten the same result as with the most recent screenshot, which is a quad-skewed rect instead.

But it seems that actually, UV mapping renders the image separately to each triangle, and the skew to do that seems to keep the image as a parallelogram. In other words, using a mesh my trapezoid is constructed of a triangle in the top left and a triangle in the bottom right. The top left triangle passes the vertices that are mapped to the top left, top right, and bottom left of the image and the bottom right triangle passes the vertices mapped to the top right, bottom left, and bottom right. Corona, rightfully or not, is therefore calculating a position for the remaining corner that keeps the image sides parallel, then taking a triangular crop for the render, and doing this for each of the two triangles.

Is this the expected behaviour or should the image actually be rendered to the face as a whole, using all of the face UVs for the skew?
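
For reference, the face in question is built more or less like this (simplified - the vertex positions are made up here, the real ones come out of the projection each frame, and checker.png is just a placeholder texture):

```lua
-- Top face: 4 projected corners as a two-triangle strip, with UVs mapped to
-- the image corners. Strip order TL, TR, BL, BR gives a top-left triangle
-- and a bottom-right triangle, as described above.
local face = display.newMesh( {
    x = display.contentCenterX,
    y = display.contentCenterY,
    mode = "strip",
    vertices = {
        -40, -30,   -- top left (far edge, narrower)
         40, -30,   -- top right
        -80,  30,   -- bottom left (near edge, wider)
         80,  30,   -- bottom right
    },
    uvs = { 0,0,  1,0,  0,1,  1,1 },
} )
face.fill = { type = "image", filename = "checker.png" }  -- placeholder texture
```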

If UVs had worked how I expected, then recreating the cubes with 98 vertices instead of 8 (1 vertex per corner, 3 along each side between the corners, and 9 inside each face) would have allowed for 4 rows of perspective on each face, which would create a reasonable perspective when rendering a checkerboard to that top face, unlike the quad-skewed image test which, unsurprisingly since it has no UVs, just results in a uniform skew.

I suppose one thing I could (should?) do is leave the polygons to render an unmapped, centered image with no distortion, but then create that image on the fly by perspective-mapping each of the source image pixels to a new canvas, using my UVs to distort that perspective placement… this, though, is likely not something that can be achieved fast enough outside of an assembly build.  :mellow:

If my original understanding isn’t correct and the way Corona is UV mapping to each triangle isn’t a bug, then this may have to end up a colour-only 3D engine. Any interest in a hybrid 3D - FlatUI engine?  :huh:

if, for the particular display object in question, the primitive that corona emits is GL_QUADS then you won’t get the two-triangle seam tearing.  if otoh corona emits GL_TRIANGLES, then it’ll tear when the resulting implied two-triangle quad is non-symmetric and the uv-mapping requires a true symmetric quad.

(keep in mind that any single triangle has no “knowledge” of what “greater form” it may imply if other adjacent triangles are present; it textures itself based only on its own uv’s and has no access to some other triangle’s far corner of an implied quad)

for the typical case of “real” 3d it doesn’t matter - because the geometry is “static” (you can translate it, but typically not deform it) so if the original uv-mapping was correct, then it’ll remain correct through any transform, whether it was based on triangles or quads.

BUT, if the geometry itself changes by free-form deformation (ie, by other than something like a uniform scaling, or a simple reflection) like a square turning into a trapezoid, then the original uv-mapping will be distorted.  And this is what is happening (constantly) in a renderer like yours.

Suggest you draw a trapezoid on paper. Now draw one of the diagonals (say lower-left to upper-right). Now mark a dot at the true midpoint of that diagonal; that’s uv=0.5,0.5. Now draw dashed lines from that midpoint to the remaining two corners (to upper-left and lower-right). Now imagine texturing those dashed lines - the top-left one runs from 0,0 to 0.5,0.5 but is a very short distance, the lower-right one runs from 0.5,0.5 to 1,1 (same delta as the other) but covers a longer as-projected-on-screen distance, so clearly one will appear more stretched relative to the other. But also note that the two lines are not parallel, so they’ll seem “skewed”.

Now draw the other diagonal (from top-left to lower-right) and note that it doesn’t cross the first diagonal at the midpoint (in fact, challenge question:  what WOULD the uv be of that intersection point?)  This difference in where the two implied “middles” are is what causes your problem, because the uv-mapping from all four corners to 0.5,0.5 is not equivalent.  Further, that same similar situation occurs at ALL points along the diagonal seam between the triangles, and so arises the “seam-tearing” description. 

Compare: a true 3d square, just tilted away from the camera to LOOK like a trapezoid. the two diagonals intersect at 0.5,0.5

(though i keep using trapezoids as an easy to visualize perspective example, similar artifacts would occur for any general quadrilateral formed from two non-symmetric triangles)

the casual games that SGS envisions wouldn’t need texture mapping; flat shading would suffice

p.s. answer to challenge question:  that point is where uv 0.5,0.5 would be if you could apply your perspective transform to uv’s
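
spelling that out (my own notation - take the drawn diagonal with uv running from 0 at the far corner, depth z_f, to 1 at the near corner, depth z_n, and let t be the on-screen fraction along it):

```latex
% perspective-correct uv along the diagonal, t = on-screen fraction from the
% far corner (uv = 0, depth z_f) to the near corner (uv = 1, depth z_n):
uv(t) \;=\; \frac{t/z_n}{(1-t)/z_f \;+\; t/z_n}
\qquad\Rightarrow\qquad
uv(t)=\tfrac12 \;\text{ at }\; t=\frac{z_n}{z_n+z_f}
```

since z_n is smaller than z_f, that lands short of the on-screen halfway mark, nearer the far corner, which is exactly where the two drawn diagonals cross.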

Bingo!

My original understanding of UV mapping was indeed wrong, but it doesn’t matter. @StarCrunch your link to the psx_retroshader turned out to be all I needed! Down at the bottom of that page is a screenshot that confirmed the results I was seeing are to be expected and that more triangles are all that’s needed, so I went with it. My cubes now consist of 24 triangles per face, which is enough to both fix the checker map AND add the perspective that I was hoping this number of UVs would give me. Perfect!

I’ll do a proper video of this running on Android, but for now https://development.qweb.co.uk/test4/

Not sure how well a HTML build of this will run… it seems fine for me, but in general the HTML builds never perform as well, and this is now a 576 poly scene (48 each for the 3 cubes directly in front of the camera, 72 each for the other 6, because off-screen faces still aren’t culled yet). That front cube should rotate half a degree every 20ms, which is about one full rotation every 14.4 seconds.
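
(The rotation driver is nothing fancier than this, by the way - frontCube and rotateY are just stand-ins for whatever the engine’s objects and rotate call end up being named:)

```lua
-- 0.5 degrees every 20ms, which works out at one revolution every 14.4s.
timer.performWithDelay( 20, function()
    frontCube:rotateY( 0.5 )   -- placeholder names, not final API
end, 0 )                       -- 0 iterations = repeat indefinitely
```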

There’s definitely room for optimisation in this now so if it’s already performing well enough for people in general, then that’s fantastic.

yeah HTML performance is shit compared to proper builds but seems you’ve fixed the UV issue.  Good job mate!

@richard11 Ah, great. Totally unintentional hint, but I’m glad it helped.  :smiley:

Exactly the same as the HTML build, but here’s an APK demo for Android users that properly shows the render quality and performance, even while totally unoptimised: http://development.qweb.co.uk/3D%20test.apk

And a screenshot for non-Android users:

the texture mapping itself is still affine, not perspective - but if no-one besides me notices then who cares?  the additional subdivision is enough to approximate it well enough at this scale.  perhaps you could make the degree of subdivision variable, in case someone needs significantly larger faces (e.g., try a single cube at 9x scale in place of the 9 at 1x - would artifacts return yet?  at 16x?  etc)
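
fwiw, generating that subdivision from a parameter is only a few lines (sketch - in whatever face-space coordinates you project from):

```lua
-- n x n subdivision of a unit face: returns flat vertex, uv and (1-based)
-- index lists, ready to be projected per-vertex and fed to display.newMesh.
local function subdividedFace( n )
    local vertices, uvs, indices = {}, {}, {}
    -- (n+1) x (n+1) grid of vertex positions with matching UVs
    for row = 0, n do
        for col = 0, n do
            vertices[#vertices + 1] = col / n
            vertices[#vertices + 1] = row / n
            uvs[#uvs + 1] = col / n
            uvs[#uvs + 1] = row / n
        end
    end
    -- two triangles per cell
    for row = 0, n - 1 do
        for col = 0, n - 1 do
            local tl = row * ( n + 1 ) + col + 1
            local tr, bl = tl + 1, tl + n + 1
            indices[#indices + 1] = tl; indices[#indices + 1] = tr; indices[#indices + 1] = bl
            indices[#indices + 1] = tr; indices[#indices + 1] = bl + 1; indices[#indices + 1] = bl
        end
    end
    return vertices, uvs, indices
end
```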

The engine will eventually include functions to add all the basic shapes - cube, sphere, cone, pyramid, etc - but also a function to import .obj models, which would obviously mean proper control over UVs etc. There’ll likely also be an ability to edit object data on the fly, so it’d be possible to create something like a cube and then replace a particular side to have more vertices.

I think this is a target to aim for - https://www.youtube.com/watch?v=ZVylCqhs5B8.  Not sure if it is possible?

Or maybe more simple like - https://www.youtube.com/watch?v=0UaIKlpiOZg

Or - https://www.youtube.com/watch?v=XRgU1USrohQ

I think they’re all achievable =). Although that second one looks to be 2D sprites just being enlarged as they get closer to… potentially. Hard to tell at that speed!

The only bit that worries me now is shadows. It’d be great if I can make those work without killing performance but I’m a fair way off being able to get stuck into that I think…

Pushes Corona into new territory!

This might need core integration for render speed?

Into a new dimension really ;-).

This has become a weekend project now. I finally managed to drag myself back off of it to get some of our more important projects progressing, so updates might be a little slow now until it’s complete enough to hit the marketplace. Of course, at that point if the response is positive I’d be able to put more priority towards it again.

Anyway, to update, I’ve added in a bunch of the fundamental functions that make this more of a useful engine now, incorporated most of the general code optimisations that I’d been meaning to add (use of locals etc), added in an adjustable draw distance, and started documenting the existing functions.

I’ve come across a strange bug where if you rotate the camera so that it faces approximately 90 degrees away from an object, the rendering goes totally crazy… I was hoping to get some higher quality video demos online by now but this bug needs squashing first.

Ambient lighting and positioned lights are next on the list. Contrary to my earlier posts, I’ve even now conceptualised a method for casting shadows that might actually work reasonably well and with minimal overhead, but this will need to come later on…

Still need to better optimise the way vertices are defined and introduce more shapes than just the cubes, plus an obj import mechanic so that 3D models can actually be dragged in for rendering. And I still need to implement off-screen culling + probably re-think the way objects in the world are stored, for more efficient iteration of just the relevant area.

Oh, and there’s still intersecting faces to figure out. That’s the biggy.

Ambient lighting works =)

https://youtu.be/O4-dgfAVXpU