3D engine... Much interest?

I think it’s pseudo once you start doing 3D projections/illusions on top of already processed 2D projections…

That’s kind of my point though. The engine I’m making has x,y,z… three variants of it, in fact. The world is an x,y,z grid, individual objects have internal x,y,z space relative to their centres, and the camera has x,y,z space relative to itself, where z is how deep into the screen you’re looking. Programmatically, in this regard, it’s ‘proper’ 3D. But beyond that it’s still just drawing 2D shapes at projected x,y coordinates on a flat screen, creating the illusion of a 3D environment. It’s not really actual 3D, is it? Just maths and perspective trickery. Corona/Solar2D is, after all, a 2D kit.
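
To picture that projection step, here’s a rough sketch with made-up names, ignoring camera rotation for brevity (not the engine’s actual code): a world point gets shifted into camera-relative space, then the perspective divide collapses z into a 2D screen position.

```lua
-- Hypothetical helper: world x,y,z in, screen x,y out.
local function projectToScreen(point, camera, focalLength)
    -- Shift the world point into camera-relative space.
    local x = point.x - camera.x
    local y = point.y - camera.y
    local z = point.z - camera.z

    if z <= 0 then return nil end  -- behind the camera; nothing to draw

    -- Perspective divide: greater depth pulls points toward screen centre.
    local scale = focalLength / z
    return display.contentCenterX + x * scale,
           display.contentCenterY + y * scale
end
```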

The same could be said for 3D modelling software. Blender for example. It’s doing a far better job of it than I could but at the end of the day it’s still just taking pseudo world coordinates and projecting those to create something that looks 3D on a 2D screen. The lighting is better and it does all sorts of fancier maths to create reflections and whatnot, but at the core it’s still just maths and perspective trickery. Your screen is still x,y space.

So, take a 3D render from Blender and use that as a backdrop in a game. Take another 360 renders of a character, one per degree of rotation, to create a 2D sprite that can move around on top of that backdrop, creating the illusion of a 3D scene from just two 2D image files - a backdrop and a sprite sheet - and a bit of maths. What do you have, 2D or 3D? We’d all call this fake, including me, but my line would be blurry. The grunt work was preprocessed, creating images that can just be dropped right in rather than having to do that work on the fly in-game, but either way you’re looking at a 3D environment. You’re moving a character around a world that has some perspective to it, and you’re able to turn that character around to see all sides. It’s technically just as 3D as anything else, but less resource intensive. More efficient.
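
As a concrete sketch of the 360-render idea (all names hypothetical): with one pre-rendered frame per degree of rotation packed into a sprite sheet, picking the right frame is just arithmetic on the character’s facing angle relative to the camera.

```lua
-- Map a facing angle (degrees) to one of 360 pre-rendered frames, 1-based.
local function frameForAngle(facing, cameraAngle)
    local relative = (facing - cameraAngle) % 360  -- 0..359, camera-relative
    return math.floor(relative + 0.5) % 360 + 1    -- round and wrap to 1..360
end

-- e.g. characterSprite:setFrame(frameForAngle(player.heading, camera.angle))
```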

It would be great to be able to control a camera on x/y/z and to work with .fbx or .obj files. I agree that a pseudo 3D would be fantastic, without having to push the engine too far.

I do love that this thread is still active :grinning:. Definitely picking back up on this as soon as I’ve gotten the new marketplace launched. Shouldn’t be long now…

That’s good news, Richard - any move toward some kind of 3D and perspective views would be a great step forward for the engine.

@richard11 - even a simple 3D engine with a coordinate system that calculates how to view a 2D world with real 3D perspectives and lighting would be awesome! That should be the first step, IMHO. Until that’s working, I wouldn’t worry about more complex rendering.


@richard11 At long last, I’ve got 3D objects up and running with hardware depth buffer support and integrated into the display hierarchy, details here. Only tested on Windows, but I’ll see about Mac in a couple days. (As for the others, I still need to learn the respective processes.)

Apart from testing and such, I might take a swing at a Vulkan backend before finally submitting the PR, just to tease out any major GL-centric assumptions it might be making, but I could maybe put builds up somewhere. In any case, the code can be forked in the meantime.

Also, the 3D things I do have are pretty low-level, basically at the “building blocks” level. So it’s stuff you could build on top of in your own case, rather than a competing idea.


Brilliant work. I’m guessing the video you made explains everything, so I’ll jump on that asap - I don’t currently have enough time to sit and watch, but I’m sure I’ll get a chance at some point today.

I see you’ve thrown some depth stuff in there, which is likely the part I’m most interested in. I’m definitely intrigued at least, and hopeful this is something I can make use of.

@StarCrunch I’ve watched the vid now. Living on an F1 race track must be interesting, ha.

As usual with your work, I’ve only loosely followed what you’re doing, to be honest. This stuff clearly makes sense in your head, but in words I was lost within minutes… probably because I don’t speak OpenGL.

I think I followed what you were trying to demonstrate with the stencil buffer though, and your 3D vector example at the end looks fantastic. I think you’re basically taking the same rendering approach as I am, but through OpenGL directly, where I’m having to translate the 3D vertex locations back into 2D screen positions myself.

For me to implement z-buffering within my renderer would mean iterating over each pixel of a rendered shape, calculating a pseudo z value for that pixel based on its proximity to the shape’s defined vertices and their own z values, and then checking that figure against my own buffer to see whether there’s already a drawn pixel with a closer z - punching holes in the current shape if so, and writing the new z to the buffer if not. This is where I’m going to struggle with performance.
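
In rough terms, something like this per pixel (a sketch, not my actual code; the buffer would be a flat array reset to math.huge each frame):

```lua
-- Returns true if this pixel is nearer than anything drawn there so far.
local function depthTest(zBuffer, width, px, py, pz)
    local index = py * width + px + 1  -- 1-based flat index into the buffer
    if pz < zBuffer[index] then
        zBuffer[index] = pz            -- nearer: claim the pixel and draw it
        return true
    end
    return false                       -- something closer is already drawn
end
```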

If you’re passing the 3D positions of each vertex to OpenGL though, and OpenGL already does z-buffering at a hardware level, does this mean your shapes will just automagically inherit per-pixel depth checks, and intersecting faces will effectively just work out of the box?

If that’s correct then this plugs the hole perfectly. If you can throw together some more Lua-side examples on how to actually use this stuff, that would be brilliant. If I’ve understood enough of it, I don’t think it would be too tricky to rework my engine to pass 3D polygons instead of translating them back into 2D, and handing the z-buffer work over to OpenGL should be a performance leap.

Quite excited by this. Thanks!

@richard11 Yep, the bit about the hardware z checks is the key point.

Attached is what I was using in that second video: main.lua (6.2 KB)

I added in a quick bit with polygons too. Actually, I’d been thinking these past few days that those were mysteriously not working, but my test case here was just at a right angle to the view direction, more or less. After some rotation, there it was! sigh

The “scoped group” and “depth state” are some of the novelties. The scoped group is a display group that also broadcasts a message to its children before and after drawing. The depth state is a consumer of those messages, which says “remember what the GL state is now” and later “restore what I remembered”. The depth state is itself a “display” object, and when it “draws” it will update the GL state according to whatever properties it has assigned.
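
Very roughly, usage would look something like this - a simplified, hypothetical sketch of the idea rather than the exact API (the real details are in the attached main.lua):

```lua
-- The scoped group broadcasts "about to draw" / "done drawing" messages;
-- the depth state listens and sets / restores GL state accordingly.
local scene = scopedGroup.new()

local depth = depthState.new()
depth.enabled = true     -- turn hardware depth testing on within the scope
depth.func = "less"      -- nearer fragments win, the usual convention
scene:insert(depth)

scene:insert(someMesh)   -- objects after the state object draw with z tests

-- Once 'scene' finishes drawing, the depth state restores whatever GL
-- settings it remembered, so ordinary 2D rendering is unaffected.
```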

This could be done differently, say by wiring state changes into effects instead, but it seemed ultimately more flexible to me. Time will tell, I guess.

The matrices used here aren’t strictly necessary if you’re directly calculating world positions.

The “customization” can involve a few things (the instancing sample adds a custom property, for instance), but here it only involves a few minor shader rewrites. Basically, every shader, including the default, is molded from this template. But several details there are specifically 2D, so some changes are applied by the transformation defined here.

These modification-only scenarios would actually lend themselves to a Lua API. I’ll look into that.

There’s a weird quirk that came up while testing meshes / display objects (the first objects where “regular” drawing was wanted, versus just updating some state): if you disable culling and hit tests (in the 2D sense), the object never actually gets validated and thus never drawn. At the moment I’m just not doing the disables, but I’ll try some things to address this.


Very interesting. You’re giving me an itch with this stuff…

Should be done with the new marketplace today - just polishing off a few bits before launch now. I’m absolutely definitely making time for 3D again next!
