3D engine... Much interest?

It’s not a Corona limitation as such. Just that .obj files are plain text and reasonably documented so an importer can be built around them fairly easily, and most modelling software can export to them. I’d be happy to look at importing other formats too if the initial release gains traction.

Importing 2000 models would be an interesting test. Currently the way worlds are set up isn’t particularly optimised, but the goal is to have them work much like in Qiso, where everything outside of screen space is ignored. So as long as you’re not trying to actually display them all at once, it should just be a case of the device needing enough memory to hold that much data.

The trick, in case you’re interested, is arrays. Rather than iterating the entire world of data looking for elements that fit within the screen boundaries, my engines structure everything into arrays and then reverse-project each screen corner to very quickly work out which part of those arrays needs iterating for on-screen content. That way it doesn’t matter how large the arrays get: their size doesn’t impact rendering performance. How much is actually on screen at the same time does, but not how big the world itself is.
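To illustrate the idea, here’s a minimal Lua sketch of that reverse-projection culling, assuming a simple tile grid world; the function name and parameters are my own inventions, not Qiso’s actual API:

```lua
-- Sketch: a world stored as a grid of tiles, rows[row][col].
-- Instead of scanning every tile, convert the screen corners back
-- into grid coordinates and only iterate that sub-range.
local function visibleRange(cameraX, cameraY, screenW, screenH, tileSize, worldCols, worldRows)
  -- Reverse-project the top-left and bottom-right screen corners into grid space.
  local minCol = math.max(1, math.floor(cameraX / tileSize) + 1)
  local minRow = math.max(1, math.floor(cameraY / tileSize) + 1)
  local maxCol = math.min(worldCols, math.ceil((cameraX + screenW) / tileSize))
  local maxRow = math.min(worldRows, math.ceil((cameraY + screenH) / tileSize))
  return minCol, minRow, maxCol, maxRow
end

-- Only these indices get touched per frame, so world size doesn't
-- affect render cost -- only how much is on screen does.
local minCol, minRow, maxCol, maxRow = visibleRange(512, 256, 960, 640, 64, 1000, 1000)
-- then iterate rows[minRow..maxRow][minCol..maxCol] and draw those tiles
```

The key property: the loop bounds come from arithmetic on the camera position, never from a search over world data, so a 100×100 world and a 10,000×10,000 world cost the same per frame.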

I would love to see a 3D Corona engine! I’ve tried with some engines, and by far Corona is my favorite because it is lite, very fast, and lua is easy to use. And it would be awesome to see those features in a 3D engine.

Hi, I am quite new to Solar2D and stumbled upon this thread …

For some of my plans the option to use simple 3D objects would be perfect. Can this 3D engine already be tried out? Or are there other similar solutions?

Thanx,
Christian

I’m afraid it’s been a while since I had chance to work on the 3D engine. The fundamentals are all in - objects and the camera can be moved around and rotated, face culling is in place, basic lighting is in place, etc. But currently only cubes exist - there’s no functionality to create other shapes or to import meshes, although the renderer would already be able to display them if they existed in the world.

This project is definitely still happening. As soon as I get back on to it I’ll focus on importing .obj files so that an initial beta can be released. It’ll definitely be far from optimised and until it has the ability to handle intersecting faces, it’s not going to be useful for doing anything more than displaying a few 3D objects, but other than that it should be usable.

Also now toying with the idea of building a voxel engine too for the record :smirk:. One thing at a time huh.

Thank you for this information, so I keep on hoping …

For games with 3D elements I can recommend Defold or Godot, if Unity and Unreal are overkill.

Keep up the good work… I’m interested in 3D - I’m currently working on a retro first-person grid-based RPG (Dungeon Master, Eye of the Beholder style) using a pseudo-3D projection.
It’s still over six months away from finishing (been at it for about a year now), so I’m almost ready to show you guys.

It’s not true 3D - more an illusion of 3D - but I’m happy with the results.
I’d definitely consider a sequel in a proper 3D engine.

The term “pseudo 3D” always puzzles me slightly. Screens are still a flat plane - all 3D rendering is pseudo projection really - just some approaches equate to more accurate results than others =).

Pseudo 3D or 2.5D really just refers to cases where 2D games are faked to look like 3D games.

For instance, all images in Clash of Clans seem 3D, they are from 3D models after all, but in the end they are all just 2D renders stuffed inside a sprite sheet. Regardless of how good a fake it is, a 2D game will only ever have x and y-axes.

“Well, you could technically fake a z-axis in a 2D game…” and that’s the entire point. Pseudo-3D or 2.5D games don’t have a true z-axis; they can have a faked one at best.
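For what a “faked z-axis” might look like in practice, here’s a small Lua sketch, purely illustrative and with made-up names: a 2D sprite is scaled and shifted toward a horizon line based on an invented depth value.

```lua
-- Sketch: faking depth in a 2D engine. "z" isn't a real axis here,
-- just a number that shrinks the sprite toward a horizon line.
local function fakeDepth(x, y, z, horizonY, focal)
  local scale = focal / (focal + z)          -- larger z => smaller sprite
  local screenX = x * scale
  local screenY = horizonY + (y - horizonY) * scale  -- drift toward horizon
  return screenX, screenY, scale
end

local sx0, sy0, s0 = fakeDepth(100, 400, 0, 200, 300)    -- z = 0: unchanged
local sx1, sy1, s1 = fakeDepth(100, 400, 300, 200, 300)  -- "further away"
```

The engine never stores or sorts real 3D positions; it just perturbs 2D draws, which is exactly why it’s “pseudo”.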

I think it’s pseudo once you start doing 3D projections/illusions on top of already processed 2D projections…

That’s kind of my point though. The engine I’m making has x, y, z - three variants of it, in fact. The world is an x,y,z grid; individual objects have internal x,y,z space relative to their centres; and the camera has x,y,z space relative to itself, where z is how deep into the screen you’re looking. Programmatically, in this regard, it’s ‘proper’ 3D. But beyond that it’s still just drawing 2D shapes at projected x,y coordinates on a flat screen, creating the illusion of a 3D environment. It’s not really actual 3D, is it? Just maths and perspective trickery. Corona/Solar2D is, after all, a 2D kit.
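The final projection step described here is standard perspective division. A minimal Lua sketch, with my own names and assuming the point has already been transformed into camera space (x right, y down, z into the screen):

```lua
-- Sketch: collapse a camera-space 3D point onto the flat 2D screen
-- by dividing by its depth. This is the "perspective trickery" step.
local function project(px, py, pz, focalLength, screenCX, screenCY)
  if pz <= 0 then return nil end  -- behind the camera: nothing to draw
  local sx = screenCX + (px * focalLength) / pz
  local sy = screenCY + (py * focalLength) / pz
  return sx, sy
end

-- A point 2 units right, 1 unit down, 4 units deep, screen centred at (240, 160):
local sx, sy = project(2, 1, 4, 200, 240, 160)
```

Everything up to this call can be genuinely 3D maths; the screen coordinates that come out are what a 2D kit like Solar2D actually draws.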

The same could be said for 3D modelling software. Blender for example. It’s doing a far better job of it than I could but at the end of the day it’s still just taking pseudo world coordinates and projecting those to create something that looks 3D on a 2D screen. The lighting is better and it does all sorts of fancier maths to create reflections and whatnot, but at the core it’s still just maths and perspective trickery. Your screen is still x,y space.

So, take a 3D render from Blender and use that as a backdrop in a game. Take another 360 renders of a character to create a 2D sprite that can move around over that backdrop, creating the illusion of a 3D scene from just two 2D image files and a bit of maths - and what do you have, 2D or 3D? We’d all call this fake, including me, but my line would be blurry. The grunt work was preprocessed, creating an image that can just be dropped right in rather than having to do that work on the fly in-game, but either way you’re looking at a 3D environment. You’re moving a character around a world that has some perspective to it, and you’re able to turn that character around to see all sides. It’s technically just as 3D as anything else, but less resource intensive. More efficient.

It would be great to be able to control a camera x/y/z and to be able to work with .fbx or .obj files. I agree a pseudo 3D would be just fantastic without having to push the engine too far.

I do love that this thread is still active :grinning:. Definitely picking back up on this as soon as I’ve gotten the new marketplace launched. Shouldn’t be long now…

That’s good news Richard, any move to some kind of 3D and perspective views would be a great move forward for the engine.

@richard11 - even a simple 3D engine with a coordinate system that calculates how to view a 2D world with real 3D perspectives and lighting would be awesome! That should be the first step, IMHO. Until that’s working, I wouldn’t worry about more complex rendering.

@richard11 At long last, I’ve got 3D objects up and running with hardware depth buffer support and integrated into the display hierarchy, details here. Only tested on Windows, but I’ll see about Mac in a couple days. (As for the others, I still need to learn the respective processes.)

Apart from testing and such, I might take a swing at a Vulkan backend before finally submitting the PR, just to tease out any major GL-centric assumptions it might be making, but could maybe put builds up somewhere. In any case, the code can be forked in the meantime.

Also, the 3D things I do have are pretty low-level, basically at the “building blocks” level. So it’s stuff you could build on top of in your own case, rather than a competing idea.

Brilliant work. I’m guessing the video you made explains everything, so I’ll jump on that asap - I don’t currently have enough time to sit and watch, but I’m sure I’ll get a chance at some point today.

I see you’ve thrown some depth stuff in there, which is likely the part I’m most interested in. I’m definitely intrigued at least, and hopeful this is something I can make use of.

@StarCrunch I’ve watched the vid now. Living on an F1 race track must be interesting, ha.

As usual with your work, I’ve only loosely followed what you’re doing to be honest. This stuff clearly makes sense in your head but in words I was lost within minutes… probably because I don’t speak OpenGL.

I think I followed what you were trying to demonstrate with the stencil buffer though, and your 3D vector example at the end looks fantastic. I think you’re basically doing the same rendering approach as I am, but through OpenGL directly where I’m having to translate the 3D vertex locations back into 2D screen positions.

For me to implement z-buffering within the renderer would mean iterating each pixel of the rendered shape, calculating a pseudo z value for that pixel based on its proximity to the defined vertices and their own z’s, and then checking that figure against my own buffer to see whether there’s already a drawn pixel with a closer z - punching holes in the current shape if so, and writing the new z to the buffer if not. This is where I’m going to struggle with performance.
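The per-pixel check described above boils down to a very small amount of logic (the cost is purely that it runs once per pixel). A minimal Lua sketch of the buffer itself, with the depth-interpolation step left out and all names my own:

```lua
-- Sketch: a software z-buffer. Each cell starts "infinitely far";
-- a pixel is drawn only if its depth is nearer than what's recorded.
local function makeZBuffer(w, h)
  local buf = {}
  for i = 1, w * h do buf[i] = math.huge end
  return buf
end

-- Returns true if pixel (x, y) at depth z should be drawn,
-- recording its depth; false means "punch a hole" in the shape.
local function plot(buf, w, x, y, z)
  local i = (y - 1) * w + x
  if z < buf[i] then
    buf[i] = z
    return true
  end
  return false
end

local buf = makeZBuffer(4, 4)
local a = plot(buf, 4, 2, 2, 5)  -- nothing there yet: draw
local b = plot(buf, 4, 2, 2, 7)  -- further than 5: hole
local c = plot(buf, 4, 2, 2, 3)  -- nearer than 5: draw over
```

Doing this in Lua per pixel per frame is exactly the performance wall mentioned, which is why handing it to the GPU matters.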

If you’re passing the 3D positions of each vertex to OpenGL though, and OpenGL already does z buffering at a hardware level, does this mean that your shapes will just automagically inherit per-pixel checks and effectively intersecting faces will just work out of the box?

If that’s correct then this plugs the hole perfectly. If you can throw together some more Lua-side examples of how to actually use this stuff, that would be brilliant. If I’ve understood enough of this, I don’t think it would be too tricky to rework my engine to pass 3D polygons instead of translating them back into 2D, and handing the z-buffer work over to OpenGL should be a performance leap.

Quite excited by this. Thanks!

@richard11 Yep, the bit about the hardware z checks is the key point.

Attached is what I was using in that second video: main.lua (6.2 KB)

I added in a quick bit with polygons too. Actually, I’ve been thinking these past few days that those were mysteriously not working, but my test case here was just at a right angle to the view direction, more or less. After some rotation, there it was! sigh

The “scoped group” and “depth state” are some of the novelties. The scoped group is a display group but also broadcasts a message to its children before and after drawing. The depth state is a consumer of those messages, that says “remember what the GL state is now” and then “restore what I remember”. The depth state is a “display” object, and when it “draws” will update the GL state according to whatever properties it has assigned.

This could be done differently, say by wiring state changes into effects instead, but it seemed ultimately more flexible to me. Time will tell, I guess.

The matrices used here aren’t strictly necessary if you’re directly calculating world positions.

The “customization” can involve a few things (the instancing sample adds a custom property), but here it only involves a few minor shader rewrites. Basically, every shader including the default is molded from this template. But several details there are specifically 2D. So some changes are applied by the transformation defined here.

These modification-only scenarios would actually lend themselves to a Lua API. I’ll look into that.

There’s a weird quirk that came up while testing meshes / display objects (the first objects where “regular” drawing was wanted, versus updating some state), in that if you disable culling and hit tests (in the 2D sense), the object never actually gets validated and thus drawn. At the moment I’m just not doing the disables, but will try some things to address this.

Very interesting. You’re giving me an itch with this stuff…

Should be done with the new marketplace today - just polishing off a few bits before launch now. I’m absolutely definitely making time for 3D again next!
