That’s kind of my point though. The engine I’m making has x,y,z coordinates… three variants of them, in fact. The world is an x,y,z grid, individual objects have internal x,y,z space relative to their centres, and the camera has x,y,z space relative to itself, where z is how deep into the screen you’re looking. Programmatically, in this regard, it’s ‘proper’ 3D. But beyond that it’s still just drawing 2D shapes at projected x,y coordinates on a flat screen, creating the illusion of a 3D environment. It’s not really actual 3D, is it? Just maths and perspective trickery. Corona/Solar2D is, after all, a 2D kit.
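That projection step, camera-space x,y,z down to screen x,y, is simple enough to sketch. This is a minimal illustration in Python rather than Solar2D’s Lua, and the function names and parameters (`project`, `fov_deg`, the screen size) are my own, not anything from the engine:

```python
import math

def project(x, y, z, fov_deg=60, screen_w=640, screen_h=480):
    """Project a camera-space point (x, y, z) to 2D screen coordinates
    using plain perspective division. z is depth into the screen:
    larger z means farther away, so things shrink toward the centre."""
    if z <= 0:
        return None  # point is behind the camera; nothing to draw
    # Focal length derived from the horizontal field of view.
    focal = (screen_w / 2) / math.tan(math.radians(fov_deg) / 2)
    sx = screen_w / 2 + x * focal / z
    sy = screen_h / 2 - y * focal / z  # screen y grows downward
    return sx, sy
```

Divide by depth, offset to the screen centre, and that’s the whole trick: everything else (object-local space, camera space) is just translations and rotations applied before this one division.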
The same could be said for 3D modelling software. Blender, for example. It’s doing a far better job of it than I could, but at the end of the day it’s still just taking pseudo world coordinates and projecting them to create something that looks 3D on a 2D screen. The lighting is better and it does all sorts of fancier maths to create reflections and whatnot, but at its core it’s still just maths and perspective trickery. Your screen is still x,y space.
So, take a 3D render from Blender and use that as a backdrop in a game. Take another 360 renders of a character to create a 2D sprite that can move around overlaying that backdrop, creating the illusion of a 3D scene from just two 2D image files and a bit of maths, and what do you have: 2D or 3D? We’d all call this fake, including me, but my line would be blurry. The grunt work was done in preprocessing, creating images that can just be dropped right in rather than having to do that work on the fly in-game, but either way you’re looking at a 3D environment. You’re moving a character around a world that has some perspective to it, and you’re able to turn that character around to see all sides. It’s technically just as 3D as anything else, but less resource intensive. More efficient.
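Picking which of those 360 pre-rendered frames to draw is the only runtime work left. A minimal sketch, again in Python with names of my own invention (`frame_for_angle`), assuming the renders are spaced evenly around a full turn:

```python
def frame_for_angle(character_facing, camera_angle, num_frames=360):
    """Pick which pre-rendered frame to show, given the character's
    facing direction and the camera's viewing angle, both in degrees.
    Assumes num_frames renders spaced evenly around a full rotation."""
    relative = (character_facing - camera_angle) % 360
    step = 360 / num_frames
    # Round to the nearest render; the modulo handles the wraparound
    # where an angle just under 360 snaps back to frame 0.
    return round(relative / step) % num_frames
```

With 360 frames it’s effectively one frame per degree; drop `num_frames` to 8 or 16 and the same lookup gives the classic Doom-style eight-direction sprite.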