Experimental daily build: renderer features

I had several pull requests pending, some of them quite significant, and @vlads and I finally got around to bringing them together. Since there’s a lot going on, in particular on the rendering path, we agreed to package it as an “experimental” build rather than release it as the most recent version, although it should otherwise be up to date: experimental build.

The installers are available (zipped) as Simulator-macOS and Simulator-Windows.

A few people who built from source have tried it. I’d appreciate anybody else willing to give it a try and report any issues that arise. I’ll give a summary below of what the build adds, but mostly I just want to confirm that it doesn’t upset any current behavior. :smiley:

Here are the tests used in the GIFs that follow: STUFF.zip (2.6 MB)

There is also some very rough documentation included.

The plugin source is here.

Capture Texture (original PR)


Capture textures let you add an “event” to the display hierarchy, just like you would a normal object. You can add it to groups, move it to the front and back, etc. When it “happens”, everything in its rectangular region is captured, and you can use the results in another display object, as input to a shader, etc.

In the GIF above I’m capturing the part in the green rect. As you can see, it gets fed to other objects. There are also captures on top of that: you see, for instance, the green outline showing up in another result.

On Discord @Kan98 showed some impressive results he got with it.

Vulkan backend (original PR)

This provides an opt-in Vulkan backend on Windows. (A few other platforms could follow, later.)

I ran it through all the examples back when it was submitted, and it basically agreed with the GL backend in all but a couple of shaders (minor differences in the mosaic-y one, for instance).

It should have feature parity with everything before this PR, but doesn’t yet recognize capture textures or the “custom objects” described below.

This was originally done with the idea that we might be able to also use MoltenVK on the Apple side, when the issue of Metal adoption first came up. That was done another way, but this came out of it anyhow. :slight_smile:

Custom objects

Timing concerns arise when trying to do fancy graphics, and these have held back some plugin ideas. Think of the “always on top” issues with certain native controls: basically, you can’t add such things into the display hierarchy, so you can’t order them well.

The custom object machinery lets us create variants of the built-in display objects, while augmenting or suppressing some of their events, e.g. drawing or hit tests. We can, for instance, create “event” objects like we did for captures, or track more information with a group.

Apart from that, a lot of Solar’s render architecture is expanded: the underlying commands, shader events, etc.

I made some test plugins for this stuff. They aren’t exactly whiz-bang impressive, but they were meant to kick the tires of the features.

Test (and plugin) #1 is pretty boring, and just tests the custom objects machinery. It “draws” by printing (a lot!) to the console. (original PR)

Test (and plugin) #2 demonstrates custom commands. (original PR)

When an object “draws” here, it emits some commands that, when processed, do actual OpenGL calls, in particular masking out a color channel:


Test (and plugin) #3 tests a few things: effect data types, shell transforms, and extra uniforms. (original PR)

An effect data type is something you will only see on the C++ side. It lets you intercept some Solar “effect data” commands and modify them. In this test, for instance, we only have one object, but we can fluff it up into three and inject an index to differentiate them:


A shell transform lets us make adjustments to Solar’s shader shell, so that our kernel can use new values or change values’ types, such as our index here.

The other two tests with that same plugin are variants on a Hermite curve:


The points and tangents have adjacency, so that given an index we can look them up in a list in the shader, rather than pass them all in per segment of the curve. This depends on exposing the swath of uniforms that Solar leaves on the floor at the moment, and some shader events are used to synchronize them.

Test (and plugin) #4 tests the addition of z-coordinates. (original PR)

A model is loaded and spun around. 2D objects are also in the same scene and hierarchy.


This is using the same ideas as before for color masks, but messing with the depth buffer instead. Also, the shell is transformed to open up the z-values. (The vertex format already provided them, but didn’t expose them.)

Test #5 continues with the previous plugin. (original PR)


Here, we add custom members to our object’s vertices. Normally, each Solar vertex has a position, texture coordinates, a color, and vertex userdata. However, here the 3D model is also given normals.

It also performs hardware instancing, when available. (Probably not, on mobile.)

There are also 2D examples of the same, though they mostly just emulate Solar features. (Couldn’t think of anything else. :smiley: )

Transparent Windows (original PR)

On Mac your application can have a transparent background.

Memory Policy (original PR)

This is also plugin machinery. It grew out of a policy I was using in my own plugins, but with some improvements. It’s basically a way for plugins to talk to each other when they need to serve or use binary data.

I mentioned here that Nuklear was pretty slow using vanilla display objects for all the little pieces. In this PR I allowed you to draw from a memory buffer, and as this build was coming together I integrated the memory policy.

My private copy of the Nuklear plugin is quite smooth now. :smiley:

Also, I did some bug- and leak-hunting.


Very cool stuff! I don’t think I’d personally use these, but it’s great to see the community still has some life to it and is coming up with cool stuff like this :slight_smile: Hope you keep it up and continue using Solar2D!
