Hi everyone,
I’ve released plugin.onnxruntime, a native C plugin that brings ONNX Runtime (https://onnxruntime.ai/) inference to Solar2D. Load and run any ONNX model directly in your app: style transfer, image classification, text-to-speech, or any other ML model exported to the ONNX format.
GitHub: https://github.com/labolado/solar2d-plugin-onnxruntime
Features
- Cross-platform: iOS (with CoreML acceleration), Android, macOS Simulator, Windows Simulator*
- Simple API: ort.load(path) → session:run(inputs) → done
- Flexible I/O: Lua tables or binary strings for tensor data, supports float32 and int64
*Windows Simulator build is included but has not been tested on a real Windows machine. Feedback welcome.
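Since tensor data can be a binary string as well as a Lua table, large inputs can skip table construction entirely. A minimal sketch, assuming a session obtained via ort.load as in the Quick Example below, and a file name ("input.bin") of my choosing that holds raw little-endian float32 values:

```lua
-- Sketch: feed a tensor as a raw binary string instead of a Lua table.
-- "input.bin" is a hypothetical file containing 1*3*224*224 float32 values.
local path = system.pathForFile("input.bin", system.ResourceDirectory)
local f = assert(io.open(path, "rb"))
local bytes = f:read("*a")   -- read the whole file as a binary string
f:close()

local output = session:run({
    input = { dims = {1, 3, 224, 224}, data = bytes }
})
```

For camera frames or preprocessed images this avoids building a ~150k-element Lua table per inference.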
Quick Example
local ort = require("plugin.onnxruntime")

local session = ort.load(system.pathForFile("model.onnx", system.ResourceDirectory))

local output = session:run({
    input = { dims = {1, 3, 224, 224}, data = imagePixels }
})

print(output.output1.data[1])
session:close()
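For a classification model, the interesting part of the output is usually the highest-scoring index. A plain-Lua sketch, assuming output.output1.data (the name from the example above) is a flat table of class scores:

```lua
-- Sketch: find the top class in a classifier's output tensor.
local scores = output.output1.data
local bestIndex, bestScore = 1, scores[1]
for i = 2, #scores do
    if scores[i] > bestScore then
        bestIndex, bestScore = i, scores[i]
    end
end
print("predicted class:", bestIndex, "score:", bestScore)
```

Note that Lua tables are 1-indexed, so bestIndex may be off by one relative to a model’s 0-based label file.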
Demo App
The included example demonstrates:
- Style Transfer (Candy / Mosaic) — transform photos using neural style models
- Text-to-Speech — synthesize speech from phoneme tokens
Tested on iPhone 17 Pro Max (iOS), Huawei ANA-AN00 (Android), and macOS Simulator.
