Suggestions for tutorials / demos for G2?

I’m hopefully going to have the time over the next few weeks to start churning out simple demos, likely with source, or possibly even tutorials, for various things. I’m just curious as to what interests people. So far my meager supply of ideas is as follows:

  • General 3D (how to manipulate something on screen; the pros and cons of 3D in Corona, etc.)

  • Mode 7 (already have zee demo!)

  • Snapshots / Double buffering and uses (create your own filters and/or art packages)

  • Lighting effects (although I need to play with the filters etc to check if it does what I want)

  • Explanations of how I create the 3D in Retro Racer and Dungeoneer, and optimisations / shortcuts.

  • Skyboxes

Told you it was meager…

So - is there anything obvious that people are crying out for? Bear in mind it doesn’t mean I’d do them, as my own field of interest isn’t particularly broad (mostly higher-end graphical shenanigans), but it certainly can’t hurt to receive suggestions and feedback.

It would be great if you or Corona employees put together an all-in-one demo for Graphics 2.0.

The app should allow the user to select various effects (filters, 2D effects, etc.) and any customizable properties of each effect.

That way developers can understand the possibilities and the effect of tweaking related parameters.

At this point I have to run one demo app for each effect and tweak the code manually, and it can be difficult to combine effects to see what happens.

So this could become an interactive documentation tool for understanding the new capabilities.

Thanks for taking the lead on developing a demo.

Vasant
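
In sketch form, such a browser could be as simple as cycling one test image through a table of effect names on tap (untested; "test.png" and the effect list are placeholders, and the real filter names should come from the Graphics 2.0 docs):

    -- Cycle a test image through a few Graphics 2.0 filter effects on tap.
    local effects = { "filter.grayscale", "filter.sepia", "filter.invert", "filter.blurGaussian" }
    local index = 0

    local image = display.newImageRect( "test.png", 300, 300 )
    image.x, image.y = display.contentCenterX, display.contentCenterY

    local label = display.newText( "tap to change effect", display.contentCenterX, 40, native.systemFont, 18 )

    local function onTap()
        index = ( index % #effects ) + 1
        image.fill.effect = effects[ index ]
        label.text = effects[ index ]
    end

    Runtime:addEventListener( "tap", onTap )

A full browser would add controls for each effect’s parameters, which is roughly what the filter-browser threads linked in the next reply already do.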

I think someone has already come up with a demo for the filters and their properties, no?

Here: http://forums.coronalabs.com/topic/40453-filter-browser-for-graphics-20/

or here: http://forums.coronalabs.com/topic/40390-g2-filter-helper-module/

An inventory for a point-and-click game would be interesting.

Sorry, I should have been more specific - I meant Graphics 2.0-specific features.

Actually Corona have been churning out plenty of tutorials themselves, so my possibilities have narrowed dramatically over the last few weeks, but that is only a good thing :slight_smile:

  • General 3D: I would love to see the tricky parts of the performance and logic that you have figured out.

Hi.

I have a sample (in 1.0) where I do some stuff and take a capture. Then I modify that a bit (basically the capture is now just another object in the scene), move some stuff around, and capture again. And so on. Sort of a poor man’s accumulation buffer.

It gives a nice full-scene slow-mo effect, at least on the simulator. (Alas, it runs like a sick dog on my Nexus One. I’ve never remembered to try it when I had something else on hand.)

I suspect a couple snapshots could be ping-ponged, feeding one into some filter and then swapping roles on the next pass, to achieve the same effect.

This is probably pretty easy, but I’m still just getting started for the time being, so I haven’t tested the idea myself. (Not TOO long, I hope. I mean to go to town on things procedural…)

For reference, the accumulator part is in this timer (everything else is the graphics fluff):

https://github.com/ggcrunchy/corona-sdk-snippets/blob/master/samples/SlowMo.lua#L92
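
In sketch form, that capture-and-reinsert loop might look something like this (simplified and untested; the real version is in the SlowMo.lua timer linked above):

    -- Poor man's accumulation buffer: periodically capture the scene group,
    -- fade the capture, and push it behind the live objects so the next
    -- capture includes the ghosted history.
    local scene = display.newGroup()
    -- ... build the live scene inside `scene` ...

    local lastGhost

    local function accumulate()
        local ghost = display.capture( scene )   -- includes the previous ghost
        if lastGhost then
            lastGhost:removeSelf()               -- keep only one accumulated layer
        end
        ghost.x, ghost.y = display.contentCenterX, display.contentCenterY
        ghost.alpha = 0.85                       -- fade the history a little
        scene:insert( 1, ghost )                 -- behind the live objects
        lastGhost = ghost
    end

    timer.performWithDelay( 50, accumulate, 0 )  -- 0 = repeat indefinitely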

I’m in talks with Brent now about a couple of tutorials - a mode 7 one and the snapshots Part 2 (the first half of my link above was used for the first snapshot tutorial, but I’m itching to do a part 2 which is more in-depth and concentrates exclusively on the double buffering method I describe).

You are certainly on the right track regarding what you want to do, but do be aware that double buffering is one of those things that you *must* be able to recreate yourself at will, as the current canvas etc model breaks down when an app sleeps and the VRAM gets freed up. If you are doing a dynamic effect (such as in-game motion blur) then this will automatically recreate itself after a few frames, but you can’t use double buffering to set up a scene and just assume that it will store the snapshot correctly between suspending / sleeping and resuming.
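
One way to structure that: keep the drawing wrapped in a function you can call at will, and call it again when the app resumes (sketch only; drawSceneInto() is a hypothetical stand-in for whatever builds the snapshot’s contents):

    -- Snapshot contents live in texture memory and can be lost on suspend,
    -- so keep the set-up in a function that can rebuild the snapshot at will.
    local snapshot = display.newSnapshot( display.contentWidth, display.contentHeight )
    snapshot.x, snapshot.y = display.contentCenterX, display.contentCenterY

    local function rebuildSnapshot()
        drawSceneInto( snapshot.group )   -- hypothetical helper: redo the original drawing
        snapshot:invalidate()
    end

    rebuildSnapshot()

    local function onSystemEvent( event )
        if event.type == "applicationResume" then
            rebuildSnapshot()   -- the texture may have been discarded while suspended
        end
    end

    Runtime:addEventListener( "system", onSystemEvent )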

What will help with that particular case, and it is on the roadmap but without a timescale, is the ability to save a snapshot’s image to a file (so you can reload it as needed).

Ah, I read “double buffering” earlier and my mind just lumped that into the technique for reducing tearing (with some puzzlement about how that even applies  :) ), so I hadn’t realized this was already in your list. Sorry for the noise!

I have more than a passing acquaintance with context loss, unfortunately! (In other platforms, though.) In the particular scenario I mentioned, it’s probably easy enough to fall back to capture-to-file and then serve that right back up.
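
That fallback could be as simple as the pair of helpers below (the file name and directory are placeholders):

    -- Capture-to-file fallback: save the composed scene to disk, then serve the
    -- saved image back up rather than relying on a snapshot surviving a suspend.
    local function saveScene( group )
        display.save( group, { filename = "scene.png", baseDir = system.DocumentsDirectory } )
    end

    local function restoreScene()
        local image = display.newImage( "scene.png", system.DocumentsDirectory )
        image.x, image.y = display.contentCenterX, display.contentCenterY
        return image
    end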

Anyhow, if you (or anybody else) are still looking for ideas, a few more spring to mind. Apologies if they’ve been done!

  • Reflective surfaces, e.g. mirrors and water.

  • Outlines / halos / etc. around objects. (Would it be too weird to call them coronas?) Bright lamps, unit selections, etc.

  • Snapshots, and/or certain blend modes (“dst” or “dstIn”, say, not sure which does what), seem a good fit for things like heat hazes or a Predator effect. I’ve done the latter using similar mechanisms (in Unity + HLSL).

Oooooh, glows and stuff I hadn’t even thought of. There’s likely a way of using filters or something to get a high-pass filter (i.e. take an image and end up with only its brightest parts) that you could use as a basis for coronas etc. Good idea :slight_smile:
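
As a rough sketch of that idea: layer a filtered copy of the image over the original with additive blending, so only its bright areas flare. This assumes the built-in filter.bloom effect behaves as a blurred high-pass (its parameters would need tuning), and "lamp.png" is a placeholder image:

    -- Possible glow / "corona" approach: a bloom-filtered copy of the image,
    -- drawn over the original with additive blending.
    local base = display.newImageRect( "lamp.png", 200, 200 )
    base.x, base.y = display.contentCenterX, display.contentCenterY

    local glow = display.newImageRect( "lamp.png", 200, 200 )
    glow.x, glow.y = base.x, base.y
    glow.fill.effect = "filter.bloom"   -- tune its levels / blur parameters to taste
    glow.blendMode = "add"              -- additive blend so only bright areas pile up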
