From The Blog: Corona Geek #181 – Creating Destroyable Game Environments

On today’s Corona Geek, Ed Maurina answered a recent forum question on how to create destroyable game environments. Ed demonstrated three possible solutions with varying degrees of sophistication. The panelists also had some great ideas to share around using shaders, and even excluding use of the physics engine altogether.

Download the source code for today’s demo.

Ed Maurina is getting ready to close the door on beta testers for his new tool to automatically generate common types of starter game templates. Don’t get left out, sign up now to be a beta tester and help shape a development tool that will benefit the whole community.


You talked about this kind of technique at the end of the video: masking away the landscape where the bullets impact. I created a short video to show this: https://youtu.be/Yj0ku1YzV7k

The main principle is only a few lines of code:

local snapshotMask = display.newSnapshot( width, height )
local hills = display.newImageRect( snapshotMask.group, "img/hills.png", 1203, 562 )
local damage = display.newCircle( 0, 0, 100 )
damage.blendMode = "dstOut"
snapshotMask.group:insert( damage )
  1. Create a snapshot

  2. Add the landscape image to the snapshot.group

  3. Add cutouts by adding further display objects to the snapshot.group with the Porter-Duff blendMode set to “dstOut”

In the example I put step 3 into a function called Damage(x,y) that gets called by a mouse click event and passes the mouse position to that function’s x, y parameters.

Don’t forget to call snapshotMask:invalidate() to update the snapshot like this:

local function Damage( screenX, screenY )
    local posX = screenX - _W2
    local posY = screenY - _H2
    local damage = display.newCircle( posX, posY, bulletSize )
    damage.blendMode = "dstOut"
    snapshotMask.group:insert( damage )
    snapshotMask:invalidate()
end
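For completeness, hooking Damage() up to input could be as simple as a Runtime tap listener. This is just a sketch; it assumes the snapshotMask, _W2/_H2, and bulletSize values from the snippets above are in scope:

```lua
-- Hypothetical wiring: call Damage() wherever the player taps/clicks.
-- Corona tap events carry the content-space position in event.x / event.y.
local function onTap( event )
    Damage( event.x, event.y )
    return true  -- mark the event as handled
end

Runtime:addEventListener( "tap", onTap )
```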

But this might be a one-way approach when it comes to collision detection. The only idea I have for a collision system would be using a collision mask, which means we would need to check pixel color values in a game loop. But as far as I have read, the colorSample function is too slow. Any other experience or ideas?

I suspect graphically this is by and large what Worms did. (I’m more doubtful with Scorched Earth.) On the collision side of things I imagine they were maintaining a quadtree or k-d tree, or something like them. (This information might very well be out there.) This would allow for sparsity (back in an era of almost unthinkable RAM budgets  :D), provide for collision with fairly simple response needs, and also permit the AI to do “speculative” collision (line-of-sight tests, checking trajectories for a clear shot, area effects, etc.), while keeping the search neighborhood manageable. But this is all a guess.

As far as “other experience”, my own (unfortunately probably cryptic) response on the show was based on a sort-of-related technique I’d developed, either before the Porter-Duff stuff hit or at least when Pro wasn’t yet free (and certainly before shaders landed, to say nothing of the recent texture canvas resource). The basic idea was to assign each tile a 16-bit pattern, with those bits indicating which parts of the tile were visible. I baked these, in the form of 4x4 black-or-white blocks (one block per bit), into a mask assigned to every tile, centering it in each case to align with the appropriate “mask tile” at the moment.

4x4 was the largest square power-of-2 shape that was sane: there are 2^16 - 1 (just shy of 64K) possibilities, but by eliminating isolated blocks or 1-block-thick filaments (something in this spirit), it can be reduced to 235 or thereabouts. The offending tiles are simply rehashed to the nearest match, i.e. with any stray block / filament removed.
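The pattern bookkeeping described above might look something like this; a minimal sketch with illustrative names (the original implementation surely differs), using LuaJIT’s bit library, which Corona bundles:

```lua
local bit = require( "bit" )  -- LuaJIT bit operations, available in Corona

-- Each tile's visibility is one 16-bit pattern:
-- bit i corresponds to block (i % 4, floor(i / 4)) of the 4x4 grid.
local FULL_TILE = 0xFFFF  -- all 16 blocks visible

-- Clear one block inside a tile's pattern; col and row are in 0..3.
local function clearBlock( pattern, col, row )
    return bit.band( pattern, bit.bnot( bit.lshift( 1, row * 4 + col ) ) )
end

-- Test whether a given block is still visible.
local function blockVisible( pattern, col, row )
    return bit.band( pattern, bit.lshift( 1, row * 4 + col ) ) ~= 0
end

-- Example: knock out the top-left block of a pristine tile.
local pattern = clearBlock( FULL_TILE, 0, 0 )
```

The reduction step (rehashing patterns with stray blocks or filaments to the nearest valid match) would sit on top of this, mapping raw 16-bit values into the ~235 allowed patterns.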

(The implementation is a bit scattered, but let me know if you’d care to know more.)

64K also happens to be the guaranteed integer range in vertex shaders, so I have since translated the above to a shader technique as well (the reduction step is no longer necessary but could be kept for aesthetics), which I can drop on a tile’s fill instead. (The mask approach is a dog to maintain, so I might just retire it.) That range also means that I can flag the 16 neighbors, which allows for e.g. doing a soft fade on edge pixels. This is still a bit unruly, so at some point I ought to throw a friendlier interface on top of it.

I maintained collision slightly differently (array of booleans), but the information content was basically the same.
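A boolean-array collision store of that sort could be sketched as follows; grid dimensions and names here are purely illustrative:

```lua
-- Hypothetical collision grid: solid[y][x] == true means land at that cell.
local gridWidth, gridHeight = 120, 56  -- illustrative dimensions

local solid = {}
for y = 1, gridHeight do
    solid[y] = {}
    for x = 1, gridWidth do
        solid[y][x] = true  -- start fully solid; carve holes as damage lands
    end
end

-- Remove a circular patch of land, mirroring the visual "dstOut" cutout.
local function carveCircle( cx, cy, r )
    for y = math.max( 1, cy - r ), math.min( gridHeight, cy + r ) do
        for x = math.max( 1, cx - r ), math.min( gridWidth, cx + r ) do
            if ( x - cx ) ^ 2 + ( y - cy ) ^ 2 <= r * r then
                solid[y][x] = false
            end
        end
    end
end
```

Lookups are then plain table reads, so no GPU round-trip is needed during the game loop.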

That said, I don’t know that this is better than the blend mode approach, if that suffices. As I said, this was implemented when that wasn’t conveniently at hand. (Uff, I’d be sad if all the work was for nothing…) I haven’t explored how well the latter allows “repair”, say if you want to restore parts that have been masked out, which was an original need of mine. It might very well be trivial.

Thank you StarCrunch. Good thoughts. I thought again about the whole “Worms game concept” and came to the conclusion that Worms doesn’t need real-time collision checking. The 2D Worms titles are turn-based games.

So I played around with my code and found out: using a collision mask and colorSample() is not fast enough for real time, but it IS fast enough for Worms-like games.

Don’t pin me down to this. This example is only a fast proof of concept. But the result looks promising.

I made a new video. In the example I can destroy the landscape with cannon balls. When I click the mouse button, I first create a collision mask. An image where black color means “nothing here” and any other color means “here is land to collide with”. 

Then I calculate the trajectory path.

Then I go along this trajectory path and check if the color changes in the collision mask with the colorSample() function.

If the color changes - bam - I got the point of impact.

So already before the cannon ball starts to fly, I know the exact point of impact.

Then I let the cannon ball fly (with the exact same formula I used to create the trajectory path above).

When the cannon ball reaches the impact point, I add the damage to the landscape.
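The pre-flight impact search described in the steps above could be sketched like this. The isLand() helper is a hypothetical stand-in for the colorSample() lookup against the collision mask, and all constants are illustrative:

```lua
-- Standard projectile formula; g is in pixels per second squared here.
local g = 980

local function trajectory( x0, y0, vx, vy, t )
    return x0 + vx * t, y0 + vy * t + 0.5 * g * t * t
end

-- Walk the path in small time steps until the mask reports land.
-- isLand(x, y) stands in for the color check against the collision mask.
local function findImpact( x0, y0, vx, vy, isLand )
    for step = 1, 1000 do
        local t = step * 0.01
        local x, y = trajectory( x0, y0, vx, vy, t )
        if isLand( x, y ) then
            return x, y, t  -- point (and time) of impact, known before flight
        end
    end
    return nil  -- flew off without hitting anything
end
```

Because the flying cannon ball later follows the exact same trajectory() formula, the precomputed point and time of impact line up with what the player sees.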

Here is the video:

[media]https://www.youtube.com/watch?v=CT5rg7woPXo[/media]

Oh, and in the end I can even repair the landscape – sorry StarCrunch :slight_smile:

What do you think?

This is pretty cool!

–SonicX278

Nothing to be “sorry” about. It’s great to see.  :slight_smile:

If you have the chance you should come on and describe it. (I think Barry’s got something. We’ll see if I manage anything.)

This is about what I expected. I don’t think it’s even that the call itself is so expensive–though it needs to fetch from the GPU–so much as that it must wait until rendering has happened first. In theory that means several such calls could be coalesced and one (or at most only a few) glReadPixels() done at end of frame, but who knows.

Actually, in case you could do anything with it, I threw out kind of a similar idea in an older post (after the quote block). I’d say a bullet hell is about as “real time” as it gets, but maybe there’s a turn-based application I’m not seeing?

@starcrunch - Worms and Lemmings etc. likely just kept a matching binary version of the graphics (a simple black and white, just think of it as a mask) in RAM, and looked in there for whether something was solid or not. If you go back further, i.e. before graphics cards, they’d have likely just used the actual graphics, since it was just another set of data in normal RAM.

In fact I am now wishing for a simple straight memory allocation in Corona to be able to do it, since the alternative is a huge table (you can compress by having one value represent multiple pixels using bit operations), and I’m not sure how that’d affect memory and look-up speed.
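The bit-packed variant mentioned above could look like this, storing 32 cells per integer with LuaJIT’s bit library (which Corona bundles); the names are illustrative:

```lua
local bit = require( "bit" )  -- LuaJIT bit operations, available in Corona

-- One 32-bit integer holds 32 cells; words[w] is the w-th packed chunk.
local words = {}

-- i is a 0-based cell index (e.g. y * width + x for a flattened grid).
local function setSolid( i, isSolid )
    local w, b = math.floor( i / 32 ) + 1, i % 32
    local mask = bit.lshift( 1, b )
    local word = words[w] or 0
    if isSolid then
        words[w] = bit.bor( word, mask )
    else
        words[w] = bit.band( word, bit.bnot( mask ) )
    end
end

local function isSolid( i )
    local w, b = math.floor( i / 32 ) + 1, i % 32
    return bit.band( words[w] or 0, bit.lshift( 1, b ) ) ~= 0
end
```

This cuts the table to 1/32nd of the one-boolean-per-pixel size, at the cost of a few bit operations per lookup.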

I’m inclined to give it a try though :slight_smile:

Howdy!

I started to wonder about this same subject about a week ago, and it looks like I’m coming around at a good time.  :slight_smile:

I watched the video, good stuff! It’s a different approach than what I’ve tried so far, but after much toil I basically came down to 2 scenarios… I would either draw polygons, apply their shape as a physics body, and fill them with graphics from an image file, or draw graphics from an image file and apply a certain physics body shape to them. What’s the difference? I guess the point of reference?  :wacko:

Anyhow, I’ve been eager to get this to work in Corona, and it works almost perfectly for my needs using graphics.newOutline(). My problem (so far) is that this function only reads an actual image file, not a display object.

Any possible way to get this function to read an image already loaded in memory, say from a display object, such as a snapshot?

My idea is basically to use an image as reference for creating the physics body using graphics.newOutline(), modify the image while in memory and read (without saving it on disk) the changes to create another physics body. 

Hi Juni!

Sorry, since the newOutline() method needs a file path to the image, I do not see a way to use a display object with it :frowning:
