Tinting Transparent Pixels???

I’ve been scratching my head over this one. I have a tile engine in progress, and many of the tiles have transparent areas in them. Applying a tint to a tile only tints the non-transparent pixels. I need to apply a tint to the ENTIRE tile, including transparent pixels, in order to implement a lighting/shadow system.

At the moment I have a second array of blank newRects on top of every tile, whose color and alpha I change to achieve the effect, but I’ve run into the performance limits of the iPad 2! Tinting the pre-existing tiles would cut the number of simultaneous on-screen objects in half.
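Roughly, the workaround looks like this (a sketch only, not my exact code; the names are illustrative and I’m assuming the standard newRect/setFillColor calls with 0-255 color values):

    -- Rough sketch of the overlay workaround: one tintable rect per tile.
    local function addShadowRect( tile, tileSize )
        local shadow = display.newRect( 0, 0, tileSize, tileSize )
        shadow.x, shadow.y = tile.x, tile.y   -- sit the rect directly over the tile
        shadow:setFillColor( 0, 0, 0 )        -- black "shadow" tint
        shadow.alpha = 0.5                    -- how dark this tile's lighting is
        return shadow
    end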

Any help would be greatly appreciated! [import]uid: 99903 topic_id: 29569 reply_id: 329569[/import]

Wouldn’t it work if the transparent area were something around 90% transparent?

Tinting transparency is not possible, since there is nothing to tint :wink:

Joakim [import]uid: 81188 topic_id: 29569 reply_id: 118692[/import]

I tried that, but it did not achieve the desired effect. Now that I think of it, the desired effect would not be possible unless the alpha channel of those pixels changed.

At the moment I have 910 shadow rects and up to 910 world tiles on the screen at any one time, and even the iPad 2 just doesn’t quite keep up with all of this. I’ve found bunches of ways to optimize other tasks, but this seems to be an OpenGL performance limit I’m hitting. [import]uid: 99903 topic_id: 29569 reply_id: 118702[/import]

Can’t you just load only the assets you currently need?

Joakim [import]uid: 81188 topic_id: 29569 reply_id: 118706[/import]

That’s just it, though. Unless I can find a way to combine the tile graphic and the shadow in a single display object, I’ll need all or most of those objects. I can’t design shaped shadows ahead of time because the world can be modified by the player. Light, shadow, and so forth will depend on the player placing torches and adding or removing walls, and have to be updated in near-realtime. [import]uid: 99903 topic_id: 29569 reply_id: 118707[/import]

Why can’t you modify the world when the user interacts? There must be a better approach and I am sure that you can sort it out :slight_smile:

J [import]uid: 81188 topic_id: 29569 reply_id: 118731[/import]

I can’t really help with the tint problem but figured I’d chime in from my own tile related research.

dyson, my experiments definitely have led me to believe there is a performance ceiling somewhere past 1k tiles depending on the platform. For some reason my performance order looks like this:

(Worst) 3GS > 4 > iPad 1 > iPad 2 > iPad 3 (Best)

I can’t quite explain why I seem to get better performance on the tablets. I filed a bug related to it (as detailed in the link).

Anyway, to maintain performance it seems to be essential to use a) imagesheets and b) imageGroups. imageGroups in particular came with a pretty substantial performance boost. Beyond that my suspicion is that the only way to maintain performance is with some sort of custom paging solution (that is, loading and unloading chunks of tiles from memory as you move around the tilemap.) [import]uid: 41884 topic_id: 29569 reply_id: 118857[/import]

I put together a video to demonstrate the scale of the challenge: http://www.youtube.com/watch?v=mNMxUarjTGk It shows what I have running so far, minus the lighting system. The 910 tiles I’m talking about are not the level; they are all onscreen simultaneously (save for a few around the edges). The actual level is 1100 by 800, or 880,000 tiles.

I’ve given up trying to tint every tile on the screen. Instead I will combine tinting with the independent shadow tiles I’ve been using up till now. Hopefully it’ll be enough of a compromise on the number of tiles to bring performance up to reasonable levels. As for image groups, I’d use them if they supported having members from multiple different image sheets, which I don’t think they do. I have 32 512x512 image sheets in use so far.

Out of curiosity, how much faster than the iPad 2 is the iPad 3? It’d be great if the hardware in people’s hands had improved enough to make this problem go away by the time I get around to publishing games based on this engine.
[import]uid: 99903 topic_id: 29569 reply_id: 119094[/import]

Er…wow. I have some questions if you don’t mind…

  1. Is the youtube video captured from iPad 2 or from the simulator? Any performance difference you notice?

  2. Is this a tile engine of your own design? 880,000 tiles seemed impossible to me until I saw your video. With Lime I’m seeing significant performance drops at 2k and 4k total tiles (48x48 tiles).

  3. What size of tiles? The imagesheet you mention is pretty large, of course, but I can’t tell tilesize from the video.

  4. Are you using the Corona Physics engine? Or a roll-your-own translate state machine?

To answer your question, in typical use the iPad3 is clearly faster. The CPU hardware is not really that much faster, but in order to accommodate the retina screen Apple went with a quad PowerVR processor (I think it’s the same one used by PlayStation Vita?), which for Corona purposes means you get a lot more headroom with graphics processing (eg: tiles). More pixels to process, of course, but it still feels faster to me.

I would not particularly rely on this speed becoming standard though. If the 7" iPad becomes a reality there’s a significant chance it will just be a smaller iPad2, meaning that 1024x768 screen and performance profile will get another few years of life. [import]uid: 41884 topic_id: 29569 reply_id: 119110[/import]

I completely forgot about the rumored 7" iPad… I think it’ll be a mistake on Apple’s part, personally. I’ll have to come up with some way for the user to fine-tune their own performance, sort of like the graphics settings for PC games, I think…

  1. The youtube video is captured from the simulator. I still haven’t figured out how to capture video from an iOS device, short of pointing a camera at its screen. The iPad 2 seems just as fast working with 910 tiles as the simulator. Performance on the device is normally excellent, though not quite the full 30fps. The engine defers tasks to perform more manageable bits of them over several frames. If the player forces the engine to do too much at once, things slow down a little; it is not normally possible for the player to do this. Performance does not vary with world size; the engine does not care how large the world is, so long as it fits in the device’s memory.

  2. This is something I’ve come up with myself over the past few months. From what you’ve said, it sounds like Lime is probably creating display objects for every tile in the map, which is going to bog down quickly. My engine reuses the same 35x26 grid of display objects. No matter how large the level, there will never be more than those 910 on-screen tiles (ignoring the unfinished lighting system for the moment).

  3. The source tiles are 16x16, but the tiles in the image sheets are scaled to 32x32. No matter what I did to the original 16x16 tiles or the settings in Corona, scaling them by 2 within Corona always made them fuzzy and unpleasant. If there were a way to prevent ANY interpolation during image scaling (and there really should be), I would be able to use the original 16x16s and just scale them in-game.

  4. The physics is done with simple translations. [import]uid: 99903 topic_id: 29569 reply_id: 119120[/import]

Interesting.

  1. Yes, Lime is using nested display objects, but I guess it’s just not clear what you mean by the 35x26 grid. I mean, regardless of method you can only show as many tiles as you have screen space. Do you mean that your engine only keeps tiles in memory that are on-screen (basically your own culling solution)?

Corona Labs claims their imageGroups system has some sort of culling advancements going on, which might explain the performance boost, but I still couldn’t get more than 4k tiles in memory without the framerate dropping heavily. (I’m building a slow-moving RPG, and I’m seeing the performance hit with just the tiles and a moving character. No other sprites or any audio/networking of any kind.)
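For reference, the basic imagesheet + imageGroup setup I’m talking about looks roughly like this (a sketch only; the filename, frame size, and frame count are placeholders):

    -- Rough sketch: one image sheet feeding one image group.
    local sheetOptions = { width = 32, height = 32, numFrames = 256 }
    local tileSheet = graphics.newImageSheet( "tiles.png", sheetOptions )

    -- An imageGroup batches everything drawn from that single sheet.
    local tileGroup = display.newImageGroup( tileSheet )

    -- Tiles created from the sheet and inserted into the group get batched together.
    local tile = display.newImage( tileSheet, 1 )   -- frame 1 of the sheet
    tileGroup:insert( tile )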

  2. There are two factors in the unpleasantness you describe.

a. OpenGL blending precision errors, which cause a weird translucent outline around tiles. This can be dealt with by setting the imagesheet option border to 1 and then creating a pixel-duplicated 1px border around every tile in the sheet (see the sketch after item b). Unless you are using TexturePacker this is pretty tedious, but it works.

b. Some sort of free trilinear filtering, causing the interior of scaled images to blur a bit. I tried asking Corona Labs about this in another thread but never received a reply. The only way to get true nearest-neighbor scaling is to provide the @2x art yourself, sadly, which I agree is unfortunate; in my case I want to use 16x16 tiles but have to provide 48x48 and 96x96 tiles to eliminate the filtering.
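Re (a), the sheet setup I mean looks roughly like this (a sketch only; the frame count and content sizes are placeholders for whatever your padded sheet actually is):

    -- Each frame in the padded sheet carries a 1px duplicated border,
    -- and the sheet options declare that padding.
    local sheetOptions =
    {
        width = 32, height = 32,      -- the usable frame size
        numFrames = 256,
        border = 1,                   -- 1px padding around every frame
        sheetContentWidth = 544,      -- e.g. 16 frames * (32 + 2) px per row in this made-up layout
        sheetContentHeight = 544,
    }
    local tileSheet = graphics.newImageSheet( "tiles-padded.png", sheetOptions )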

[import]uid: 41884 topic_id: 29569 reply_id: 119125[/import]

Impressive video :slight_smile:

@ dyson122: So, what you’re saying is you have a constant number of DisplayGroups covering your on-screen area and you just dynamically populate these display groups with Sprites at run time? [import]uid: 80469 topic_id: 29569 reply_id: 119142[/import]

And here’s that device test video I mentioned.
http://www.youtube.com/watch?v=-oEPuqDZWWE [import]uid: 99903 topic_id: 29569 reply_id: 119154[/import]

So, for tinting: can you just use a translucent image covering the whole screen that tints it the way you want? [import]uid: 160496 topic_id: 29569 reply_id: 119155[/import]

dyson122: Wow, that vid is pretty incredible. It pretty much seals that I need to write a tile engine (unless yours is available, heh). Thanks for the details on your array approach. I can see how that would work. The problem for me will probably be how to move the layer according to a player, sync up multiple layers, etc., but it should be an interesting quest.

Moreover, it’s gonna be a lot of work to get something even close to Lime working, but the performance will be worth it.

mike470: That’s not a bad idea, if he needed everything to tint the same way. But if it’s on a specific tile-by-tile basis, it could be difficult to make it work right. [import]uid: 41884 topic_id: 29569 reply_id: 119163[/import]

richard9/dyson122: OK then, since (as someone pointed out already) transparent means invisible :slight_smile: why not make the pixels that are currently transparent slightly translucent instead, and tint them?

In general, I find that tinting is very limited anyway. It works best if you have white or gray templates that you are tinting, or if you use tinting to reduce a multi-color image’s brightness. Color-tinting effects on multi-color images are weird and not very useful.
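Roughly speaking, tinting is a per-channel multiply of the image color by the tint color, which is why white and gray templates work best (0-1 scale here):

    white  (1.0, 1.0, 1.0) x red tint    (1.0, 0.0, 0.0) = (1.0, 0.0, 0.0)  -> clean red
    gray   (0.5, 0.5, 0.5) x red tint    (1.0, 0.0, 0.0) = (0.5, 0.0, 0.0)  -> darker red
    yellow (1.0, 1.0, 0.0) x purple tint (0.5, 0.0, 0.5) = (0.5, 0.0, 0.0)  -> muddy dark red, not purple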

What would be nice is being able to specify the shade that you want to tint :slight_smile: Like “change all yellows to purples”… [import]uid: 160496 topic_id: 29569 reply_id: 119167[/import]

Yes, I use a custom culling solution.

The problem I see isn’t so much about fitting more or fewer tiles on a screen, but about having to perform operations on all the tiles in the world. When you make 4K display objects as tiles, and the player moves across a scrollable level, all of those tiles have to move through the screen space. Logically, the CPU has to access and modify the positions of 4K tiles, whether they are visible on the screen or not. If you increase the world size to 8K, the CPU has to do twice as much. This is only made worse by the large performance cost of accessing object properties. If the game is using physics to check the player for collisions with certain tiles, the problem gets much worse, and again the workload goes up as the number of display objects does.

My solution is to keep track of the player’s location in the world, and to store all of the world’s information in multiple gigantic two-dimensional arrays. When the player moves right, the 35x26 tiles move left. When the leftmost tiles move off the screen, they are deleted and a column of new tiles is created along the right. The game counts forwards along the gigantic world arrays from the player’s stored location in order to get the relevant x and y indexes, so that it can figure out what those tiles are supposed to be and load their graphics. And that is it; the display objects are involved in no other data storage or operations whatsoever.
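In rough, illustrative Lua (not the engine’s actual code; the names and numbers are placeholders), the recycling step when scrolling one tile to the right looks something like this:

    -- Illustration only: recycle the leftmost on-screen column after scrolling
    -- one tile to the right.  world[][] holds tile IDs for the whole level.
    local function recycleLeftColumn( screenTiles, world, tileSheet, tileSize, newWorldCol, topWorldRow )
        for row = 1, 26 do
            -- Delete the display object that just left the screen on the left...
            screenTiles[row][1]:removeSelf()
            -- ...and build its replacement along the right edge from the world array.
            local tileID = world[newWorldCol][topWorldRow + row - 1]
            local newTile = display.newImage( tileSheet, tileID )
            newTile.x = 34 * tileSize              -- rightmost of the 35 columns (placeholder math)
            newTile.y = (row - 1) * tileSize
            screenTiles[row][1] = newTile          -- real code would also rotate the column indexes
        end
    end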

Collision detection, rather than accessing the properties of the display object tiles, which is expensive, compares the player’s location to the world array information to either side of that location. This is nice and fast, it doesn’t access display objects, and it doesn’t rely on the complex mysterious maths and multiple iterations of the physics engine; it’s all simple math.

So if playerLocX = 100 and playerLocY = 100, and the player is running right, all the game has to ask is “does world[playerLocX + 1][playerLocY] = obstacle?” It doesn’t bother checking with what OpenGL happens to have on the screen, or the properties of any of those things.
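Or, as a tiny sketch (OBSTACLE and the variable names are just placeholders):

    -- Illustration of the array-based check before letting the player step right.
    local function blockedToRight( world, playerLocX, playerLocY )
        return world[playerLocX + 1][playerLocY] == OBSTACLE
    end

    -- if not blockedToRight( world, playerLocX, playerLocY ) then
    --     playerLocX = playerLocX + 1   -- the move is allowed; scroll the tiles to match
    -- end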

I’m not saying Lime does any of these things or doesn’t. I’ve never used Lime. But I think these things are the major causes of the poor performance we always run up against.

EDIT: The new youtube video is still processing, so it may be a few minutes. EDIT2: Youtube ate the video. Uploading a new one.

I have a constant number of imageRects covering the display area and dynamically update them, yes. I’m not using any groups at all at the moment. I suppose I should start using them. I can’t use image groups, but do the old display groups confer any notable performance gain? [import]uid: 99903 topic_id: 29569 reply_id: 119143[/import]

Each block will have its own light level, so a single large translucent rect wouldn’t work. I probably will use one to darken the whole of the background, but that is a relatively minor effect next to individually lighting the tiles.

I tried making the transparent pixels less transparent, but it just didn’t give the right kind of effect. It makes sense now that I think about it, but I was grasping at straws at the time. Anyway, I’ve just about settled on a solution. Originally I had wanted the tiled lighting to extend into the empty spaces in order to cover the backgrounds, but I’ve decided this is unnecessary. The backgrounds are far in the distance, so placing a torch would not be expected to have any impact on them. Suddenly, the whole problem pretty much evaporates: I can tint the images, and the transparent edges become desirable instead of undesirable. Colored lighting is also in the works.
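The per-tile tint itself is then just a fill-color tweak, roughly like this (a sketch only; setFillColor with 0-255 values stands in for whatever the tint call ends up being, and lightLevel is a made-up 0-1 value from the lighting pass):

    -- Illustration: darken/colorize a tile image according to its block's light level.
    local function applyLight( tile, lightLevel, r, g, b )
        r, g, b = r or 255, g or 255, b or 255   -- optional colored light, defaults to white
        tile:setFillColor( r * lightLevel, g * lightLevel, b * lightLevel )
    end

    -- applyLight( someTile, 0.4 )                 -- dim white light
    -- applyLight( someTile, 0.8, 255, 160, 64 )   -- warm torch light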

Selling the engine is something I might consider doing if it was in a more complete, user-friendly state and there was some way to protect my intellectual property. As things are, I don’t really have time to deal with all of that.

Anyway, thanks for all the suggestions. It’s always useful to bounce ideas around and get an outside perspective. [import]uid: 99903 topic_id: 29569 reply_id: 119240[/import]

I’m working on making my engine user friendly enough to sell, for anyone still interested or following this thread. I’m collecting feedback and feature requests and Tiled maps for testing purposes here: https://developer.coronalabs.com/forum/2012/10/29/million-tile-engine-any-interest-feature-requests [import]uid: 99903 topic_id: 29569 reply_id: 129079[/import]