I believe image culling helps performance by reducing fill rate; that is, the hardware doesn't spend any time rendering pixels of textures that have been culled. You are correct that textures which are off-screen and culled still take up memory; they are just not being rendered.
If your giant 2048x2048 texture is being rendered full size on, say, a 320x480 screen, all the off-screen pixels must still go through the OpenGL texture rendering pipeline on the device, which consumes hardware resources. If you broke your giant texture up into smaller squares and stitched them back together, there could be a performance boost, since any of the squares that are completely off-screen would be culled and thus wouldn't go through the hardware rendering process.
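To make the culling idea concrete, here is a minimal sketch (in Python, with made-up names — this isn't Corona's or OpenGL's actual API) of the per-tile test a renderer performs: a tile is only submitted for drawing if its rectangle overlaps the screen rectangle.

```python
# Hypothetical sketch of per-tile culling: a tile is drawn only if its
# axis-aligned rectangle overlaps the screen rectangle.

def intersects(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned rectangle overlap test."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

SCREEN_W, SCREEN_H = 320, 480
TILE = 512  # tile size after splitting the 2048x2048 background

def visible_tiles(offset_x, offset_y, texture_size=2048, tile=TILE):
    """Return (col, row) of the tiles that overlap the screen for a scroll offset."""
    tiles = []
    for row in range(texture_size // tile):
        for col in range(texture_size // tile):
            tx, ty = offset_x + col * tile, offset_y + row * tile
            if intersects(tx, ty, tile, tile, 0, 0, SCREEN_W, SCREEN_H):
                tiles.append((col, row))
    return tiles

# With the background's top-left at the origin, only one of the sixteen
# 512x512 tiles overlaps the 320x480 viewport; the other fifteen are skipped.
print(visible_tiles(0, 0))  # → [(0, 0)]
```

The point is that with one monolithic 2048x2048 texture, that single rectangle always overlaps the screen, so every pixel goes through the pipeline; with tiles, most rectangles fail the test and never get drawn.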
On the other hand, there's a potential performance hit for increasing the number of texture objects. So if you did something crazy like break up your background into 2048x2048 little one-pixel textures, you would see your performance plummet, since the renderer (and the culling routine) would have to sort through over four million texture objects each frame. So there is a balance between optimizing for fill rate and optimizing for the number of texture objects, and it depends on the application you are designing as well as the hardware you are running it on.
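The object-count side of that trade-off is just arithmetic; a quick sketch of how fast the count grows as tiles shrink:

```python
# Object counts for different tile sizes of a 2048x2048 background.
# Larger tiles mean less per-object overhead; smaller tiles cull more
# precisely but multiply the objects the renderer must visit each frame.

TEXTURE = 2048

for tile in (2048, 1024, 512, 64, 1):
    per_side = TEXTURE // tile
    count = per_side * per_side
    print(f"{tile:>4}px tiles -> {count:>9,} texture objects")
```

This prints 1, 4, 16, 1024, and 4,194,304 objects respectively, which is why something in the middle (like 512x512) tends to be the sweet spot.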
Now, in your particular example, if all you have is a single large background and a few physics bodies on top of it, it might not be worth the trouble to cut up and stitch together your background to take advantage of culling, since I doubt you are running into any rendering bottlenecks with such a simple scene. But if you are expecting to run the game on older devices that don't support 2048x2048 textures, you should at least break the background up into four 1024x1024 textures. And if you're going to do that, you might as well break it up into sixteen 512x512 textures so culling would come into play and you might see an actual performance boost on older devices.
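The stitching step itself is mechanical; here's a small sketch (the filenames are hypothetical) of the placements for sixteen 512x512 tiles that reassemble the 2048x2048 background:

```python
# Positions for sixteen 512x512 tiles that reassemble a 2048x2048
# background, top-left origin. Tile filenames are made up for illustration.

TILE, GRID = 512, 4  # 2048 / 512 = 4 tiles per side

def tile_layout():
    """Yield (filename, x, y) for each tile in row-major order."""
    for row in range(GRID):
        for col in range(GRID):
            yield (f"bg_{col}_{row}.png", col * TILE, row * TILE)

for name, x, y in tile_layout():
    print(name, x, y)
```

In practice you'd slice the source image into those files in your image editor (or a build script) and create one display object per tile at those coordinates; the engine's culling then handles the rest.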