display.newLine - what is the maximum number of vertices?

I’m making some really long lines and display.newLine seems to stop working somewhere between 975 and 1,070 vertices (it’s inconsistent).  Nothing else is affected, but the line fails to generate.  Is there a hard limit to the number of vertices that can be generated with display.newLine?

Mac OS 10.12.3

Corona v2017.3040

also tested on iOS devices

Are you creating 1k individual lines or one really, really long line?  If the latter, are you adding the vertices on creation or by appending them?

If there is a hard limit (Corona would need to confirm), I wouldn’t expect it to change.  It may be more of a memory limitation if it keeps changing?

Good questions.  It’s one long line, initially created as a single short line and then appended hundreds of times.

The vertices are in one long table, with the odd indices holding x coordinates and the even indices holding y coordinates.

I misspoke in my original post, 975 to 1070 is the number of values in the table, not vertices (divide by 2 for vertices).

[lua]
local function createOutline( vertices )
    local v = vertices  -- table with about 1,000 values

    -- first 2 vertices
    local outline = display.newLine( v[1], v[2], v[3], v[4] )

    -- start at the 3rd vertex and step by 2; end at #v - 1 so the
    -- final x,y pair is included
    for i = 5, #v - 1, 2 do
        outline:append( v[i], v[i+1] )
    end

    -- return it so the caller can keep a reference
    return outline
end
[/lua]

I think the solution will be to build the long line out of smaller lines (maybe 100 vertices each), but if there is a hard limit it would be good to know.
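
Something like this is what I mean by smaller lines (rough, untested sketch; maxVerts is just an arbitrary chunk size, not a known Corona limit):

[lua]
-- Split one long vertex table into several shorter display.newLine objects.
-- Chunks share their boundary vertex so the outline looks continuous.
local function createChunkedOutline( v, maxVerts )
    local step = maxVerts * 2                 -- values per chunk (x,y pairs)
    local i = 1
    while i < #v - 2 do
        local last = math.min( i + step - 1, #v )
        local line = display.newLine( v[i], v[i+1], v[i+2], v[i+3] )
        for j = i + 4, last - 1, 2 do
            line:append( v[j], v[j+1] )
        end
        i = last - 1                          -- reuse the previous chunk's last vertex
    end
end
[/lua]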

I think the problem lies elsewhere in your code.  I just put this in an empty main.lua and it worked fine:

[lua]
local outline = display.newLine( 1, 1, 2, 2 )
for i = 1, 400 do
    for j = 1, 800 do
        outline:append( j + i, i )
    end
end
[/lua]

That’s 320k vertices drawn with no errors.

You are right, Adrian!  I’m generating a huge landscape using a polygon with a graphic fill, a chain physics body, 1k ground-cover objects and an outline.  I noticed a delay in the app loading as the CPU processes all those details.  I decided to put a 1-second delay on the code generating the outline (display.newLine) and that did the trick.  I’m not sure if I can streamline the code to load the display environment more efficiently, but I’m happy to have this workaround for now.  Thanks!
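
For anyone else hitting this, the deferral is just a one-liner, along these lines (sketch; createOutline and the vertex table are from my earlier snippet, and 1000 ms is simply a delay that happened to work for me):

[lua]
-- Build the outline after the rest of the scene has had a chance to load.
timer.performWithDelay( 1000, function()
    createOutline( vertices )
end )
[/lua]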

Remember, devices have much less CPU power than dev machines, so make sure you regularly test on medium-spec devices if you are aiming for mobile.

If you need any optimising, I can try to help.  What I had to do in my game (as the playing area is so large) is handle the culling of assets manually, before the Corona render pipeline kicks in.  I split my game into lots of smaller quads, worked out whether each quad was visible or not, and showed/hid all the affected assets accordingly.  This proves much faster than just expecting Corona to work out whether to draw or not.  Some form of partitioning might help you?
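
Very roughly, the idea looks like this (simplified sketch, not my production code; each quad here is assumed to be a display group plus its world-space bounds):

[lua]
-- Toggle quad visibility based on a simple bounds-vs-view overlap test.
local function updateCulling( quads, camX, camY, viewW, viewH )
    local left, right  = camX - viewW * 0.5, camX + viewW * 0.5
    local top,  bottom = camY - viewH * 0.5, camY + viewH * 0.5
    for i = 1, #quads do
        local q = quads[i]  -- { group = displayGroup, xMin, xMax, yMin, yMax }
        q.group.isVisible = q.xMax > left and q.xMin < right
                        and q.yMax > top  and q.yMin < bottom
    end
end

-- Run this once per frame from an "enterFrame" listener.
[/lua]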

I’ve got a lot of tinkering to do.  My thought was to make a huge polygon and fill it with a tiled texture to save memory and create a seamless landscape.  Originally, I was using smaller landscapes and tiling them together.  Perhaps a little of each approach would be optimal: larger landscape chunks, culled to a reasonable size.
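
For the tiled-fill part, what I have in mind is roughly this (sketch; landscapeVertices and the tile filename are placeholders, and the repeat wrap modes need a power-of-two sized image):

[lua]
-- Fill one large polygon with a repeating ground texture.
display.setDefault( "textureWrapX", "repeat" )
display.setDefault( "textureWrapY", "repeat" )

local landscape = display.newPolygon( 0, 0, landscapeVertices )
landscape.fill = { type = "image", filename = "groundTile.png" }
landscape.fill.scaleX = 0.1  -- tweak so the tile repeats across the shape
landscape.fill.scaleY = 0.1

-- restore the defaults so other images are not affected
display.setDefault( "textureWrapX", "clampToEdge" )
display.setDefault( "textureWrapY", "clampToEdge" )
[/lua]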

How much texture memory does Designer City use at its peak?  How much system or main memory is used?  Those would be good real-world benchmarks for me to aim for.

For retina devices the standard load is roughly 90MB of Lua memory and 30-100MB of texture memory, depending on the variety of buildings placed.  For non-retina devices, quarter the texture memory.  Yeah, I have real trouble with 512MB devices and anything made by Alcatel.

This obviously grows when overlay scenes are loaded.
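
If you want to compare against your own project, both figures are easy to sample at runtime; a rough sketch:

[lua]
-- Print Lua memory (KB) and texture memory (MB) every five seconds.
local function printMemory()
    collectgarbage( "collect" )
    local luaKB = collectgarbage( "count" )
    local texMB = system.getInfo( "textureMemoryUsed" ) / ( 1024 * 1024 )
    print( string.format( "Lua: %.0f KB  texture: %.1f MB", luaKB, texMB ) )
end

timer.performWithDelay( 5000, printMemory, 0 )  -- 0 iterations = repeat forever
[/lua]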

The biggest change I’ve made recently is to load all (used) images into a texture buffer and then paint on demand.  This is much less memory intensive and a chunk faster on render speed.
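
In Corona terms that is along the lines of graphics.newTexture: keep the texture resident and create display objects from it only when needed.  A rough sketch (the filename and sizes are placeholders, not my actual assets):

[lua]
-- Keep a texture in memory and "paint" copies of it on demand.
local tex = graphics.newTexture( { type = "image", filename = "building.png" } )
tex:preload()

local function paintBuilding( x, y )
    local img = display.newImageRect( tex.filename, tex.baseDir, 64, 64 )
    img.x, img.y = x, y
    return img
end

-- When nothing on screen uses the texture any more:
-- tex:releaseSelf()
[/lua]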

I tried making all my assets as atlases, but the sum was 300MB+ and pointless; why load assets that aren’t being used?  My game engine loads resources strictly on demand.  If a player loads the game, collects resources, and closes it (a common usage pattern), the game doesn’t bother loading resources that would render outside the current screen dimensions.

Note: a chunk of this logic is currently being beta tested and will be released next week.

Current production memory requirements are some 25% higher than the aforementioned stats.  Optimisation is always ongoing!

Thanks!  That is really helpful to know.

Feel free to ping me some code and I’ll test it for you… I can’t guarantee timings as I’m real busy but my email is adrian@spheregamestudios.com

Awesome, thanks!
