Look up speed for large tables, is it worth optimising?

I am at the stage where I am re-writing and optimising my tile engine (with a possible view on selling it, but that is not a topic for now, suffice to say my tile engine doesn’t slow down drawing regardless of map size).

The simplest way for me to set up a map is in Tiled, using its export-to-Lua option.
However, the actual tile data is exported as a linear table, not a nested set.

Now while the maths to convert between them is trivial, it has led me to wonder about the speed implications of setting up the tile data in various formats.

The one crucial question for me is whether Lua is clever enough to realise that a table is purely a linear list of elements (e.g. we start at [1] and go up to [n] or whatever). If so, then apart from memory concerns it might not matter how big the table itself is, since the lookup could just be index * element pointer size.

Or, whether I am better off pre-processing the data into smaller chunks. The options would be something like:

  1. Leave it as a single list - I do my lookups as [(y - 1) * mapwidth + x] (for row-major data) or something similar
  2. Process it into an [x][y] list
  3. Subdivide further (assuming the lookups are a bit prohibitive, i.e. I want to help them where possible)
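For concreteness, here is a hedged sketch of what options 1 and 2 might look like. The names (`data`, `grid`, `mapWidth`) are hypothetical, and it assumes 1-based indices with row-major order, which is how Tiled lays out its flat export:

```lua
-- Option 1: single linear list, row-major, 1-based (as Tiled exports it).
-- `data` is the flat tile list, `mapWidth` the row length (hypothetical names).
local function tileAtLinear(data, mapWidth, x, y)
    return data[(y - 1) * mapWidth + x]
end

-- Option 2: pre-processed nested [x][y] table.
local function tileAtNested(grid, x, y)
    return grid[x][y]
end

-- One-off pre-processing step to build the nested form from the flat one.
local function toNested(data, mapWidth, mapHeight)
    local grid = {}
    for x = 1, mapWidth do
        grid[x] = {}
        for y = 1, mapHeight do
            grid[x][y] = data[(y - 1) * mapWidth + x]
        end
    end
    return grid
end
```

Either way the lookup itself is cheap; the difference is whether you pay a pre-processing pass and the extra memory for the inner tables.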

Number 2 would be the simplest from a coding point of view (in regards to reading and writing the data in the game itself), but I'd really like to avoid pre-processing if possible, for a variety of reasons, even though the individual lookups then become slower on my side (I can optimise around that).

The reason this is important is to avoid pre-processing if at all possible, to make testing levels (both for myself and for others) much simpler.
However, it may become critical, because I am potentially dealing with maps of many tens of thousands of elements (or more, must remake Zelda!), and I need to maintain draw speed.

[import]uid: 46639 topic_id: 19943 reply_id: 319943[/import]

I can’t speak for Corona’s implementation, but Lua is highly optimized for dealing with its native data structures. Even though arrays in Lua are merely tables, they are tables optimized for 1…n lookup. I’m guessing you shouldn’t need to do much in the way of optimizing table structures, but please let me know what you discover. [import]uid: 71767 topic_id: 19943 reply_id: 77679[/import]
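As a hedged illustration of that point: a table filled with consecutive integer keys starting at 1 is stored in Lua's internal array part, so indexing it is a direct offset rather than a hash probe. The practical upshot is simply to fill the list densely from 1 upward and avoid nil holes (the tile values below are placeholders):

```lua
-- Build a dense 1..n tile list; consecutive integer keys from 1
-- keep the data in the table's array part, so tiles[i] is O(1).
local n = 100000
local tiles = {}
for i = 1, n do
    tiles[i] = i % 16          -- placeholder tile IDs
end

-- The length operator is only well-defined because there are no nil holes.
assert(#tiles == n)
local t = tiles[54321]         -- direct array-part access, no hashing
```

A sparse table (say, only every hundredth index set) would instead push those keys into the hash part, which is still fast but no longer a plain offset.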

If you haven’t checked this out, it’s a worthwhile read: Lua Performance Tips by Roberto Ierusalimschy (creator of Lua) http://www.lua.org/gems/sample.pdf

It has a section covering tables. [import]uid: 27183 topic_id: 19943 reply_id: 77682[/import]

Thanks a lot guys.
That link did confirm I need not worry about organising the data. In fact I was being silly: I’d forgotten you can traverse a list like this using just addition and subtraction within the core XY loops, so speed really isn’t an issue now.
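The traversal trick mentioned above can be sketched like this (`drawTile` and the map names are hypothetical): because the flat index simply increments by one as you walk across a row, the per-tile multiply drops out of the inner loop entirely.

```lua
-- Sketch: walk a flat row-major tile list with a running index,
-- using only addition in the inner loop instead of recomputing
-- (y - 1) * mapWidth + x for every tile.
local function drawMap(data, mapWidth, mapHeight, drawTile)
    local i = 1                      -- running flat index
    for y = 1, mapHeight do
        for x = 1, mapWidth do
            drawTile(x, y, data[i])  -- same tile as data[(y-1)*mapWidth + x]
            i = i + 1
        end
    end
end
```

For a scrolling window you would presumably start `i` at the first visible tile of a row and add `mapWidth` to jump down to the next row, keeping the inner loop addition-only.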

Awesome! [import]uid: 46639 topic_id: 19943 reply_id: 77759[/import]