There seems to be an API limit on the size a table can have before triggering an error.
I've seen this before when sending a body using SOAP, so perhaps it is a related issue.
While it is possible to design a workaround, it would be great if the Coronium API could automatically split and merge the payload instead.
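To illustrate the kind of workaround I mean, here is a minimal client-side sketch. `sendBatch` is a hypothetical stand-in for the actual Coronium call, and the batch size is just a guess at a safe value:

```lua
-- Rough client-side split: send the rows in fixed-size batches instead
-- of one huge table. `sendBatch` is a hypothetical wrapper around the
-- real Coronium request; swap in the actual call for your setup.
local BATCH_SIZE = 20  -- stay under whatever the real limit turns out to be

local function sendInBatches(rows, sendBatch)
  local batch = {}
  for i = 1, #rows do
    batch[#batch + 1] = rows[i]
    if #batch == BATCH_SIZE or i == #rows then
      sendBatch(batch)  -- the server side would merge these back together
      batch = {}
    end
  end
end
```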
EDIT 1 - I am uncertain about this, so I will check further and edit this post once I have something conclusive.
EDIT 2 - Yes, there is a limit, and it is very small, much smaller than what the SOAP body allowed.
To clarify, I am sending in a table with about 20 columns.
When using a SOAP body I was able to send around 1000 rows per call.
With Coronium the limit sits somewhere between 20 and 25 rows.
That is a major setback for me, so an automatic API split and merge would be a godsend.
A max string size issue perhaps?
EDIT 3 - Tried it with a table that has only a single column and got the same error, so maybe string length is not the issue after all?
EDIT 4 - Tried to encode/decode the tables as JSON in a feeble attempt to see if it matters, but it does not.
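For reference, this is roughly what I tried, using Corona's built-in json module; the payload shape here is just an example, the real table has about 20 columns:

```lua
local json = require("json")  -- Corona's built-in JSON module

-- Example payload shape, standing in for the real ~20-column rows.
local payload = {
  { id = 1, name = "row one" },
  { id = 2, name = "row two" },
}

-- Round-trip the table through a JSON string before sending, on the off
-- chance the encoding step was the problem. It made no difference.
local encoded = json.encode(payload)
local decoded = json.decode(encoded)
```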
This is the log entry:
2017/09/23 08:12:48 [warn] 1596#0: *231 a client request body is buffered to a temporary file /usr/local/coronium/nginx/client_body_temp/0000000015, client: neutralized IP, server: , request: "POST / HTTP/1.1", host: "neutralized IP:10001"
2017/09/23 08:12:48 [error] 1596#0: *231 lua entry thread aborted: runtime error: [string "coronium.input"]:0: Expected value but found invalid token at character 1
stack traceback:
coroutine 0:
[C]: in function ''
[string "coronium.input"]: in function 'request'
content_by_lua(coronium.locations.conf:111):2: in function <content_by_lua(coronium.locations.conf:111):1>, client: neutralized IP, server: , request: "POST / HTTP/1.1", host: "neutralized IP:10001"
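For what it's worth, the error text matches the format lua-cjson uses when the first byte of a body is not valid JSON, so my guess (unconfirmed, I don't know Coronium's internals) is that the body arrives truncated or empty and then fails to parse. A quick sketch of how to reproduce that exact message, assuming lua-cjson:

```lua
local cjson = require("cjson")  -- assuming Coronium parses input with lua-cjson

-- Any body whose first byte cannot start a JSON value fails with the
-- same message as in the log above:
local ok, err = pcall(cjson.decode, "@not-json")
print(ok, err)
-- false   Expected value but found invalid token at character 1
```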